NASA Astrophysics Data System (ADS)
Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod
2010-04-01
For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of the phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination, for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method when different single-tone frequency estimation techniques are used for phase estimation. The analysis is supplemented by simulation results.
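To make the mapping concrete, the sketch below shows a generic single-tone frequency estimator of the kind the comparison covers: a coarse FFT peak search refined by parabolic interpolation of the log-magnitude spectrum. It is a minimal stand-in for the techniques compared in the paper, not the authors' exact implementation; all names and parameters are illustrative.

```python
import numpy as np

def single_tone_frequency(x, fs):
    """Estimate the frequency of a single complex tone in noise.

    Coarse search: FFT magnitude peak. Fine search: parabolic
    interpolation on the log-magnitudes of the three bins around
    the peak. A generic stand-in for the single-tone estimators
    discussed in the abstract.
    """
    n = len(x)
    spectrum = np.abs(np.fft.fft(x))
    k = int(np.argmax(spectrum))
    # Log-magnitudes of the peak bin and its two neighbours.
    a = np.log(spectrum[(k - 1) % n])
    b = np.log(spectrum[k])
    c = np.log(spectrum[(k + 1) % n])
    delta = 0.5 * (a - c) / (a - 2 * b + c)  # sub-bin offset in [-0.5, 0.5]
    return (k + delta) * fs / n

# Example: a noisy 123.4 Hz tone sampled at 1 kHz.
fs, f0 = 1000.0, 123.4
t = np.arange(2048) / fs
x = np.exp(2j * np.pi * f0 * t) \
    + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))
print(single_tone_frequency(x, fs))  # close to 123.4
```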
NASA Astrophysics Data System (ADS)
Sehad, Mounir; Lazri, Mourad; Ameur, Soltane
2017-03-01
In this work, a new rainfall estimation technique based on the high spatial and temporal resolution of the Spinning Enhanced Visible and Infra Red Imager (SEVIRI) aboard the Meteosat Second Generation (MSG) satellite is presented. This work proposes an efficient rainfall estimation scheme based on two multiclass support vector machine (SVM) algorithms: SVM_D for daytime and SVM_N for nighttime rainfall estimation. Both SVM models are trained using relevant rainfall parameters based on optical, microphysical and textural cloud properties. The cloud parameters are derived from the spectral channels of the SEVIRI MSG radiometer. The 3-hourly and daily accumulated rainfall are derived from the 15-min rainfall estimates given by the SVM classifiers for each MSG observation image pixel. The SVMs were trained with ground meteorological radar precipitation scenes recorded from November 2006 to March 2007 over the north of Algeria, located in the Mediterranean region. Further, the SVM_D and SVM_N models were used to estimate 3-hourly and daily rainfall using a data set gathered from November 2010 to March 2011 over northern Algeria. The results were validated against collocated rainfall observed by a rain gauge network. The statistical scores given by the correlation coefficient, bias, root mean square error and mean absolute error showed good accuracy of the rainfall estimates obtained by the present technique. Moreover, the rainfall estimates of our technique were compared with two high-accuracy rainfall estimation methods based on MSG SEVIRI imagery, namely a random forests (RF) based approach and an artificial neural network (ANN) based technique. The present technique yields a higher correlation coefficient (3-hourly: 0.78; daily: 0.94) and lower mean absolute error and root mean square error values. The results show that the new technique assigns 3-hourly and daily rainfall with better accuracy than the ANN technique and the RF model.
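A minimal sketch of the classification stage follows, using scikit-learn's multiclass SVM. The feature layout, class labels and hyperparameters are assumptions for illustration; the paper's actual predictors are the optical, microphysical and textural cloud properties it lists.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training data: one row of cloud parameters per SEVIRI pixel
# (e.g. brightness temperatures, channel differences, texture measures),
# with rain-rate class labels derived from collocated radar scenes.
rng = np.random.default_rng(0)
X_day = rng.normal(size=(500, 6))      # daytime feature vectors
y_day = rng.integers(0, 3, size=500)   # e.g. 0 = no rain, 1 = light, 2 = convective

svm_d = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
svm_d.fit(X_day, y_day)  # SVC handles multiclass via one-vs-one internally

# At run time, classify each 15-min MSG image pixel, then accumulate the
# per-class rain rates into 3-hourly and daily totals.
classes = svm_d.predict(rng.normal(size=(4, 6)))
```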
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares based input force estimation technique. Kalman filter based input force estimation requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
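The following schematic sketch shows how a Kalman filter can be paired with a recursive least-squares (RLS) input estimator driven by the filter innovation, the general structure the abstract describes. The matrices, the forgetting factor and the way the innovation feeds the RLS update are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def kf_input_estimation(y_seq, A, B, C, Q, R, gamma=0.98):
    """Joint state / unknown-input estimation: a Kalman filter combined
    with an RLS input estimator driven by the innovation.

    A, B, C: state-space matrices; Q, R: noise covariances;
    gamma: RLS forgetting factor. A schematic sketch only.
    """
    n = A.shape[0]
    x, P = np.zeros(n), np.eye(n)
    u = np.zeros(B.shape[1])
    Pu = np.eye(B.shape[1]) * 1e3
    for y in y_seq:                       # y: measurement vector per step
        # Kalman prediction using the current input estimate.
        x = A @ x + B @ u
        P = A @ P @ A.T + Q
        # Measurement update.
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        innov = y - C @ x
        x = x + K @ innov
        P = (np.eye(n) - K @ C) @ P
        # RLS update of the input from the innovation (sensitivity H = C B).
        H = C @ B
        G = Pu @ H.T @ np.linalg.inv(gamma * np.eye(len(y)) + H @ Pu @ H.T)
        u = u + G @ innov
        Pu = (Pu - G @ H @ Pu) / gamma
    return x, u
```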
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentrations in a production scale fed-batch bioprocess. These methods are: i. estimation based on a kinetic model of overflow metabolism; ii. estimation based on a metabolic black-box model; iii. estimation based on an observer; iv. estimation based on an artificial neural network; v. estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although the number of measurements required is larger than that for the other methods. However, the required extra measurements are based on instruments commonly employed in an industrial environment. This method is used for developing model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod
2010-06-01
Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of the phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, such as Fourier transform followed by optimization, estimation of signal parameters by rotational invariance techniques (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF), in the HIM-operator-based method for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
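Of the techniques named above, MUSIC is the most compact to illustrate. The sketch below estimates a single tone's frequency from the noise subspace of the sample correlation matrix; the subspace order and grid resolution are illustrative choices, and the paper's IFEIF iteration is not reproduced here.

```python
import numpy as np

def music_tone(x, m=16, grid=4096):
    """MUSIC pseudospectrum peak for a single complex tone.

    m: correlation-matrix order (must exceed the number of tones).
    Returns the normalized frequency in cycles/sample. Illustrative only.
    """
    n = len(x)
    # Sample correlation matrix from sliding snapshots of length m.
    X = np.array([x[i:i + m] for i in range(n - m + 1)])
    Rxx = (X.conj().T @ X) / X.shape[0]
    w, V = np.linalg.eigh(Rxx)             # eigenvalues ascending
    En = V[:, :-1]                         # noise subspace (drop largest)
    freqs = np.linspace(-0.5, 0.5, grid, endpoint=False)
    a = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))   # steering vectors
    p = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)   # pseudospectrum
    return freqs[np.argmax(p)]
```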
Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.
Lee, Soojeong; Chang, Joon-Hyuk
2017-11-01
This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease the uncertainty of oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate systolic and diastolic blood pressure (SBP and DBP), the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief network (DBN)-deep neural network (DNN) models to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method mitigates estimation uncertainty such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the DBN-DNN single regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. This corresponds to performance improvements of 9.18% and 10.88% over the DBN-DNN single estimator. The proposed methodology improves the accuracy of BP estimation and reduces the uncertainty of BP estimation. Copyright © 2017 Elsevier B.V. All rights reserved.
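The bootstrap/Monte-Carlo CI step can be illustrated independently of the deep ensemble. The sketch below resamples a subject's handful of measurements, recomputes a simple mean estimate per pseudo-sample, and reads the CI off the resulting Monte Carlo distribution; the readings and the mean estimator are placeholders for the paper's DBN-DNN ensemble output.

```python
import numpy as np

def bootstrap_ci(bp_measurements, n_boot=2000, alpha=0.05, rng=None):
    """Bootstrap confidence interval for a BP estimate from a handful
    of oscillometric measurements (the abstract uses five per subject).

    Resamples with replacement, recomputes the estimate per
    pseudo-sample, and takes the Monte Carlo percentile interval.
    A generic sketch, not the paper's DBN-DNN ensemble pipeline.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(bp_measurements, dtype=float)
    boots = np.array([rng.choice(x, size=x.size, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return x.mean(), (lo, hi)

# Example: five hypothetical SBP readings (mmHg) for one subject.
est, ci = bootstrap_ci([118.0, 122.5, 120.1, 125.3, 119.4])
print(est, ci)
```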
Two ground-based canopy closure estimation techniques, the Spherical Densitometer (SD) and the Vertical Tube (VT), were compared for the effect of deciduous understory on dominant/co-dominant crown closure estimates in even-aged loblolly (Pinus taeda) pine stands located in the N...
NASA Astrophysics Data System (ADS)
Kumar, Shashi; Khati, Unmesh G.; Chandola, Shreya; Agrawal, Shefali; Kushwaha, Satya P. S.
2017-08-01
The regulation of the carbon cycle is a critical ecosystem service provided by forests globally. It is, therefore, necessary to have robust techniques for speedy assessment of forest biophysical parameters at the landscape level. It is arduous and time-consuming to monitor the status of vast forest landscapes using traditional field methods. Remote sensing and GIS techniques are efficient tools that can monitor the health of forests regularly. Biomass estimation is a key parameter in the assessment of forest health. Polarimetric SAR (PolSAR) remote sensing has already shown its potential for forest biophysical parameter retrieval. The current research work focuses on the retrieval of forest biophysical parameters of tropical deciduous forest, using fully polarimetric spaceborne C-band data with Polarimetric SAR Interferometry (PolInSAR) techniques. A PolSAR based Interferometric Water Cloud Model (IWCM) has been used to estimate aboveground biomass (AGB). Input parameters to the IWCM have been extracted from decomposition modeling of the SAR data as well as PolInSAR coherence estimation. Forest tree height retrieval utilized a PolInSAR coherence based modeling approach. Two techniques for forest height estimation - Coherence Amplitude Inversion (CAI) and Three Stage Inversion (TSI) - are discussed, compared and validated. These techniques allow estimation of forest stand height and true ground topography. The accuracy of the estimated forest height is assessed using ground-based measurements. The PolInSAR based forest height models performed poorly in discriminating forest vegetation from other cover, and as a result height values were also obtained in river channels and plain areas. Overestimation of forest height was also noticed in several patches of the forest. To overcome this problem, a coherence and backscatter based threshold technique is introduced to identify forested areas and suppress spurious height estimates in non-forested regions. IWCM based modeling for forest AGB retrieval showed an R2 value of 0.5, an RMSE of 62.73 t ha-1 and a percent accuracy of 51%. TSI based PolInSAR inversion modeling showed the most accurate results for forest height estimation. The correlation between the field-measured forest height and the tree height estimated using the TSI technique is 62%, with an average accuracy of 91.56% and an RMSE of 2.28 m. The study suggests that the PolInSAR coherence based modeling approach has significant potential for retrieval of forest biophysical parameters.
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
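A weighted-capacity-factor approximation can be sketched in a few lines: weight the plant's output in the highest-load hours by each hour's share of load, then normalize by capacity. The weighting rule, hour count and synthetic data below are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def weighted_capacity_factor(load, pv_output, pv_capacity, top_hours=100):
    """Approximate PV capacity value as the plant's capacity factor
    over the highest-load hours, weighted by each hour's load.

    A simplified sketch of the capacity-factor family of
    approximations the paper compares, not its exact formulation.
    """
    load = np.asarray(load, float)
    pv = np.asarray(pv_output, float)
    idx = np.argsort(load)[-top_hours:]   # highest-load hours of the year
    w = load[idx] / load[idx].sum()       # load-proportional weights
    return float(np.sum(w * pv[idx]) / pv_capacity)

# Example with synthetic hourly data for one year (8760 h).
rng = np.random.default_rng(1)
load = 800 + 200 * rng.random(8760)                       # system load, MW
pv = 100 * np.clip(rng.normal(0.4, 0.3, 8760), 0, 1)      # PV output, MW
print(weighted_capacity_factor(load, pv, pv_capacity=100.0))
```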
A-posteriori error estimation for second order mechanical systems
NASA Astrophysics Data System (ADS)
Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter
2012-06-01
One important issue in the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. Where safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended to error estimation for mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be used for moment-matching based, Gramian matrix based or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
Comparison of five canopy cover estimation techniques in the western Oregon Cascades.
Anne C.S. Fiala; Steven L. Garman; Andrew N. Gray
2006-01-01
Estimates of forest canopy cover are widely used in forest research and management, yet methods used to quantify canopy cover and the estimates they provide vary greatly. Four commonly used ground-based techniques for estimating overstory cover - line-intercept, spherical densiometer, moosehorn, and hemispherical photography - and cover estimates generated from crown...
The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation
NASA Technical Reports Server (NTRS)
Tsou, Haiping; Yan, Tsun-Yee
2000-01-01
This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target is changing with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum likelihood based image tracking technique described in this paper is a closed-loop structure capable of providing iterative updates of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
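The transform-domain correlation at the heart of the loop can be illustrated with a plain (unweighted) phase-correlation shift estimator, shown below. It recovers only integer-pixel translation in a single shot, whereas the paper's ML formulation weights the correlation and iterates; treat it as a simplified stand-in.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the (dy, dx) translation of img relative to ref by
    correlation in the transform (Fourier) domain.

    Integer-pixel resolution only; a simplified, non-iterative
    stand-in for the ML tracking loop described in the paper.
    """
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(img)
    cross = F2 * np.conj(F1)
    # Normalized cross-power spectrum -> sharp correlation peak.
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts into the signed range.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```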
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
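The threshold-and-count idea admits a one-line closed form under exponential (chi-squared, 2 degrees of freedom) bin statistics, sketched below; the distributional assumption and the single-threshold device are illustrative, and the paper's parallel arrangement simply staggers several such devices across thresholds.

```python
import numpy as np

def threshold_and_count_power(bins, threshold):
    """Single-pass noise power estimate from exceedance counts.

    For exponentially distributed power bins with mean mu,
    P(X > T) = exp(-T / mu), so mu = T / ln(1 / p_hat), where p_hat
    is the observed fraction of bins above T. A sketch of the idea,
    assuming exponential noise-power statistics.
    """
    bins = np.asarray(bins)
    p_hat = np.count_nonzero(bins > threshold) / bins.size
    if p_hat in (0.0, 1.0):
        return None  # threshold outside this device's dynamic range
    return threshold / np.log(1.0 / p_hat)

# Example: exponential noise with true mean power 2.0.
rng = np.random.default_rng(2)
print(threshold_and_count_power(rng.exponential(2.0, 100000), threshold=1.5))
```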
A novel time of arrival estimation algorithm using an energy detector receiver in MMW systems
NASA Astrophysics Data System (ADS)
Liang, Xiaolin; Zhang, Hao; Lyu, Tingting; Xiao, Han; Gulliver, T. Aaron
2017-12-01
This paper presents a new time of arrival (TOA) estimation technique using an improved energy detection (ED) receiver based on empirical mode decomposition (EMD) in an impulse radio (IR) 60 GHz millimeter wave (MMW) system. A threshold is determined by analyzing the characteristics of the received energy values with an extreme learning machine (ELM). The effect of the channel and integration period on the TOA estimation is evaluated. Several well-known ED-based TOA algorithms are compared with the proposed technique. It is shown that this ELM-based technique has lower TOA estimation error than other approaches and provides robust performance with the IEEE 802.15.3c channel models.
Choosing a DIVA: a comparison of emerging digital imagery vegetation analysis techniques
Jorgensen, Christopher F.; Stutzman, Ryan J.; Anderson, Lars C.; Decker, Suzanne E.; Powell, Larkin A.; Schacht, Walter H.; Fontaine, Joseph J.
2013-01-01
Question: What is the precision of five methods of measuring vegetation structure using ground-based digital imagery and processing techniques? Location: Lincoln, Nebraska, USA Methods: Vertical herbaceous cover was recorded using digital imagery techniques at two distinct locations in a mixed-grass prairie. The precision of five ground-based digital imagery vegetation analysis (DIVA) methods for measuring vegetation structure was tested using a split-split plot analysis of covariance. Variability within each DIVA technique was estimated using coefficient of variation of mean percentage cover. Results: Vertical herbaceous cover estimates differed among DIVA techniques. Additionally, environmental conditions affected the vertical vegetation obstruction estimates for certain digital imagery methods, while other techniques were more adept at handling various conditions. Overall, percentage vegetation cover values differed among techniques, but the precision of four of the five techniques was consistently high. Conclusions: DIVA procedures are sufficient for measuring various heights and densities of standing herbaceous cover. Moreover, digital imagery techniques can reduce measurement error associated with multiple observers' standing herbaceous cover estimates, allowing greater opportunity to detect patterns associated with vegetation structure.
On using sample selection methods in estimating the price elasticity of firms' demand for insurance.
Marquis, M Susan; Louis, Thomas A
2002-01-01
We evaluate a technique based on sample selection models that has been used by health economists to estimate the price elasticity of firms' demand for insurance. We demonstrate that this technique produces inflated estimates of the price elasticity. We show that alternative methods lead to valid estimates.
Accuracy of selected techniques for estimating ice-affected streamflow
Walker, John F.
1991-01-01
This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
Restoration of out-of-focus images based on circle of confusion estimate
NASA Astrophysics Data System (ADS)
Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto
2002-11-01
In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by a typical CCD/CMOS sensor. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm yields sharp images while reducing ringing and crisping artifacts over a wider frequency region. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
A comparison of minimum distance and maximum likelihood techniques for proportion estimation
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.
1982-01-01
The estimation of the mixing proportions $p_1, p_2, \ldots, p_m$ in the mixture density $f(x) = \sum_{i=1}^{m} p_i f_i(x)$ is often encountered in agricultural remote sensing problems, in which case the $p_i$'s usually represent crop proportions. In these remote sensing applications, the component densities $f_i(x)$ have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when the component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When the component distributions are not symmetric, however, neither of these normal-based techniques provides satisfactory results.
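When the component densities are known, the ML proportion estimates can be computed with a few EM iterations, sketched below for the normal-component case; the two-component example and its parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

def em_proportions(x, means, sds, n_iter=200):
    """EM updates for the mixing proportions p_i of a normal mixture
    with known component parameters -- the ML side of the comparison,
    sketched for the case where only the proportions are unknown.
    """
    m = len(means)
    p = np.full(m, 1.0 / m)
    # Component densities evaluated once: shape (m, n).
    dens = np.array([norm.pdf(x, mu, sd) for mu, sd in zip(means, sds)])
    for _ in range(n_iter):
        w = p[:, None] * dens
        w /= w.sum(axis=0, keepdims=True)   # E-step: membership probabilities
        p = w.mean(axis=1)                  # M-step: updated proportions
    return p

# Example: two "crops" with known spectral distributions, true p = (0.3, 0.7).
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 700)])
print(em_proportions(x, means=[0, 3], sds=[1, 1]))
```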
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
The estimation of the autocorrelation function of a zero-mean stationary random process is considered. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
NASA Astrophysics Data System (ADS)
Thoonsaengngam, Rattapol; Tangsangiumvisai, Nisachon
This paper proposes an enhanced method for estimating the a priori Signal-to-Disturbance Ratio (SDR) to be employed in an Acoustic Echo and Noise Suppression (AENS) system for full-duplex hands-free communications. The proposed a priori SDR estimation technique is a modification of the Two-Step Noise Reduction (TSNR) algorithm that suppresses the background noise while preserving speech spectral components. In addition, a practical approach to accurately determine the Echo Spectrum Variance (ESV) is presented, based upon a linear relationship assumed between the power spectra of the far-end speech and acoustic echo signals. The ESV estimation technique is then employed to alleviate the acoustic echo problem. The performance of the AENS system that employs these two proposed estimation techniques is evaluated through the Echo Attenuation (EA), Noise Attenuation (NA), and two speech distortion measures. Simulation results based upon real speech signals confirm that our improved AENS system is able to efficiently mitigate acoustic echo and background noise while preserving speech quality and speech intelligibility.
A Biomechanical Modeling Guided CBCT Estimation Technique
Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing
2017-01-01
Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks. PMID:27831866
Heer, D M; Passel, J F
1987-01-01
This article compares 2 different methods for estimating the number of undocumented Mexican adults in Los Angeles County. The 1st method, the survey-based method, uses a combination of 1980 census data and the results of a survey conducted in Los Angeles County in 1980 and 1981. A sample was selected from babies born in Los Angeles County who had a mother or father of Mexican origin. The survey included questions about the legal status of the baby's parents and certain other relatives. The resulting estimates of undocumented Mexican immigrants are for males aged 18-44 and females aged 18-39. The 2nd method, the residual method, involves comparison of census figures for aliens counted with estimates of legally resident aliens developed principally from Immigration and Naturalization Service (INS) data. For this study, estimates by age, sex, and period of entry were produced for persons born in Mexico and living in Los Angeles County. The results of this research indicate that it is possible to measure undocumented immigration with different techniques, yet obtain results that are similar. Both techniques presented here are limited in that they represent estimates of undocumented aliens based on the 1980 census. The number of additional undocumented aliens not counted remains a subject of conjecture. The fact that the survey-based estimate (228,700) is reasonably similar to the residual estimate (317,800) suggests that the number of undocumented aliens not counted in the census may not be an extremely large fraction of the undocumented population. The survey-based estimates have some significant advantages over the residual estimates. The survey provides tabulations of the undocumented population by characteristics other than the limited demographic information provided by the residual technique. On the other hand, the survey-based estimates require that a survey be conducted and, if national or regional estimates are called for, they may require a number of surveys. The residual technique also requires a data source other than the census, and the INS discontinued the annual registration of aliens after 1981. Thus, estimates of undocumented aliens based on the residual technique will probably not be possible for subnational areas using the 1990 census unless the registration program is reinstituted. Perhaps the best information on the undocumented population in the 1990 census will come from an improved version of the survey-based technique described here applied in selected local areas.
NASA Technical Reports Server (NTRS)
Berendes, Todd; Sengupta, Sailes K.; Welch, Ron M.; Wielicki, Bruce A.; Navar, Murgesh
1992-01-01
A semiautomated methodology is developed for estimating cumulus cloud base heights on the basis of high spatial resolution Landsat MSS data, using various image-processing techniques to match cloud edges with their corresponding shadow edges. The cloud base height is then estimated by computing the separation distance between the corresponding generalized Hough transform reference points. The differences between the cloud base heights computed by these means and a manual verification technique are of the order of 100 m or less; accuracies of 50-70 m may soon be possible via EOS instruments.
1982-02-01
For these data elements, Initial Milestone II values were established as the Planning Estimate (PE) with the Development Estimate (DE) to be based ...development of improved forensic collection techniques for Naval Investigative Agents on ships and overseas bases. As this is a continuing program, the above...overseas bases), and continue development of improved forensic collection techniques for Naval Investigative Agents on ships and overseas bases. 4. (U) FY
Fusion-based multi-target tracking and localization for intelligent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2008-04-01
In this paper, we present two approaches addressing visual target tracking and localization in a complex urban environment. The two techniques presented in this paper are: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The track of each individual target was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image pixel to world coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on a nearest 4-neighbor method, and in fine estimation, we use Euclidean interpolation to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in a complex urban environment.
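The segmentation-and-signature front end described above maps naturally onto standard OpenCV primitives. The sketch below chains a Gaussian mixture background subtractor, connected-components analysis and per-blob RGB histograms; the file name, thresholds and histogram binning are illustrative assumptions, and the association/gating logic is only indicated.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.avi")   # hypothetical input file
mog = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog.apply(frame)                    # GMM foreground mask (shadows = 127)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadows
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg)
    for i in range(1, n):                    # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < 50:
            continue                         # reject small blobs
        x, y, w, h = stats[i, :4]
        roi = frame[y:y + h, x:x + w]
        # RGB histogram used as the appearance signature for association.
        hist = cv2.calcHist([roi], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        # ... match 'hist' and centroids[i] against tracked targets ...
cap.release()
```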
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, A H; Kerr, L A; Cailliet, G M
2007-11-04
Canary rockfish (Sebastes pinniger) have long been an important part of recreational and commercial rockfish fishing from southeast Alaska to southern California, but localized stock abundances have declined considerably. Based on age estimates from otoliths and other structures, lifespan estimates vary from about 20 years to over 80 years. For the purpose of monitoring stocks, age composition is routinely estimated by counting growth zones in otoliths; however, age estimation procedures and lifespan estimates remain largely unvalidated. Typical age validation techniques have limited application for canary rockfish because they are deep dwelling and may be long lived. In this study, the unaged otolith of the pair from fish aged at the Department of Fisheries and Oceans Canada was used in one of two age validation techniques: (1) lead-radium dating and (2) bomb radiocarbon (14C) dating. Age estimate accuracy and the validity of age estimation procedures were validated based on the results from each technique. Lead-radium dating proved successful in determining that a minimum estimate of lifespan was 53 years and provided support for age estimation procedures up to about 50-60 years. These findings were further supported by Δ14C data, which indicated a minimum estimate of lifespan of 44 ± 3 years. Both techniques validate, to differing degrees, age estimation procedures and provide support for inferring that canary rockfish can live more than 80 years.
Weighted image de-fogging using luminance dark prior
NASA Astrophysics Data System (ADS)
Kansal, Isha; Kasana, Singara Singh
2017-10-01
In this work, the weighted image de-fogging process based upon the dark channel prior is modified by using a luminance dark prior. The dark channel prior estimates the transmission by using three colour channels, whereas the luminance dark prior does the same by making use of only the Y component of the YUV colour space. For each pixel in a patch of ? size, the luminance dark prior uses ? pixels, rather than the ? pixels used in the DCP technique, which speeds up the de-fogging process. To estimate the transmission map, a weighted approach based upon a difference prior is used, which mitigates halo artefacts at the time of transmission estimation. The major drawback of the weighted technique is that it does not maintain the constancy of the transmission in a local patch even if there are no significant depth disruptions, as a result of which the de-fogged image looks over-smoothed and has low contrast. Apart from this, in some images the weighted transmission still carries faintly visible halo artefacts. Therefore, a Gaussian filter is used to blur the estimated weighted transmission map, which enhances the contrast of the de-fogged images. In addition, a novel approach is proposed to remove the pixels belonging to bright light source(s) during the atmospheric light estimation process, based upon the histogram of the YUV colour space. To show its effectiveness, the proposed technique is compared with existing techniques. This comparison shows that the proposed technique performs better than the existing techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.
2016-11-15
A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and the Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
Tumor response estimation in radar-based microwave breast cancer detection.
Kurrant, Douglas J; Fear, Elise C; Westwick, David T
2008-12-01
Radar-based microwave imaging techniques have been proposed for early stage breast cancer detection. A considerable challenge for the successful implementation of these techniques is the reduction of clutter, or components of the signal originating from objects other than the tumor. In particular, the reduction of clutter from the late-time scattered fields is required in order to detect small (subcentimeter diameter) tumors. In this paper, a method to estimate the tumor response contained in the late-time scattered fields is presented. The method uses a parametric function to model the tumor response. A maximum a posteriori estimation approach is used to evaluate the optimal values for the estimates of the parameters. A pattern classification technique is then used to validate the estimation. The ability of the algorithm to estimate a tumor response is demonstrated by using both experimental and simulated data obtained with a tissue sensing adaptive radar system.
Three-dimensional ultrasound strain imaging of skeletal muscles
NASA Astrophysics Data System (ADS)
Gijsbertse, K.; Sprengers, A. M. J.; Nillesen, M. M.; Hansen, H. H. G.; Lopata, R. G. P.; Verdonschot, N.; de Korte, C. L.
2017-01-01
In this study, a multi-dimensional strain estimation method is presented to assess local relative deformation of skeletal muscles in three orthogonal directions in 3D space during voluntary contractions. A rigid translation and compressive deformation of a block phantom, which mimics muscle contraction, is used as experimental validation of the 3D technique and to compare its performance with respect to a 2D based technique. Axial, lateral and (in the case of 3D) elevational displacements are estimated using a cross-correlation based displacement estimation algorithm. After transformation of the displacements to a Cartesian coordinate system, strain is derived using a least-squares strain estimator. The performance of both methods is compared by calculating the root-mean-squared error between the estimated displacements and the theoretical displacements of the phantom experiments. We observe that the 3D technique delivers more accurate displacement estimates than the 2D technique, especially in the translation experiment, where out-of-plane motion hampers the 2D technique. In vivo application of the 3D technique to the musculus vastus intermedius shows good resemblance between the measured strain and the force pattern. The similarity of the strain curves of repetitive measurements indicates the reproducibility of voluntary contractions. These results indicate that 3D ultrasound is a valuable imaging tool to quantify complex tissue motion, especially when there is motion in three directions, which results in out-of-plane errors for 2D techniques.
A spline-based parameter and state estimation technique for static models of elastic surfaces
NASA Technical Reports Server (NTRS)
Banks, H. T.; Daniel, P. L.; Armstrong, E. S.
1983-01-01
Parameter and state estimation techniques for an elliptic system arising in a developmental model for the antenna surface in the Maypole Hoop/Column antenna are discussed. A computational algorithm based on spline approximations for the state and elastic parameters is given and numerical results obtained using this algorithm are summarized.
Two above-ground forest biomass estimation techniques were evaluated for the United States Territory of Puerto Rico using predictor variables acquired from satellite based remotely sensed data and ground data from the U.S. Department of Agriculture Forest Inventory Analysis (FIA)...
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis. In this setting, the outcome variable of interest is the time until an event occurs, where the time to failure of a specific experimental unit might be right, left, interval, or partly interval censored (PIC). In this paper, the analysis is conducted with a parametric Cox model on PIC data. Moreover, several imputation techniques are used, namely: midpoint, left and right point, random, mean, and median imputation. Maximum likelihood estimation is used to obtain the estimated survival function. These estimates were then compared with existing models, namely the Turnbull and Cox models, based on clinical trial data (breast cancer data), which demonstrated the validity of the proposed model. Results on the data set indicated that the parametric Cox model is superior in terms of estimation of survival functions, likelihood ratio tests, and their p-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median imputations showed better results with respect to the estimation of the survival function.
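The single-imputation rules compared above are simple to state in code. The sketch below fills in an exact event time for each interval-censored observation under the midpoint, left, right, or random rule (the mean and median variants are omitted for brevity); the example intervals are made up.

```python
import numpy as np

def impute_intervals(left, right, method="midpoint", rng=None):
    """Impute exact event times for interval-censored observations
    [left, right]; right-censored cases (right = inf) are left alone.
    Implements simple single-imputation rules of the kind the paper
    compares.
    """
    rng = rng or np.random.default_rng(0)
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    t = left.copy()
    ic = np.isfinite(right) & (right > left)   # genuinely interval-censored
    if method == "midpoint":
        t[ic] = (left[ic] + right[ic]) / 2.0
    elif method == "right":
        t[ic] = right[ic]
    elif method == "random":
        t[ic] = rng.uniform(left[ic], right[ic])
    # method == "left" keeps t = left
    return t

# Example: three interval-censored and one right-censored observation.
print(impute_intervals([2, 5, 1, 8], [4, 7, 3, np.inf]))
```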
A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.
ERIC Educational Resources Information Center
McKinley, Robert L.; Reckase, Mark D.
A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
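A minimal Gaussian kernel estimator is sketched below, with Scott's data-based rule standing in for the scaling-factor choice when none is supplied; the paper's interactive and automatic selection algorithms differ, so treat the rule as an illustrative default.

```python
import numpy as np

def kde(x_grid, sample, h=None):
    """Gaussian kernel density estimate. If no scaling factor h is
    given, fall back to Scott's data-based rule h = sigma * n**(-1/5)
    (one simple answer to the bandwidth-choice problem the abstract
    raises; illustrative only).
    """
    sample = np.asarray(sample, float)
    n = sample.size
    if h is None:
        h = sample.std(ddof=1) * n ** (-1.0 / 5.0)
    u = (x_grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

# Example: estimate a bimodal density from 400 draws.
rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 0.5, 200)])
grid = np.linspace(-6, 5, 500)
density = kde(grid, data)
```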
Satellite angular velocity estimation based on star images and optical flow techniques.
Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele
2013-09-25
An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus be used to deliver angular rate information even when attitude determination is not possible, as during platform detumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial-off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than for the other two components.
Kroll, Lars Eric; Schumann, Maria; Müters, Stephan; Lampert, Thomas
2017-12-01
Nationwide health surveys can be used to estimate regional differences in health. Using traditional estimation techniques, the spatial depth of these estimates is limited by the constrained sample size. So far - without special refreshment samples - results have only been available for the more populous federal states of Germany. An alternative is regression-based small-area estimation techniques. These models can generate smaller-scale data, but are also subject to greater statistical uncertainties because of their model assumptions. In the present article, exemplary regionalized results based on the studies "Gesundheit in Deutschland aktuell" (GEDA studies) 2009, 2010 and 2012 are compared with respect to the self-rated health status of the respondents. The aim of the article is to analyze the range of regional estimates in order to assess the usefulness of the techniques for health reporting more adequately. The results show that the estimated prevalence is relatively stable when using different samples. Important determinants of the variation of the estimates are the achieved sample size at the district level and the type of district (cities vs. rural regions). Overall, the present study shows that small-area modeling of prevalence is associated with additional uncertainties compared to conventional estimates, which should be taken into account when interpreting the corresponding findings.
National scale biomass estimators for United States tree species
Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey
2003-01-01
Estimates of national-scale forest carbon (C) stocks and fluxes are typically based on allometric regression equations developed using dimensional analysis techniques. However, the literature is inconsistent and incomplete with respect to large-scale forest C estimation. We compiled all available diameter-based allometric regression equations for estimating total...
Salinet, João L; Masca, Nicholas; Stafford, Peter J; Ng, G André; Schlindwein, Fernando S
2016-03-08
Areas of high frequency activity within the atrium are thought to be 'drivers' of the rhythm in patients with atrial fibrillation (AF), and ablation of these areas seems to be an effective therapy for eliminating the dominant frequency (DF) gradient and restoring sinus rhythm. Clinical groups have applied the traditional FFT-based approach to generate three-dimensional dominant frequency (3D DF) maps during electrophysiology (EP) procedures, but the literature is restricted on the use of alternative spectral estimation techniques that can have better frequency resolution than FFT-based spectral estimation. Autoregressive (AR) model-based spectral estimation techniques, with emphasis on selection of an appropriate sampling rate and AR model order, were implemented to generate high-density 3D DF maps of atrial electrograms (AEGs) in persistent atrial fibrillation (persAF). For each patient, 2048 simultaneous AEGs were recorded for 20.478 s-long segments in the left atrium (LA) and exported for analysis, together with their anatomical locations. After the DFs were identified using AR-based spectral estimation, they were colour coded to produce sequential 3D DF maps. These maps were systematically compared with maps found using the Fourier-based approach. 3D DF maps can be obtained using AR-based spectral estimation after AEG downsampling (DS), and the resulting maps are very similar to those obtained using FFT-based spectral estimation (mean 90.23%). There were no significant differences between AR techniques (p = 0.62). The processing time for the AR-based approach was considerably shorter (from 5.44 to 5.05 s) when lower sampling frequencies and model order values were used. Higher levels of DS presented higher rates of DF agreement (sampling frequency of 37.5 Hz). We have demonstrated the feasibility of using AR spectral estimation methods for producing 3D DF maps and characterised their differences from the maps produced using the FFT technique, offering an alternative approach for 3D DF computation in human persAF studies.
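One standard AR route to a DF value is Yule-Walker estimation, sketched below for a single electrogram: fit AR coefficients from the autocorrelation sequence, evaluate the model spectrum, and take its peak. The model order and frequency grid are illustrative; the paper's specific AR variants and downsampling settings are not reproduced.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_dominant_frequency(x, fs, order=12, nfreq=2048):
    """Dominant frequency (DF) of one atrial electrogram via
    Yule-Walker AR spectral estimation -- an illustrative stand-in
    for the AR estimators compared in the paper, with the model
    order and (down)sampling rate as the key tuning knobs.
    """
    x = np.asarray(x, float) - np.mean(x)
    n = x.size
    # Biased autocorrelation estimates r[0..order].
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])   # Yule-Walker AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:])              # driving-noise variance
    freqs = np.linspace(0.0, fs / 2.0, nfreq)
    z = np.exp(-2j * np.pi * freqs / fs)
    A = 1.0 - sum(ak * z ** (k + 1) for k, ak in enumerate(a))
    psd = sigma2 / np.abs(A) ** 2                 # AR power spectral density
    return freqs[int(np.argmax(psd))]
```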
Estimating propagation velocity through a surface acoustic wave sensor
Xu, Wenyuan; Huizinga, John S.
2010-03-16
Techniques are described for estimating the propagation velocity through a surface acoustic wave sensor. In particular, techniques which measure and exploit a proper segment of phase frequency response of the surface acoustic wave sensor are described for use as a basis of bacterial detection by the sensor. As described, use of velocity estimation based on a proper segment of phase frequency response has advantages over conventional techniques that use phase shift as the basis for detection.
Sim, Kok Swee; NorHisham, Syafiq
2016-11-01
A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique for SNR estimation, a number of SEM images are first corrupted with white noise. The autocorrelation functions (ACF) of the original and the corrupted SEM images are formed to serve as the reference point for estimating the SNR value of the corrupted image. The LSR technique is then compared with three existing techniques, known as nearest neighbourhood, first-order interpolation, and the combination of both nearest neighbourhood and first-order interpolation. The actual and the estimated SNR values of all these techniques are then calculated for comparison purposes. It is shown that the LSR technique attains the highest accuracy of the four, as the absolute difference between the actual and the estimated SNR value is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
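The ACF-based idea can be sketched as follows: white noise contributes only at zero lag, so a least-squares line fitted through the first few nonzero lags and extrapolated back to lag 0 approximates the noise-free signal power. The 1-D raveling, lag count and linear fit below are simplifying assumptions relative to the paper's formulation.

```python
import numpy as np

def snr_from_acf(image, fit_lags=5):
    """SNR estimate for a noisy image by least-squares extrapolation
    of its autocorrelation function (ACF) to zero lag.

    The gap between the measured zero-lag ACF and the extrapolated
    noise-free value is taken as the noise power. A sketch of the
    linear-LSR idea, not the paper's exact procedure.
    """
    img = np.asarray(image, float)
    row = (img - img.mean()).ravel()
    n = row.size
    # One-dimensional biased ACF at small lags.
    acf = np.array([np.dot(row[:n - k], row[k:]) / n
                    for k in range(fit_lags + 1)])
    lags = np.arange(1, fit_lags + 1)
    slope, intercept = np.polyfit(lags, acf[1:], 1)   # linear LSR fit
    signal_power = intercept                          # extrapolated ACF(0)
    noise_power = acf[0] - signal_power
    return 10.0 * np.log10(signal_power / noise_power)
```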
Accuracy of Noninvasive Estimation Techniques for the State of the Cochlear Amplifier
NASA Astrophysics Data System (ADS)
Dalhoff, Ernst; Gummer, Anthony W.
2011-11-01
Estimation of the function of the cochlea in humans is possible only by deduction from indirect measurements, which may be subjective or objective. Therefore, for basic research as well as diagnostic purposes, it is important to develop methods to deduce and analyse the error sources of cochlear-state estimation techniques. Here, we present a model of the technical and physiologic error sources contributing to the estimation accuracy of hearing threshold and the state of the cochlear amplifier, and deduce from measurements in humans that the estimated standard deviation can be considerably below 6 dB. Experimental evidence is drawn from two partly independent objective estimation techniques for the auditory signal chain based on measurements of otoacoustic emissions.
NASA Astrophysics Data System (ADS)
Bellili, Faouzi; Amor, Souheib Ben; Affes, Sofiène; Ghrayeb, Ali
2017-12-01
This paper addresses the problem of DOA estimation using uniform linear array (ULA) antenna configurations. We propose a new low-cost method for multiple DOA estimation from very short data snapshots. The new estimator is based on the annihilating filter (AF) technique. It is non-data-aided (NDA) and therefore does not impinge on the overall throughput of the system. The noise components are assumed temporally and spatially white across the receiving antenna elements. The transmitted signals are also assumed temporally and spatially white across the transmitting sources. The new method is compared in performance to the Cramér-Rao lower bound (CRLB), the root-MUSIC algorithm, the deterministic maximum likelihood estimator, and another Bayesian method developed precisely for the single-snapshot case. Simulations show that the new estimator performs well over a wide SNR range. Prominently, the main advantage of the new AF-based method is that it succeeds in accurately estimating the DOAs from short data snapshots, and even from a single snapshot, outperforming by far the state-of-the-art techniques in both DOA estimation accuracy and computational cost.
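A bare-bones annihilating-filter DOA estimator for the noiseless single-snapshot case is sketched below: the snapshot across the array is a sum of spatial exponentials, so the null vector of a small Hankel data matrix yields a filter whose roots encode the arrival angles. The array geometry, the assumed-known source count and the absence of any denoising are simplifying assumptions relative to the paper.

```python
import numpy as np
from scipy.linalg import hankel

def af_doa(x, k, d_over_lambda=0.5):
    """Annihilating-filter (AF) DOA estimation from a single ULA snapshot.

    x: complex snapshot across the antenna elements; k: number of
    sources. A length-(k+1) filter annihilates the sum of k spatial
    exponentials; its roots give the spatial frequencies and hence
    the DOAs (degrees). Bare-bones sketch, noiseless case.
    """
    x = np.asarray(x, complex)
    n = x.size
    H = hankel(x[:n - k], x[n - k - 1:])   # H[i, j] = x[i + j], shape (n-k, k+1)
    _, _, Vh = np.linalg.svd(H)
    h = Vh[-1].conj()                      # null vector = filter coefficients
    mu = np.angle(np.roots(h[::-1]))       # spatial frequencies of the k roots
    return np.degrees(np.arcsin(mu / (2 * np.pi * d_over_lambda)))

# Example: two sources at -10 and 25 degrees, 8-element half-wavelength ULA.
n_ant, angles = 8, np.radians([-10.0, 25.0])
steer = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n_ant), np.sin(angles)))
snapshot = steer @ np.array([1.0, 0.8j])
print(np.sort(af_doa(snapshot, k=2)))
```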
A spline-based parameter estimation technique for static models of elastic structures
NASA Technical Reports Server (NTRS)
Dutt, P.; Taasan, S.
1986-01-01
The problem of identifying the spatially varying coefficient of elasticity using an observed solution to the forward problem is considered. Under appropriate conditions this problem can be treated as a first order hyperbolic equation in the unknown coefficient. Some continuous dependence results are developed for this problem and a spline-based technique is proposed for approximating the unknown coefficient, based on these results. The convergence of the numerical scheme is established and error estimates obtained.
Stegman, Kelly J; Park, Edward J; Dechev, Nikolai
2012-07-01
The motivation of this research is to non-invasively monitor the wrist tendon's displacement and velocity, for the purpose of controlling a prosthetic device. This feasibility study aims to determine whether the proposed technique using Doppler ultrasound is able to accurately estimate the tendon's instantaneous velocity and displacement. The study is conducted with a tendon-mimicking experiment consisting of two different materials, a commercial ultrasound scanner, and a reference linear motion stage set-up. Audio-based output signals are acquired from the ultrasound scanner and are processed with our proposed Fourier technique to obtain the tendon's velocity and displacement estimates. We then compare our estimates to an external reference system, and also to the ultrasound scanner's own estimates based on its proprietary software. The proposed tendon motion estimation method has been shown to be repeatable, effective and accurate in comparison with the external reference system, and is generally more accurate than the scanner's own estimates. Following this feasibility study, future testing will include cadaver-based studies to test the technique on the human arm tendon anatomy, and later live human test subjects, in order to further refine the proposed method for the novel purpose of detecting user-intended tendon motion for controlling wearable prosthetic devices.
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Ukkusuri, Satish V.
2017-06-29
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates the emissions in most cases because it ignores the speed variation at congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW on sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The technique developed in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
Wang, Li-Pen; Ochoa-Rodríguez, Susana; Simões, Nuno Eduardo; Onof, Christian; Maksimović, Cedo
2013-01-01
Operational radar and raingauge networks are of limited applicability for urban hydrology on their own. Radar rainfall estimates provide a good description of the spatiotemporal variability of rainfall; however, their accuracy is in general insufficient. It is therefore necessary to adjust radar measurements using raingauge data, which provide accurate point rainfall information. Several gauge-based radar rainfall adjustment techniques have been developed and mainly applied at coarser spatial and temporal scales; however, their suitability for small-scale urban hydrology is seldom explored. In this paper a review of gauge-based adjustment techniques is first provided. After that, two techniques, respectively based upon the ideas of mean bias reduction and error variance minimisation, were selected and tested using as a case study an urban catchment (∼8.65 km²) in North-East London. The radar rainfall estimates of four historical events (2010-2012) were adjusted using in situ raingauge estimates and the adjusted rainfall fields were applied to the hydraulic model of the study area. The results show that both techniques can effectively reduce mean bias; however, the technique based upon error variance minimisation can in general better reproduce the spatial and temporal variability of rainfall, which proved to have a significant impact on the subsequent hydraulic outputs. This suggests that error variance minimisation based methods may be more appropriate for urban-scale hydrological applications.
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M. [Cedar Crest, NM]; Ma, Tian J. [Albuquerque, NM]
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
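A hedged sketch of the disclosed idea: difference the frames, model each pixel's error as a jitter-scaled spatial intensity gradient, and flag only changes that exceed that error. The jitter magnitude and threshold below are illustrative assumptions, not values from the patent.

```python
# Sketch: jitter-robust change detection for a staring sensor.
import numpy as np

def detect_changes(reference, current, jitter_px=0.5, k_sigma=3.0):
    diff = current.astype(float) - reference.astype(float)
    # Spatial intensity gradients of the reference scene: a sub-pixel shift of
    # jitter_px pixels moves intensity by roughly |gradient| * jitter_px.
    gy, gx = np.gradient(reference.astype(float))
    pixel_error = jitter_px * np.hypot(gx, gy) + 1e-6  # avoid zero error
    # Keep only differences that are large relative to the jitter-induced error.
    return np.abs(diff) > k_sigma * pixel_error
```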
Applications of physiological bases of ageing to forensic sciences. Estimation of age-at-death.
C Zapico, Sara; Ubelaker, Douglas H
2013-03-01
Age-at-death estimation is one of the main challenges in forensic sciences since it contributes to the identification of individuals. There are many anthropological techniques to estimate the age at death in children and adults. However, in adults this methodology is less accurate and requires population-specific references. For that reason, new methodologies have been developed. Biochemical methods are based on the natural process of ageing, which induces different biochemical changes that lead to alterations in cells and tissues. In this review, we describe different attempts to estimate the age in adults based on these changes. Chemical approaches imply modifications in molecules or accumulation of some products. Molecular biology approaches analyze the modifications in DNA and chromosomes. Although the most accurate technique appears to be aspartic acid racemization, it is important to take into account the other techniques because the forensic context and the human remains available will determine the possibility to apply one or another methodology.
A Novel Rules Based Approach for Estimating Software Birthmark
Binti Alias, Norma; Anwar, Sajid
2015-01-01
Software birthmark is a unique quality of software that can be used to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy, that is, the copying, stealing, and misuse of software without the permission specified in the license agreement, are rapidly increasing problems. The estimation of a birthmark can play a key role in understanding its effectiveness. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate the properties of the birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363
Using the Delphi technique in economic evaluation: time to revisit the oracle?
Simoens, S
2006-12-01
Although the Delphi technique has been commonly used as a data source in medical and health services research, its application in economic evaluation of medicines has been more limited. The aim of this study was to describe the methodology of the Delphi technique, to present a case for using the technique in economic evaluation, and to provide recommendations to improve such use. The literature was accessed through MEDLINE focusing on studies discussing the methodology of the Delphi technique and economic evaluations of medicines using the Delphi technique. The Delphi technique can be used to provide estimates of health care resources required and to modify such estimates when making inter-country comparisons. The Delphi technique can also contribute to mapping the treatment process under investigation, to identifying the appropriate comparator to be used, and to ensuring that the economic evaluation estimates cost-effectiveness rather than cost-efficacy. Ideally, economic evaluations of medicines should be based on real-patient data. In the absence of such data, evaluations need to incorporate the best evidence available by employing approaches such as the Delphi technique. Evaluations based on this approach should state the limitations, and explore the impact of the associated uncertainty in the results.
Telis, Pamela A.
1992-01-01
Mississippi State water laws require that the 7-day, 10-year low-flow characteristic (7Q10) of streams be used as a criterion for issuing waste-discharge permits to dischargers to streams and for limiting withdrawals of water from streams. This report presents techniques for estimating the 7Q10 for ungaged sites on streams in Mississippi based on the availability of base-flow discharge measurements at the site, the location of nearby gaged sites on the same stream, and the drainage area of the ungaged site. These techniques may be used to estimate the 7Q10 at sites on natural, unregulated or partially regulated, and non-tidal streams. Low-flow characteristics for streams in the Mississippi River alluvial plain were not estimated because the annual low-flow data exhibit decreasing trends with time. Also presented are estimates of the 7Q10 for 493 gaged sites on Mississippi streams. Techniques for estimating the 7Q10 have been developed for ungaged sites with base-flow discharge measurements, for ungaged sites on gaged streams, and for ungaged sites on ungaged streams. For an ungaged site with one or more base-flow discharge measurements, base-flow discharge data at the ungaged site are related to concurrent discharge data at a nearby gaged site. For ungaged sites on gaged streams, several methods of transferring the 7Q10 from a gaged site to an ungaged site were developed; the resulting 7Q10 values are based on drainage area prorations for the sites. For ungaged sites on ungaged streams, the 7Q10 is estimated from a map developed for this study that shows the unit 7Q10 (7Q10 per square mile of drainage area) for ungaged basins in the State. The mapped values were estimated from the unit 7Q10 determined for nearby gaged basins, adjusted on the basis of the geology and topography of the ungaged basins.
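The drainage-area proration mentioned for ungaged sites on gaged streams reduces to a one-line transfer formula; a sketch follows, where the unit exponent is an illustrative assumption (the report develops several transfer variants rather than a single fixed formula).

```python
# Sketch: transfer a gaged 7Q10 to a nearby ungaged site by drainage-area proration.
def prorate_7q10(q7q10_gaged, area_ungaged, area_gaged, exponent=1.0):
    return q7q10_gaged * (area_ungaged / area_gaged) ** exponent

# e.g. a 250 mi^2 ungaged site upstream of a 400 mi^2 gage with 7Q10 = 12 ft^3/s
print(prorate_7q10(12.0, 250.0, 400.0))  # -> 7.5
```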
Parameter Estimation in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark; Colarco, Peter
2004-01-01
In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario this technique will be applied to modeled dust data. In this case vertically integrated dust concentrations were used to derive wind information. Those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
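A minimal sketch of the structure-tensor motion step described above, reduced to a single global velocity estimate from an image pair; windowing and the extensions to diffusion or decay parameters are omitted, and the implementation is illustrative rather than the authors'.

```python
# Sketch: global motion from the structure tensor (normal equations of the
# brightness-constancy constraint Ix*u + Iy*v + It = 0, summed over the image).
import numpy as np

def structure_tensor_motion(frame0, frame1):
    Ix = np.gradient(frame0.astype(float), axis=1)
    Iy = np.gradient(frame0.astype(float), axis=0)
    It = frame1.astype(float) - frame0.astype(float)
    J = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(J, b)  # (u, v) in pixels per frame
```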
Husak, G.J.; Marshall, M. T.; Michaelsen, J.; Pedreros, Diego; Funk, Christopher C.; Galu, G.
2008-01-01
Reliable estimates of cropped area (CA) in developing countries with chronic food shortages are essential for emergency relief and the design of appropriate market-based food security programs. Satellite interpretation of CA is an effective alternative to extensive and costly field surveys, which fail to represent the spatial heterogeneity at the country level. Bias-corrected, texture-based classifications show little deviation from actual crop inventories when estimates derived from aerial photographs or field measurements are used to remove systematic errors in medium-resolution estimates. In this paper, we demonstrate a hybrid high-medium resolution technique for Central Ethiopia that combines spatially limited unbiased estimates from IKONOS images with spatially extensive Landsat ETM+ interpretations, land-cover, and SRTM-based topography. Logistic regression is used to derive the probability of a location being crop. These individual points are then aggregated to produce regional estimates of CA. District-level analysis of Landsat-based estimates showed CA totals which supported the estimates of the Bureau of Agriculture and Rural Development. Continued work will evaluate the technique in other parts of Africa, while segmentation algorithms will be evaluated in order to automate classification of medium-resolution imagery for routine CA estimation in the future.
Green, W. Reed; Haggard, Brian E.
2001-01-01
Water-quality sampling consisting of every-other-month (bimonthly) routine sampling and storm-event sampling (six storms annually) is used to estimate annual phosphorus and nitrogen loads at Illinois River south of Siloam Springs, Arkansas. Hydrograph separation allowed assessment of base-flow and surface-runoff nutrient relations and yield. Discharge and nutrient relations indicate that water quality at Illinois River south of Siloam Springs, Arkansas, is affected by both point and nonpoint sources of contamination. Base-flow phosphorus concentrations decreased with increasing base-flow discharge, indicating the dilution of phosphorus in water from point sources. Nitrogen concentrations increased with increasing base-flow discharge, indicating a predominant ground-water source. Nitrogen concentrations at higher base-flow discharges often were greater than median concentrations reported for ground water (from wells and springs) in the Springfield Plateau aquifer. Total estimated phosphorus and nitrogen annual loads for calendar years 1997-1999 using the regression techniques presented in this paper (35 samples) were similar to estimated loads derived from integration techniques (1,033 samples). Flow-weighted nutrient concentrations and nutrient yields at the Illinois River site were about 10 to 100 times greater than national averages for undeveloped basins and at North Sylamore Creek and Cossatot River (considered to be undeveloped basins in Arkansas). Total phosphorus and soluble reactive phosphorus were greater than 10 times and total nitrogen and dissolved nitrite plus nitrate were greater than 10 to 100 times the national and regional averages for undeveloped basins. These results demonstrate the utility of a strategy whereby samples are collected every other month and during selected storm events annually, with use of regression models to estimate nutrient loads. Annual loads of phosphorus and nitrogen estimated using regression techniques could provide similar results to estimates using integration techniques, with much less investment.
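The regression technique that the abstract compares against integration can be sketched as a log-log rating curve with a bias correction; Duan's smearing factor and all variable names are assumptions for illustration, not details from the report.

```python
# Sketch: annual nutrient load from sparse samples via a log-log rating curve.
import numpy as np

def annual_load(q_sampled, conc_sampled, q_daily):
    load_sampled = q_sampled * conc_sampled            # instantaneous loads
    b, a = np.polyfit(np.log(q_sampled), np.log(load_sampled), 1)
    resid = np.log(load_sampled) - (a + b * np.log(q_sampled))
    smear = np.mean(np.exp(resid))                     # Duan bias correction
    return np.sum(smear * np.exp(a + b * np.log(q_daily)))
```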
Optimizing focal plane electric field estimation for detecting exoplanets
NASA Astrophysics Data System (ADS)
Groff, T.; Kasdin, N. J.; Riggs, A. J. E.
Detecting extrasolar planets with angular separations and contrast levels similar to Earth requires a large space-based observatory and advanced starlight suppression techniques. This paper focuses on techniques employing an internal coronagraph, which is highly sensitive to optical errors and must rely on focal plane wavefront control techniques to achieve the necessary contrast levels. To maximize the available science time for a coronagraphic mission, we demonstrate an estimation scheme using a discrete-time Kalman filter. The state estimate feedback inherent to the filter allows us to minimize the number of exposures required to estimate the electric field. We also show progress in including a bias estimate in the Kalman filter to eliminate incoherent light from the estimate. Since the exoplanets themselves are incoherent with the star, this has the added benefit of using the control history to gain certainty in the location of exoplanet candidates as the signal-to-noise ratio between the planets and speckles improves. Having established a purely focal-plane-based wavefront estimation technique, we discuss a sensor fusion concept where alternate wavefront sensors feed forward a time update to the focal plane estimate to improve robustness to time-varying speckle. The overall goal of this work is to reduce the time required for wavefront control on a target, thereby improving the observatory's planet detection performance by increasing the number of targets reachable during the lifespan of the mission.
Parrett, Charles; Hull, J.A.
1986-01-01
Once-monthly streamflow measurements were used to estimate selected percentile discharges on flow-duration curves of monthly mean discharge for 40 ungaged stream sites in the upper Yellowstone River basin in Montana. The estimation technique was a modification of the concurrent-discharge method previously described and used by H.C. Riggs to estimate annual mean discharge. The modified technique is based on the relationship of various mean seasonal discharges to the required discharges on the flow-duration curves. The mean seasonal discharges are estimated from the monthly streamflow measurements, and the percentile discharges are calculated from regression equations. The regression equations, developed from streamflow records at nine gaging stations, indicated a significant log-linear relationship between mean seasonal discharge and various percentile discharges. The technique was tested at two discontinued streamflow-gaging stations; the differences between estimated monthly discharges and those determined from the discharge record ranged from -31 to +27 percent at one site and from -14 to +85 percent at the other. The estimates at one site were unbiased, and the estimates at the other site were consistently larger than the recorded values. Based on the test results, the probable average error of the technique was ±30 percent for the 21 sites measured during the first year of the program and ±50 percent for the 19 sites measured during the second year. (USGS)
Using Deep Learning for Tropical Cyclone Intensity Estimation
NASA Astrophysics Data System (ADS)
Miller, J.; Maskey, M.; Berendes, T.
2017-12-01
Satellite-based techniques are the primary approach to estimating tropical cyclone (TC) intensity. Tropical cyclone warning centers worldwide still apply variants of the Dvorak technique for such estimations that include visual inspection of the satellite images. The National Hurricane Center (NHC) estimates about 10-20% uncertainty in its post analyses when only satellite-based estimates are available. The success of the Dvorak technique proves that spatial patterns in infrared (IR) imagery strongly relate to TC intensity. With the ever-increasing quality and quantity of satellite observations of TCs, deep learning techniques designed to excel at pattern recognition have become more relevant in this area of study. In our current study, we aim to provide a fully objective approach to TC intensity estimation by utilizing deep learning in the form of a convolutional neural network trained to predict TC intensity (maximum sustained wind speed) using IR satellite imagery. Large amounts of training data are needed to train a convolutional neural network, so we use GOES IR images from historical tropical storms from the Atlantic and Pacific basins spanning years 2000 to 2015. Images are labeled using a special subset of the HURDAT2 dataset restricted to time periods with airborne reconnaissance data available in order to improve the quality of the HURDAT2 data. Results and the advantages of this technique are to be discussed.
NASA Astrophysics Data System (ADS)
Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.
2009-05-01
A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most of the cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor-performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one that is considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit being theoretically suboptimal, it largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
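A hedged sketch of the selection-based fusion described above: given a per-sensor error model, keep the reading predicted to be most accurate. The variance values are illustrative stand-ins for the paper's fitted sensor model.

```python
# Sketch: model-driven selection among redundant range sensors.
import numpy as np

def fuse_by_selection(ranges, predicted_var):
    """ranges, predicted_var: arrays over the available sensors."""
    valid = np.isfinite(ranges)
    idx = np.argmin(np.where(valid, predicted_var, np.inf))
    return ranges[idx], idx

ranges = np.array([2.31, np.nan, 2.95])   # one sensor failed to report
var = np.array([0.04, 0.50, 0.09])        # model-predicted error variances
print(fuse_by_selection(ranges, var))     # -> (2.31, 0)
```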
Combining Relevance Vector Machines and exponential regression for bearing residual life estimation
NASA Astrophysics Data System (ADS)
Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico
2012-08-01
In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques. Respectively, we resort to (i) Relevance Vector Machines (RVMs) for selecting a low number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimations. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of an adequate representation of the uncertainty associated with the estimates and a quantification of the credibility of the results by the Prognostic Horizon (PH) metric.
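The exponential-regression half of the procedure can be sketched as fitting a log-linear degradation trend and extrapolating to a failure threshold; the threshold, data and names are illustrative, and the RVM selection of relevant points is assumed to have happened upstream.

```python
# Sketch: RUL from an exponential degradation trend, y(t) ~ a * exp(b * t).
import numpy as np

def rul_exponential(t, y, threshold, t_now):
    b, log_a = np.polyfit(t, np.log(y), 1)     # ln y = ln a + b t
    t_fail = (np.log(threshold) - log_a) / b   # time the trend crosses threshold
    return t_fail - t_now

t = np.array([0., 10., 20., 30., 40.])         # hours
vib = np.array([1.0, 1.3, 1.9, 2.6, 3.8])      # vibration amplitude (a.u.)
print(rul_exponential(t, vib, threshold=10.0, t_now=40.0))
```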
NASA Technical Reports Server (NTRS)
York, P.; Labell, R. W.
1980-01-01
An aircraft wing weight estimating method based on a component buildup technique is described. A simplified analytically derived beam model, modified by a regression analysis, is used to estimate the wing box weight, utilizing a data base of 50 actual airplane wing weights. Factors representing materials and methods of construction were derived and incorporated into the basic wing box equations. Weight penalties to the wing box for fuel, engines, landing gear, stores and fold or pivot are also included. Methods for estimating the weight of additional items (secondary structure, control surfaces) have the option of using details available at the design stage (i.e., wing box area, flap area) or default values based on actual aircraft from the data base.
We compare biomass burning emissions estimates from four different techniques that use satellite based fire products to determine area burned over regional to global domains. Three of the techniques use active fire detections from polar-orbiting MODIS sensors and one uses detec...
A nonparametric clustering technique which estimates the number of clusters
NASA Technical Reports Server (NTRS)
Ramey, D. B.
1983-01-01
In applications of cluster analysis, one usually needs to determine the number of clusters, K, and the assignment of observations to each cluster. A clustering technique based on recursive application of a multivariate test of bimodality which automatically estimates both K and the cluster assignments is presented.
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
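One plausible reading of that reinterpretation, sketched under stated assumptions: propagate the actual weighted-least-squares residuals, rather than an assumed measurement covariance, through the normal-equation gain. This is an illustration of the idea, not the author's exact derivation.

```python
# Sketch: WLS estimate plus a residual-driven empirical state error covariance.
import numpy as np

def wls_with_empirical_cov(A, W, y):
    G = np.linalg.inv(A.T @ W @ A) @ (A.T @ W)   # WLS gain: x_hat = G y
    x_hat = G @ y
    r = y - A @ x_hat                            # residuals carry all error sources
    P_emp = G @ np.outer(r, r) @ G.T             # empirical state error covariance
    return x_hat, P_emp
```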
Estimation of Soil Moisture with L-band Multi-polarization Radar
NASA Technical Reports Server (NTRS)
Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.
2004-01-01
Through analyses of the model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor (L-band multi-polarization and 40° incidence) configuration. This technique includes two steps. First, it decomposes the total backscattering signals into two components - the surface scattering components (the bare-surface backscattering signals attenuated by the overlying vegetation layer) and the sum of the direct volume scattering components and surface-volume interaction components at different polarizations. On the model-simulated database, our decomposition technique works quite well in estimating the surface scattering components, with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Then, we use the decomposed surface backscattering signals to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors with all three polarizations.
NASA Technical Reports Server (NTRS)
Doneaud, Andre A.; Miller, James R., Jr.; Johnson, L. Ronald; Vonder Haar, Thomas H.; Laybe, Patrick
1987-01-01
The use of the area-time-integral (ATI) technique, based only on satellite data, to estimate convective rain volume over a moving target is examined. The technique is based on the correlation between the radar echo area coverage integrated over the lifetime of the storm and the radar estimated rain volume. The processing of the GOES and radar data collected in 1981 is described. The radar and satellite parameters for six convective clusters from storm events occurring on June 12 and July 2, 1981 are analyzed and compared in terms of time steps and cluster lifetimes. Rain volume is calculated by first using the regression analysis to generate the regression equation used to obtain the ATI; the ATI versus rain volume relation is then employed to compute rain volume. The data reveal that the ATI technique using satellite data is applicable to the calculation of rain volume.
Validation and application of Acoustic Mapping Velocimetry
NASA Astrophysics Data System (ADS)
Baranya, Sandor; Muste, Marian
2016-04-01
The goal of this paper is to introduce a novel methodology to estimate bedload transport in rivers based on an improved bedform tracking procedure. The measurement technique combines components and processing protocols from two contemporary nonintrusive instruments: acoustic and image-based. The bedform mapping is conducted with acoustic surveys while the estimation of the velocity of the bedforms is obtained with processing techniques pertaining to image-based velocimetry. The technique is therefore called Acoustic Mapping Velocimetry (AMV). The implementation of this technique produces a whole-field velocity map associated with the multi-directional bedform movement. Based on the calculated two-dimensional bedform migration velocity field, the bedload transport estimation is done using the Exner equation. A proof-of-concept experiment was performed to validate the AMV based bedload estimation in a laboratory flume at IIHR-Hydroscience & Engineering (IIHR). The bedform migration was analysed at three different flow discharges. Repeated bed geometry mapping, using a multiple transducer array (MTA), provided acoustic maps, which were post-processed with a particle image velocimetry (PIV) method. Bedload transport rates were calculated along longitudinal sections using the streamwise components of the bedform velocity vectors and the measured bedform heights. The bulk transport rates were compared with the results from concurrent direct physical samplings and acceptable agreement was found. As a first field implementation of the AMV, an attempt was made to estimate bedload transport for a section of the Ohio River in the United States, where bed geometry maps obtained from repeated multibeam echo sounder (MBES) surveys served as input data. Cross-sectional distributions of bedload transport rates from the AMV based method were compared with the ones obtained from another non-intrusive technique (due to the lack of direct samplings), ISSDOTv2, developed by the US Army Corps of Engineers. The good agreement between the results from the two different methods is encouraging and suggests further field tests in varying hydro-morphological situations.
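The final step, converting bedform height and migration velocity to a transport rate via the Exner equation, is commonly written as a dune-tracking formula; the sketch below uses textbook porosity and triangular shape-factor values, which are assumptions rather than values from the paper.

```python
# Sketch: unit bedload rate (m^2/s) from dune tracking, Exner-balance form.
def bedload_rate(height_m, celerity_m_s, porosity=0.4, shape_factor=0.5):
    return (1.0 - porosity) * shape_factor * height_m * celerity_m_s

print(bedload_rate(0.25, 2e-4))  # e.g. 0.25 m dunes migrating 0.2 mm/s
```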
NASA Astrophysics Data System (ADS)
Arai, Hiroyuki; Miyagawa, Isao; Koike, Hideki; Haseyama, Miki
We propose a novel technique for estimating the number of people in a video sequence; it has the advantages of being stable even in crowded situations and needing no ground-truth data. By quantitatively analyzing the geometrical relationships between image pixels and their intersection volumes in the real world, the foreground image directly indicates the number of people. Because foreground detection is possible even in crowded situations, the proposed method can be applied in such situations. Moreover, it can estimate the number of people in an a priori manner, so it needs no ground-truth data, unlike existing feature-based estimation techniques. Experiments show the validity of the proposed method.
Noise estimation for hyperspectral imagery using spectral unmixing and synthesis
NASA Astrophysics Data System (ADS)
Demirkesen, C.; Leloglu, Ugur M.
2014-10-01
Most hyperspectral image (HSI) processing algorithms assume a signal-to-noise ratio model in their formulation, which makes them dependent on accurate noise estimation. Many techniques have been proposed to estimate the noise. A very comprehensive comparative study on the subject is done by Gao et al. [1]. In a nutshell, most techniques are based on the idea of calculating the standard deviation from assumed-to-be homogenous regions in the image. Some of these algorithms work on a regular grid parameterized with a window size w, while others make use of image segmentation in order to obtain homogenous regions. This study focuses not only on the statistics of the noise but also on the estimation of the noise itself. A noise estimation technique motivated by a recent HSI de-noising approach [2] is proposed in this study. The de-noising algorithm is based on estimation of the end-members and their fractional abundances using the non-negative least squares method. The end-members are extracted using the well-known simplex volume optimization technique called NFINDR after manual selection of the number of end-members, and the image is reconstructed using the estimated end-members and abundances. Actually, image de-noising and noise estimation are two sides of the same coin: once we de-noise an image, we can estimate the noise by calculating the difference of the de-noised image and the original noisy image. In this study, the noise is estimated as described above. To assess the accuracy of this method, the methodology in [1] is followed, i.e., synthetic images are created by mixing end-member spectra and noise. Since the best performing method for noise estimation was spectral and spatial de-correlation (SSDC), originally proposed in [3], the proposed method is compared to SSDC. The results of the experiments conducted with synthetic HSIs suggest that the proposed noise estimation strategy outperforms the existing techniques in terms of the mean and standard deviation of the absolute error of the estimated noise. Finally, it is shown that the proposed technique demonstrates robust behavior to changes of its single parameter, namely the number of end-members.
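A minimal sketch of the reconstruction-residual idea: rebuild each pixel from the extracted end-members with non-negative least squares and take the residual as the noise estimate. End-member extraction (NFINDR) is assumed to have been done elsewhere; names are illustrative.

```python
# Sketch: per-pixel noise as the NNLS reconstruction residual.
import numpy as np
from scipy.optimize import nnls

def estimate_noise(hsi, E):
    """hsi: (n_pixels, n_bands) spectra; E: (n_bands, n_endmembers) end-members."""
    noise = np.empty_like(hsi, dtype=float)
    for i, pixel in enumerate(hsi):
        abundances, _ = nnls(E, pixel)       # fractional abundances >= 0
        noise[i] = pixel - E @ abundances    # original minus de-noised spectrum
    return noise
```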
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root-solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes will be contrasted using the NASA Mini-Mast as the focus structure.
NASA Technical Reports Server (NTRS)
Sheffner, E. J.; Hlavka, C. A.; Bauer, E. M.
1984-01-01
Two techniques have been developed for the mapping and area estimation of small grains in California from Landsat digital data. The two techniques are Band Ratio Thresholding, a semi-automated version of a manual procedure, and LCLS, a layered classification technique which can be fully automated and is based on established clustering and classification technology. Preliminary evaluation results indicate that the two techniques have potential for providing map products which can be incorporated into existing inventory procedures and automated alternatives to traditional inventory techniques and those which currently employ Landsat imagery.
Ambiguity Of Doppler Centroid In Synthetic-Aperture Radar
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung; Curlander, John C.
1991-01-01
Paper discusses the performance of two algorithms for resolution of the ambiguity in the estimated Doppler centroid frequency of echoes in synthetic-aperture radar. One is based on a range-cross-correlation technique, the other on a multiple-pulse-repetition-frequency technique.
Ronald E. McRoberts; Steen Magnussen; Erkki O. Tomppo; Gherardo Chirici
2011-01-01
Nearest neighbors techniques have been shown to be useful for estimating forest attributes, particularly when used with forest inventory and satellite image data. Published reports of positive results have been truly international in scope. However, for these techniques to be more useful, they must be able to contribute to scientific inference which, for sample-based...
A new slit lamp-based technique for anterior chamber angle estimation.
Gispets, Joan; Cardona, Genís; Tomàs, Núria; Fusté, Cèlia; Binns, Alison; Fortes, Miguel A
2014-06-01
To design and test a new noninvasive method for anterior chamber angle (ACA) estimation based on the slit lamp that is accessible to all eye-care professionals. A new technique (slit lamp anterior chamber estimation [SLACE]) that aims to overcome some of the limitations of the van Herick procedure was designed. The technique, which only requires a slit lamp, was applied to estimate the ACA of 50 participants (100 eyes) using two different slit lamp models, and results were compared with gonioscopy as the clinical standard. The Spearman nonparametric correlations between ACA values as determined by gonioscopy and SLACE were 0.81 (p < 0.001) and 0.79 (p < 0.001) for the two slit lamps. Sensitivity values of 100 and 87.5% and specificity values of 75 and 81.2%, depending on the slit lamp used, were obtained for the SLACE technique as compared with gonioscopy (Spaeth classification). The SLACE technique, when compared with gonioscopy, displayed good accuracy in the detection of narrow angles, and it may be useful for eye-care clinicians without access to expensive alternative equipment or those who cannot perform gonioscopy because of legal constraints regarding the use of diagnostic drugs.
Software risk estimation and management techniques at JPL
NASA Technical Reports Server (NTRS)
Hihn, J.; Lum, K.
2002-01-01
In this talk we will discuss how uncertainty has been incorporated into the JPL software model to produce probabilistic-based estimates, how risk is addressed, and how cost risk is currently being explored via a variety of approaches, from traditional risk lists to detailed WBS-based risk estimates to the Defect Detection and Prevention (DDP) tool.
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
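A small illustration of importance sampling for a rare-event probability (a stand-in for a bit-error rate), where the Gaussian mean shift plays the role of the IS parameter whose choice the proposed bounds are meant to guide; all values are illustrative.

```python
# Sketch: direct Monte Carlo vs. mean-shift importance sampling for P(X > t).
import numpy as np

rng = np.random.default_rng(0)
t, N, shift = 4.0, 100_000, 4.0            # threshold, samples, IS mean shift
# Direct MC: indicator of the rare event under N(0, 1).
x = rng.standard_normal(N)
mc = (x > t).astype(float)
# IS: sample N(shift, 1) and weight by f(y)/g(y) = exp(-shift*y + shift^2/2).
y = rng.standard_normal(N) + shift
w = (y > t) * np.exp(-shift * y + 0.5 * shift**2)
for name, est in (("MC", mc), ("IS", w)):
    print(name, est.mean(), est.std(ddof=1) / np.sqrt(N))  # estimate, std error
```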
Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti
2017-08-11
In this article, we explore methods that enable estimation of material properties with dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system, and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, but slower than the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided for the quantitative estimation of material properties.
Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.
Spiess, Martin; Jordan, Pascal; Wendt, Mike
2018-05-07
In this paper we propose a simple estimator for unbalanced repeated measures designs where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task-switch experiment based on a within-subjects design with 32 cells and 33 participants.
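A minimal sketch of the naive percentile bootstrap discussed above, applied to a per-subject effect; the BCa variant adds bias and acceleration corrections on top of the same resampling loop. Data and names are illustrative.

```python
# Sketch: percentile bootstrap confidence interval for a sample statistic.
import numpy as np

def percentile_ci(data, stat=np.mean, n_boot=10_000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    boots = np.array([stat(rng.choice(data, size=len(data), replace=True))
                      for _ in range(n_boot)])
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

rt_effect = np.array([35., 12., 48., 22., -5., 31., 27., 18.])  # ms, per subject
print(percentile_ci(rt_effect))
```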
ERIC Educational Resources Information Center
He, Wu
2014-01-01
Currently, a work breakdown structure (WBS) approach is used as the most common cost estimation approach for online course production projects. To improve the practice of cost estimation, this paper proposes a novel framework to estimate the cost for online course production projects using a case-based reasoning (CBR) technique and a WBS. A…
Efficient low-bit-rate adaptive mesh-based motion compensation technique
NASA Astrophysics Data System (ADS)
Mahmoud, Hanan A.; Bayoumi, Magdy A.
2001-08-01
This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1 and 2-to-1 merging of sibling blocks having the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved using a novel algorithm, carried out by both the encoder and the decoder, to determine the optimal triangulation of the resultant partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional ones in terms of the peak signal-to-noise ratio (PSNR) and the compression ratio (CR).
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
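To make the selection criterion concrete, here is a hedged sketch that scores candidate tuner subsets of an underdetermined linear model by their theoretical mean-squared estimation error; the paper's routine searches over general linear combinations iteratively, whereas this illustration brute-forces plain subsets only.

```python
# Sketch: pick the tuner subset minimizing theoretical MSE for y = H p + v,
# with more health parameters p than sensor measurements y.
import itertools
import numpy as np

def best_tuner_subset(H, P, R):
    """H: (m, n) sensor model, m < n; P: prior cov of p; R: sensor noise cov."""
    m, n = H.shape
    best = (np.inf, None)
    for subset in itertools.combinations(range(n), m):
        V = np.eye(n)[:, list(subset)]             # selection matrix
        L = V @ np.linalg.pinv(H @ V)              # estimator: p_hat = L y
        E = L @ H - np.eye(n)                      # error map on p
        mse = np.trace(E @ P @ E.T + L @ R @ L.T)  # theoretical MSE in p
        if mse < best[0]:
            best = (mse, subset)
    return best
```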
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Rudolf, Bruno; Schneider, Udo; Keehn, Peter R.
1995-01-01
The 'satellite-gauge model' (SGM) technique is described for combining precipitation estimates from microwave satellite data, infrared satellite data, rain gauge analyses, and numerical weather prediction models into improved estimates of global precipitation. Throughout, monthly estimates on a 2.5° × 2.5° latitude-longitude grid are employed. First, a multisatellite product is developed using a combination of low-orbit microwave and geosynchronous-orbit infrared data in the latitude range 40°N-40°S (the adjusted geosynchronous precipitation index) and low-orbit microwave data alone at higher latitudes. Then the rain gauge analysis is brought in, weighting each field by its inverse relative error variance to produce a nearly global, observationally based precipitation estimate. To produce a complete global estimate, the numerical model results are used to fill data voids in the combined satellite-gauge estimate. Our sequential approach to combining estimates allows a user to select the multisatellite estimate, the satellite-gauge estimate, or the full SGM estimate (observationally based estimates plus the model information). The primary limitation in the method is imperfections in the estimation of relative error for the individual fields. The SGM results for one year of data (July 1987 to June 1988) show important differences from the individual estimates, including model estimates as well as climatological estimates. In general, the SGM results are drier in the subtropics than the model and climatological results, reflecting the relatively dry microwave estimates that dominate the SGM in oceanic regions.
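The combination step reduces to inverse-error-variance weighting with model fill-in; a sketch follows, with illustrative names and the simplification that the error variances are supplied rather than estimated.

```python
# Sketch: combine precipitation fields by inverse error variance, fill voids
# with model values (NaN marks where a field has no data).
import numpy as np

def combine(fields, error_var, model_field):
    w = [1.0 / v for v in error_var]
    num = np.nansum([wi * f for wi, f in zip(w, fields)], axis=0)
    den = np.nansum([np.where(np.isfinite(f), wi, 0.0)
                     for wi, f in zip(w, fields)], axis=0)
    return np.where(den > 0, num / den, model_field)  # model fills data voids
```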
Software Development Cost Estimation Executive Summary
NASA Technical Reports Server (NTRS)
Hihn, Jairus M.; Menzies, Tim
2006-01-01
Identify simple, fully validated cost models that provide estimation uncertainty with the cost estimate, based on the COCOMO variable set. Use machine learning techniques to determine: a) the minimum number of cost drivers required for NASA domain-based cost models; b) the minimum number of data records required; and c) estimation uncertainty. Build a repository of software cost estimation information. Coordinate tool development and data collection with: a) tasks funded by PA&E Cost Analysis; b) the IV&V Effort Estimation Task; and c) NASA SEPG activities.
Arterial Mechanical Motion Estimation Based on a Semi-Rigid Body Deformation Approach
Guzman, Pablo; Hamarneh, Ghassan; Ros, Rafael; Ros, Eduardo
2014-01-01
Arterial motion estimation in ultrasound (US) sequences is a hard task due to noise and discontinuities in the signal derived from US artifacts. Characterizing the mechanical properties of the artery is a promising novel imaging technique to diagnose various cardiovascular pathologies and a new way of obtaining relevant clinical information, such as determining the absence of the dicrotic peak, estimating the Augmentation Index (AIx), the arterial pressure or the arterial stiffness. One of the advantages of using US imaging is the non-invasive nature of the technique, unlike invasive techniques such as Intravascular Ultrasound (IVUS) or angiography, plus the relatively low cost of US units. In this paper, we propose a semi-rigid deformable method based on soft-body dynamics, realized by a hybrid motion approach combining cross-correlation and optical flow methods, to quantify the elasticity of the artery. We evaluate and compare different techniques (for instance, optical flow methods) on which our approach is based. The goal of this comparative study is to identify the best model to be used and the impact of the accuracy of these different stages in the proposed method. To this end, an exhaustive assessment has been conducted in order to decide which model is the most appropriate for registering the variation of the arterial diameter over time. Our experiments involved a total of 1620 evaluations within nine simulated sequences of 84 frames each and the estimation of four error metrics. We conclude that our proposed approach obtains approximately 2.5 times higher accuracy than conventional state-of-the-art techniques. PMID:24871987
NASA Astrophysics Data System (ADS)
Zarifi, Keyvan; Gershman, Alex B.
2006-12-01
We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.
Principal axes estimation using the vibration modes of physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2008-06-01
This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling-factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features. The frequency-based features used by the proposed technique are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both orientation and scaling estimation.
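For orientation context only, here is a classical image-moment baseline for principal-axes estimation; the paper's contribution replaces such features with the vibration modes of a physics-based deformable model, so this is a reference point, not the paper's method.

```python
# Sketch: principal axes of a binary object mask from second-order moments.
import numpy as np

def principal_axes(mask):
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()          # centered coordinates
    cov = np.cov(np.stack([x, y]))                 # second-moment matrix
    evals, evecs = np.linalg.eigh(cov)             # ascending eigenvalues
    major = evecs[:, -1]                           # major principal axis
    return np.degrees(np.arctan2(major[1], major[0])), np.sqrt(evals)
```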
NASA Astrophysics Data System (ADS)
Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.
2015-09-01
Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km²) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.
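The mean field bias adjustment used above as a benchmark admits a very short sketch: scale the whole radar field by the accumulated gauge/radar ratio at gauge locations. Names are illustrative.

```python
# Sketch: mean field bias (MFB) adjustment of a radar rainfall field.
import numpy as np

def mfb_adjust(radar_field, gauge_vals, gauge_pixels):
    """gauge_pixels: (row, col) indices of the radar pixels over each gauge."""
    radar_at_gauges = np.array([radar_field[r, c] for r, c in gauge_pixels])
    bias = gauge_vals.sum() / max(radar_at_gauges.sum(), 1e-9)
    return bias * radar_field
```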
Shear wave elastography using Wigner-Ville distribution: a simulated multilayer media study.
Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan
2016-08-01
Shear Wave Elastography (SWE) is a quantitative ultrasound-based imaging modality for distinguishing normal and abnormal tissue types by estimating the local viscoelastic properties of the tissue. These properties have been estimated in many studies by propagating an ultrasound shear wave within the tissue and estimating parameters such as wave speed. The vast majority of the proposed techniques are based on the cross-correlation of consecutive ultrasound images. In this study, we propose a new method of wave detection based on time-frequency (TF) analysis of the ultrasound signal. The proposed method is a modified version of the Wigner-Ville Distribution (WVD) technique. The TF components of the wave are detected in a propagating ultrasound wave within a simulated multilayer tissue, and the local properties are estimated from the detected waves. Image processing techniques such as Alternating Sequential Filters (ASF) and the Circular Hough Transform (CHT) are used to improve the estimation of the TF components. The method has been applied to simulated data from the Wave3000™ software (CyberLogic Inc., New York, NY), which simulates the propagation of an acoustic radiation force impulse within a two-layer tissue with slightly different viscoelastic properties between the layers. By analyzing the local TF components of the wave, we estimate the longitudinal and shear elasticities and viscosities of the media. This work shows that the proposed method is capable of distinguishing between different layers of a tissue.
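For readers unfamiliar with the TF tool named here, the following is a minimal discrete pseudo-WVD sketch (numpy/scipy assumed). It illustrates the plain distribution only, not the authors' modified version or the ASF/CHT post-processing.

```python
import numpy as np
from scipy.signal import hilbert

def pseudo_wvd(x):
    """Discrete pseudo Wigner-Ville distribution (frequency x time).
    Uses the analytic signal to reduce aliasing and cross-terms."""
    z = hilbert(np.asarray(x, dtype=float))
    n = len(z)
    W = np.zeros((n, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t)
        tau = np.arange(-tau_max, tau_max + 1)
        r = np.zeros(n, dtype=complex)
        r[tau % n] = z[t + tau] * np.conj(z[t - tau])  # instantaneous autocorrelation
        # FFT over the lag axis; bin k corresponds to k/(2n) cycles/sample
        W[:, t] = np.fft.fft(r).real
    return W
```

The O(n²) loop is adequate for short simulated records; production implementations window the lag axis (hence "pseudo") and vectorize the FFTs.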
Levecke, Bruno; De Wilde, Nathalie; Vandenhoute, Els; Vercruysse, Jozef
2009-01-01
Background Soil-transmitted helminths, such as Trichuris trichiura, are of major concern in public health. Current efforts to control these helminth infections involve periodic mass treatment in endemic areas. Since these large-scale interventions are likely to intensify, monitoring drug efficacy will become indispensable. However, studies comparing detection techniques based on sensitivity, fecal egg counts (FEC), feasibility for mass diagnosis, and drug efficacy estimates are scarce. Methodology/Principal Findings In the present study, the ether-based concentration, the Parasep Solvent Free (SF), the McMaster and the FLOTAC techniques were compared based on both validity and feasibility for the detection of Trichuris eggs in 100 fecal samples of nonhuman primates. In addition, the drug efficacy estimates of the quantitative techniques were examined using a statistical simulation. Trichuris eggs were found in 47% of the samples. FLOTAC was the most sensitive technique (100%), followed by the Parasep SF (83.0% [95% confidence interval (CI): 82.4–83.6%]) and the ether-based concentration technique (76.6% [95% CI: 75.8–77.3%]). McMaster was the least sensitive (61.7% [95% CI: 60.7–62.6%]) and failed to detect low FEC. The quantitative comparison revealed a positive correlation between the four techniques (Rs = 0.85–0.93; p<0.0001). However, the ether-based concentration technique and the Parasep SF detected significantly fewer eggs than both the McMaster and the FLOTAC (p<0.0083). Overall, the McMaster was the most feasible technique (3.9 min/sample for preparing, reading and cleaning of the apparatus), followed by the ether-based concentration technique (7.7 min/sample) and the FLOTAC (9.8 min/sample). Parasep SF was the least feasible (17.7 min/sample). The simulation revealed that sensitivity is less important for monitoring drug efficacy and that both FLOTAC and McMaster were reliable estimators. Conclusions/Significance The results of this study demonstrated that the McMaster is a promising technique when using FEC to monitor drug efficacy in Trichuris. PMID:19172171
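The drug-efficacy estimate that such FEC techniques feed into is conventionally the fecal egg count reduction. A minimal sketch with made-up numbers, assuming the usual group-mean formula:

```python
import numpy as np

def fecr(pre_fec, post_fec):
    """Fecal egg count reduction (%), the standard drug-efficacy estimate,
    computed from group mean FEC before and after treatment."""
    return 100.0 * (1.0 - np.mean(post_fec) / np.mean(pre_fec))

# e.g. fecr([400, 250, 800], [20, 0, 60]) -> ~94.5% efficacy (illustrative data)
```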
Alexander C. Vibrans; Ronald E. McRoberts; Paolo Moser; Adilson L. Nicoletti
2013-01-01
Estimation of large area forest attributes, such as area of forest cover, from remote sensing-based maps is challenging because of image processing, logistical, and data acquisition constraints. In addition, techniques for estimating and compensating for misclassification and estimating uncertainty are often unfamiliar. Forest area for the state of Santa Catarina in...
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
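The minimum-variance flavor of this aggregation can be illustrated with classical inverse-variance weighting. The sketch below is a generic illustration of the idea, not the program's actual weighting matrix.

```python
import numpy as np

def optimal_weights(variances):
    """Inverse-variance weights: the classical minimum-variance unbiased way
    to combine independent, unbiased stratum estimates."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return w / w.sum()

def aggregate(estimates, variances):
    """Combined estimate and its variance for independent stratum estimates."""
    w = optimal_weights(variances)
    combined = float(np.dot(w, estimates))
    combined_var = 1.0 / np.sum(1.0 / np.asarray(variances, dtype=float))
    return combined, combined_var
```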
Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst
2016-05-01
Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation against enhanced Kalman Filter techniques: the performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer and of Linear, Extended and Unscented Kalman Filters are assessed. To this end, the Takagi-Sugeno observer and the enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is a systematic comparison under different design assumptions and requirements, and a numerical evaluation of the quality of the wind speed reconstruction. The benefit of wind speed estimation within wind turbine control is illustrated by a feedforward loop employing the reconstructed wind speed.
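All of the benchmarked filters build on the same linear predict/update cycle; a generic single-step sketch follows. In such schemes the wind speed itself is typically estimated by augmenting the state vector, and the TS observer and EKF/UKF replace the linear model; none of that turbine-specific structure is reproduced here.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```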
Development and application of the maximum entropy method and other spectral estimation techniques
NASA Astrophysics Data System (ADS)
King, W. R.
1980-09-01
This summary report is a collection of four separate progress reports prepared under three contracts, all sponsored by the Office of Naval Research in Arlington, Virginia. The report contains the results of investigations into the application of the maximum entropy method (MEM), a high-resolution frequency and wavenumber estimation technique, and a description, in the final section, of two new, stable, high-resolution spectral estimation techniques. Many examples of wavenumber spectral patterns for all investigated techniques are included throughout the report. The maximum entropy method is also known as the maximum entropy spectral analysis (MESA) technique, and both names are used in the report. Many MEM wavenumber spectral patterns are demonstrated using both simulated and measured radar signal and noise data. Methods for obtaining stable MEM wavenumber spectra are discussed, broadband signal detection using the MEM prediction error transform (PET) is discussed, and Doppler radar narrowband signal detection is demonstrated using the MEM technique. It is also shown that MEM cannot be applied to randomly sampled data. The two new, stable, high-resolution spectral estimation techniques discussed in the final section are named the Wiener-King and the Fourier spectral estimation techniques. They have a similar derivation based upon the Wiener prediction filter but are otherwise quite different. Further development of the techniques and measurement of their spectral characteristics are recommended for subsequent investigation.
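MEM/MESA spectra are conventionally computed from autoregressive (AR) coefficients obtained with Burg's recursion. The sketch below is a textbook version of that route, offered only to fix ideas; it is not the report's implementation.

```python
import numpy as np

def burg_ar(x, order):
    """Burg's recursion for AR coefficients (requires order < len(x) - 1),
    the standard route to a maximum entropy spectrum."""
    x = np.asarray(x, dtype=float)
    f, b = x[1:].copy(), x[:-1].copy()     # forward/backward prediction errors
    a = np.array([1.0])
    E = np.dot(x, x) / len(x)              # prediction error power
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f, b = f + k * b, b + k * f        # old arrays used on the right side
        E *= 1.0 - k * k
        f, b = f[1:], b[:-1]               # shrink error sequences by one
    return a, E

def mem_spectrum(a, E, nfft=1024):
    """MEM power spectrum P(f) = E / |A(f)|^2 on a dense grid (0..fs/2)."""
    return E / np.abs(np.fft.rfft(a, nfft)) ** 2
```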
NASA Astrophysics Data System (ADS)
Qarib, Hossein; Adeli, Hojjat
2015-12-01
In this paper the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative three-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as multiple signal classification (MUSIC), matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on output-only system identification. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method estimates the frequencies accurately and also estimates the damping exponents. The proposed adaptive filtration method does not involve any frequency-domain manipulation; consequently, the time-domain signal is not distorted by forward and inverse frequency-domain transformations.
Classification of the Regional Ionospheric Disturbance Based on Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Terzi, Merve Begum; Arikan, Orhan; Karatay, Secil; Arikan, Feza; Gulyaeva, Tamara
2016-08-01
In this study, Total Electron Content (TEC) estimated from GPS receivers is used to model the regional and local variability that differs from global activity, along with solar and geomagnetic indices. For the automated classification of regional disturbances, a classification technique based on the Support Vector Machine (SVM), a robust machine learning technique that has found widespread use, is proposed. The performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011. By applying the developed classification technique to Global Ionospheric Map (GIM) TEC data provided by the NASA Jet Propulsion Laboratory (JPL), it is shown that SVM can be a suitable learning method to detect anomalies in TEC variations.
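A minimal version of such an SVM classifier is easy to set up with scikit-learn. The feature matrix and labels below are random placeholders, since the paper's actual TEC features and disturbance labels are not reproduced here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row of TEC-derived features per day/station (feature choice is ours,
# purely illustrative); y: 1 = disturbed, 0 = quiet, from expert labels.
rng = np.random.default_rng(0)
X = rng.random((200, 6))            # placeholder feature matrix
y = rng.integers(0, 2, 200)         # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```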
Liu, Xingguo; Niu, Jianwei; Ran, Linghua; Liu, Taijie
2017-08-01
This study aimed to develop estimation formulae for the total human body volume (BV) of adult males using anthropometric measurements based on a three-dimensional (3D) scanning technique. Noninvasive and reliable methods to predict total BV from anthropometric measurements based on a 3D scanning technique are addressed in detail. A regression analysis of BV based on four key measurements was conducted for approximately 160 adult male subjects. Eight models of total human BV show that the results predicted by the regression models were highly correlated with the actual BV (p < 0.001). Two metrics were calculated: the mean absolute difference between the actual and predicted BV (V_error) and the mean ratio between V_error and the actual BV (RV_error). The linear model based on human weight was recommended as optimal due to its simplicity and high efficiency. The proposed estimation formulae are valuable for estimating total body volume in circumstances in which traditional underwater weighing or air displacement plethysmography is not applicable or accessible.
NASA Astrophysics Data System (ADS)
Nelson, D. J.
2007-09-01
In the basic correlation process, a sequence of time-lag-indexed correlation coefficients is computed as the inner or dot product of segments of two signals. The time lag(s) for which the magnitude of the correlation coefficient sequence is maximized is the estimated relative time delay of the two signals. For discrete sampled signals, the delay estimated in this manner is quantized with the same relative accuracy as the clock used in sampling the signals. In addition, the correlation coefficients are real if the input signals are real. Many methods have been proposed to estimate signal delay to better accuracy than the sample interval of the digitizer clock, with some success. These methods include interpolation of the correlation coefficients, estimation of the signal delay from the group delay function, and beamforming techniques such as the MUSIC algorithm. For spectral estimation, techniques based on phase differentiation have been popular, but these techniques have apparently not been applied to the correlation problem. We propose a phase-based delay estimation method (PBDEM) based on the phase of the correlation function that provides a significant improvement in the accuracy of time delay estimation. In the process, the standard correlation function is first calculated. A time-lag error function is then calculated from the correlation phase and is used to interpolate the correlation function. The signal delay is shown to be accurately estimated as the zero crossing of the correlation phase near the index of the peak correlation magnitude. This process is nearly as fast as the conventional correlation function on which it is based. For real-valued signals, a simple modification is provided, which results in the same correlation accuracy as is obtained for complex-valued signals.
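A loose sketch of the idea, based on our own reading of the abstract and with invented details such as the linear phase-slope interpolation, might look like this:

```python
import numpy as np
from scipy.signal import hilbert

def phase_based_delay(x, y):
    """Phase-refined correlation delay: find the magnitude peak of the complex
    cross-correlation, then place the delay at the interpolated zero crossing
    of the correlation phase near that peak. Assumes the peak is not at the
    lag boundary and that the signals have a nonzero centroid frequency."""
    zx, zy = hilbert(x), hilbert(y)            # complex (analytic) signals
    n = len(zx)
    r = np.fft.fftshift(np.fft.ifft(np.fft.fft(zx) * np.conj(np.fft.fft(zy))))
    lags = np.arange(n) - n // 2
    k = int(np.argmax(np.abs(r)))              # integer-lag peak
    ph = np.unwrap(np.angle(r[k - 1:k + 2]))   # phase around the peak
    slope = 0.5 * (ph[2] - ph[0])              # d(phase)/d(lag) at the peak
    return lags[k] - ph[1] / slope             # phase zero crossing (samples)
```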
Entropy-based adaptive attitude estimation
NASA Astrophysics Data System (ADS)
Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.
2018-03-01
Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteeing positive definiteness of the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address the abovementioned drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms, the extended Kalman filter and the cubature Kalman filter, for attitude estimation of a low Earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by means of comprehensive sensitivity analysis on the system and environmental parameters, using extensive independent Monte Carlo simulations.
NASA Astrophysics Data System (ADS)
Malinowski, Arkadiusz; Takeuchi, Takuya; Chen, Shang; Suzuki, Toshiya; Ishikawa, Kenji; Sekine, Makoto; Hori, Masaru; Lukasiak, Lidia; Jakubowski, Andrzej
2013-07-01
This paper describes a new, fast, and case-independent technique for sticking coefficient (SC) estimation based on the pallet for plasma evaluation (PAPE) structure and numerical analysis. Our approach does not require a complicated structure, apparatus, or time-consuming measurements, but offers high reliability of data and high flexibility. Thermal analysis is also possible. This technique has been successfully applied to the estimation of the very low SC of hydrogen radicals on chemically amplified ArF 193 nm photoresist (the main goal of this study). The upper bound of our technique has been determined by investigating the SC of fluorine radicals on polysilicon at elevated temperature. Sources of estimation error and ways of reducing it are also discussed. The results of this study give insight into the process kinetics; not only are they helpful for better process understanding, but they may also serve as parameters in the development of a phenomenological model for predictive modelling of etching for ultimate CMOS topography simulation.
Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.
2016-01-01
Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use these data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects performing eight different physical activities while wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results, and the choice of machine learning technique is also important. On the energy cost estimation task, however, the choices of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
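A minimal version of the statistics-based feature family plus one common classifier type might look as follows; the exact features, windowing, and learners compared in the paper differ from this sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def stat_features(window):
    """Simple summary statistics over one accelerometer window (one common
    feature family; purely illustrative choices)."""
    return [window.mean(), window.std(), window.min(), window.max(),
            np.percentile(window, 25), np.percentile(window, 75)]

def train_activity_model(windows, labels):
    """windows: list of 1-D acceleration arrays; labels: activity per window."""
    X = np.array([stat_features(w) for w in windows])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```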
Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I
2012-12-21
A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two-week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days, based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new, parameter-estimation-based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the bio-physics of thermogenic plants.
Retention of denture bases fabricated by three different processing techniques – An in vivo study
Chalapathi Kumar, V. H.; Surapaneni, Hemchand; Ravikiran, V.; Chandra, B. Sarat; Balusu, Srilatha; Reddy, V. Naveen
2016-01-01
Aim: Distortion due to polymerization shrinkage compromises retention. The aim was to evaluate the retention of denture bases fabricated by conventional, anchorized, and injection molding polymerization techniques. Materials and Methods: Ten completely edentulous patients were selected, impressions were made, and the master cast obtained was duplicated to fabricate denture bases by the three polymerization techniques. A loop was attached to the finished denture bases to estimate the force required to dislodge them with a retention apparatus. Readings were subjected to the nonparametric Friedman two-way analysis of variance, followed by Bonferroni-corrected Wilcoxon matched-pairs signed-ranks tests. Results: Denture bases fabricated by the injection molding (3740 g) and anchorized (2913 g) techniques recorded greater retention values than the conventional technique (2468 g). A significant difference was seen between these techniques. Conclusions: Denture bases obtained by the injection molding polymerization technique exhibited maximum retention, followed by the anchorized technique; the least retention was seen with the conventional molding technique. PMID:27382542
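The reported analysis pipeline maps directly onto scipy; the sketch below uses placeholder retention values, not the study's data.

```python
from scipy.stats import friedmanchisquare, wilcoxon

# retention force (g) per patient for the three denture-base techniques
# (placeholder numbers, five illustrative patients)
conventional = [2400, 2500, 2468, 2300, 2610]
anchorized   = [2850, 2990, 2913, 2780, 3050]
injection    = [3700, 3820, 3740, 3650, 3905]

stat, p = friedmanchisquare(conventional, anchorized, injection)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# post hoc pairwise comparison, tested at alpha/3 under Bonferroni correction
w_stat, w_p = wilcoxon(injection, conventional)
print(f"injection vs conventional: W = {w_stat:.1f}, p = {w_p:.4f}")
```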
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By reinterpreting the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix contains the effect of all error sources, known or not, and may be calculated as a side computation for each unique batch solution. Results based on the proposed technique are presented for a simple two-observer, measurement-error-only problem.
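For contrast with the report's reinterpretation, the conventional residual-based rescaling of a weighted batch least squares covariance looks like this. It is the standard variance-of-unit-weight correction, shown only to fix ideas; it is not the report's empirical matrix.

```python
import numpy as np

def batch_wls(H, y, W):
    """Weighted batch least squares with a residual-based covariance rescale.
    H: design matrix, y: measurements, W: measurement weight matrix."""
    P = np.linalg.inv(H.T @ W @ H)            # formal (theoretical) covariance
    x_hat = P @ H.T @ W @ y
    resid = y - H @ x_hat                     # post-fit residuals
    s2 = (resid @ W @ resid) / (len(y) - H.shape[1])  # unit-weight variance
    return x_hat, s2 * P                      # empirically rescaled covariance
```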
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
Improved Pulse Wave Velocity Estimation Using an Arterial Tube-Load Model
Gao, Mingwu; Zhang, Guanqun; Olivier, N. Bari; Mukkamala, Ramakrishna
2015-01-01
Pulse wave velocity (PWV) is the most important index of arterial stiffness. It is conventionally estimated by non-invasively measuring central and peripheral blood pressure (BP) and/or velocity (BV) waveforms and then detecting the foot-to-foot time delay between the waveforms, wherein wave reflection is presumed absent. We developed techniques for improved estimation of PWV from the same waveforms. The techniques effectively estimate PWV from the entire waveforms, rather than just their feet, by mathematically eliminating the reflected wave via an arterial tube-load model. In this way, the techniques may be more robust to artifact while revealing the true PWV in the absence of wave reflection. We applied the techniques to estimate aortic PWV from simultaneously and sequentially measured central and peripheral BP waveforms and from simultaneously measured central BV and peripheral BP waveforms from 17 anesthetized animals during diverse interventions that perturbed BP widely. Since BP is the major acute determinant of aortic PWV, especially under anesthesia wherein vasomotor tone changes are minimal, we evaluated the techniques in terms of the ability of their PWV estimates to track the acute BP changes in each subject. Overall, the PWV estimates of the techniques tracked the BP changes better than those of the conventional technique (e.g., diastolic BP root-mean-squared errors of 3.4 vs. 5.2 mmHg for the simultaneous BP waveforms and 7.0 vs. 12.2 mmHg for the BV and BP waveforms; p < 0.02). With further testing, the arterial tube-load model-based PWV estimation techniques may afford more accurate arterial stiffness monitoring in hypertensive and other patients. PMID:24263016
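The conventional foot-to-foot baseline that these techniques improve upon can be sketched crudely as follows; the foot detector here is a simple heuristic of ours (assuming one beat per record), not the authors' algorithm.

```python
import numpy as np

def foot_index(wave, upstroke_guard=5):
    """Crude foot detector: the minimum preceding the steepest systolic
    upstroke. Real detectors (e.g., intersecting tangents) are more robust."""
    k_up = int(np.argmax(np.gradient(wave)))
    return int(np.argmin(wave[:max(k_up, upstroke_guard)]))

def foot_to_foot_pwv(central, peripheral, fs, path_length_m):
    """PWV (m/s) from the foot-to-foot transit time between two waveforms."""
    dt = (foot_index(peripheral) - foot_index(central)) / fs
    return path_length_m / dt
```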
Mann, Michael P.; Rizzardo, Jule; Satkowski, Richard
2004-01-01
Accurate streamflow statistics are essential to water resource agencies involved in both science and decision-making. When long-term streamflow data are lacking at a site, estimation techniques are often employed to generate streamflow statistics. However, procedures for accurately estimating streamflow statistics often are lacking. When estimation procedures are developed, they often are not evaluated properly before being applied. Use of unevaluated or underevaluated flow-statistic estimation techniques can result in improper water-resources decision-making. The California State Water Resources Control Board (SWRCB) uses two key techniques, a modified rational equation and drainage basin area-ratio transfer, to estimate streamflow statistics at ungaged locations. These techniques have been implemented to varying degrees, but have not been formally evaluated. For estimating peak flows at the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals, the SWRCB uses the U.S. Geological Survey's (USGS) regional peak-flow equations. In this study, done cooperatively by the USGS and SWRCB, the SWRCB estimated several flow statistics at 40 USGS streamflow gaging stations in the north coast region of California. The SWRCB estimates were made without reference to USGS flow data. The USGS used the streamflow data provided by the 40 stations to generate flow statistics that could be compared with SWRCB estimates for accuracy. While some SWRCB estimates compared favorably with USGS statistics, results were subject to varying degrees of error over the region. Flow-based estimation techniques generally performed better than rain-based methods, especially for estimation of December 15 to March 31 mean daily flows. The USGS peak-flow equations also performed well, but tended to underestimate peak flows. The USGS equations performed within reported error bounds, but will require updating in the future as peak-flow data sets grow larger. Little correlation was discovered between estimation errors and geographic locations or various basin characteristics. However, for 25-percentile year mean-daily-flow estimates for December 15 to March 31, the greatest estimation errors were at east San Francisco Bay area stations with mean annual precipitation less than or equal to 30 inches, and estimated 2-year/24-hour rainfall intensity less than 3 inches.
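The drainage-basin area-ratio transfer evaluated here is a one-line calculation; a sketch with an illustrative exponent follows (the exponent is region-specific and is our placeholder, not the SWRCB's value).

```python
def area_ratio_estimate(q_gaged, area_gaged, area_ungaged, b=1.0):
    """Transfer a flow statistic from a gaged to an ungaged site by the ratio
    of drainage areas raised to an exponent b (often near 1)."""
    return q_gaged * (area_ungaged / area_gaged) ** b

# e.g. a 100 cfs statistic at a 50 mi^2 gage, transferred to a 35 mi^2 site:
# area_ratio_estimate(100.0, 50.0, 35.0) -> 70.0 cfs
```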
Model-based tomographic reconstruction
Chambers, David H; Lehman, Sean K; Goodman, Dennis M
2012-06-26
A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
High-resolution bottom-loss estimation using the ambient-noise vertical coherence function.
Muzi, Lanfranco; Siderius, Martin; Quijano, Jorge E; Dosso, Stan E
2015-01-01
The seabed reflection loss ("bottom loss" for short) is an important quantity for predicting transmission loss in the ocean. A recent passive technique for estimating the bottom loss as a function of frequency and grazing angle exploits marine ambient noise (originating at the surface from breaking waves, wind, and rain) as an acoustic source. Conventional beamforming of the noise field at a vertical line array of hydrophones is a fundamental step in this technique, and the beamformer resolution in grazing angle affects the quality of the estimated bottom loss. Implementation of this technique with short arrays can be hindered by their inherently poor angular resolution. This paper presents a derivation of the bottom reflection coefficient from the ambient-noise spatial coherence function, and a technique based on this derivation for obtaining higher angular resolution bottom-loss estimates. The technique, which exploits the (approximate) spatial stationarity of the ambient-noise spatial coherence function, is demonstrated on both simulated and experimental data.
Ariyama, Kaoru; Kadokura, Masashi; Suzuki, Tadanao
2008-01-01
Techniques to determine the geographic origin of foods have been developed for various agricultural and fishery products, based on a variety of principles. Some of these techniques are already in use for checking the authenticity of labeling. Many are based on multielement analysis and chemometrics. We have developed such a technique to determine the geographic origin of onions (Allium cepa L.). This technique, which determines whether an onion is from outside Japan, is designed for onions labeled as having a geographic origin of Hokkaido, Hyogo, or Saga, the main onion production areas in Japan. However, estimates of the discrimination errors of this technique have been limited to those of the discrimination models and do not include analytical errors. Interlaboratory studies were conducted to estimate the analytical errors of the technique. Four collaborators each determined 11 elements (Na, Mg, P, Mn, Zn, Rb, Sr, Mo, Cd, Cs, and Ba) in 4 test materials of fresh and dried onions. Discrimination errors in this technique were estimated by summing (1) individual differences within lots, (2) variations between lots from the same production area, and (3) analytical errors. The discrimination errors for onions from Hokkaido, Hyogo, and Saga were estimated to be 2.3, 9.5, and 8.0%, respectively. Those for onions from abroad in determinations targeting Hokkaido, Hyogo, and Saga were estimated to be 28.2, 21.6, and 21.9%, respectively.
Lorenz, David L.; Sanocki, Chris A.; Kocian, Matthew J.
2010-01-01
Knowledge of the peak flow of floods of a given recurrence interval is essential for regulation and planning of water resources and for design of bridges, culverts, and dams along Minnesota's rivers and streams. Statistical techniques are needed to estimate peak flow at ungaged sites because long-term streamflow records are available at relatively few places. Because of the need to have up-to-date peak-flow frequency information in order to estimate peak flows at ungaged sites, the U.S. Geological Survey (USGS) conducted a peak-flow frequency study in cooperation with the Minnesota Department of Transportation and the Minnesota Pollution Control Agency. Estimates of peak-flow magnitudes for 1.5-, 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals are presented for 330 streamflow-gaging stations in Minnesota and adjacent areas in Iowa and South Dakota based on data through water year 2005. The peak-flow frequency information was subsequently used in regression analyses to develop equations relating peak flows for selected recurrence intervals to various basin and climatic characteristics. Two statistically derived techniques, regional regression equations and region of influence regression, can be used to estimate peak flow on ungaged streams smaller than 3,000 square miles in Minnesota. Regional regression equations were developed for selected recurrence intervals in each of six regions in Minnesota: A (northwestern), B (north central and east central), C (northeastern), D (west central and south central), E (southwestern), and F (southeastern). The regression equations can be used to estimate peak flows at ungaged sites. The region of influence regression technique dynamically selects streamflow-gaging stations with characteristics similar to a site of interest. Thus, the region of influence regression technique allows use of a potentially unique set of gaging stations for estimating peak flow at each site of interest. Two methods of selecting streamflow-gaging stations, similarity and proximity, can be used for the region of influence regression technique. The regional regression equation technique is the preferred technique for estimating peak flow in all six regions for ungaged sites. The region of influence regression technique is not appropriate for regions C, E, and F because the interrelations of some characteristics of those regions do not agree with the interrelations throughout the rest of the State. Both the similarity and proximity methods for the region of influence technique can be used in the other regions (A, B, and D) to provide additional estimates of peak flow. The peak-flow-frequency estimates and basin characteristics for selected streamflow-gaging stations and regional peak-flow regression equations are included in this report.
Comparison of ionospheric plasma drifts obtained by different techniques
NASA Astrophysics Data System (ADS)
Kouba, Daniel; Arikan, Feza; Arikan, Orhan; Toker, Cenk; Mosna, Zbysek; Gok, Gokhan; Rejfek, Lubos; Ari, Gizem
2016-07-01
The ionospheric observatory in Pruhonice (Czech Republic, 50N, 14.9E) provides regular ionospheric sounding using a Digisonde DPS-4D. This paper focuses on F-region vertical drift data. The vertical component of the drift velocity vector can be estimated by several methods. The Digisonde DPS-4D allows sounding in drift mode, with the drift velocity vector as a direct output; the Digisonde located in Pruhonice provides this direct drift measurement routinely once per 15 minutes. However, other techniques can also be found in the literature; for example, indirect estimation based on the temporal evolution of measured ionospheric characteristics is often used to calculate the vertical drift component. The vertical velocity is then estimated from the change of characteristics scaled from the classical quarter-hour ionograms. In the present paper, the direct drift measurement is compared with a technique based on measuring the virtual height at fixed frequency from the F-layer trace on the ionogram, and with techniques based on the variation of h'F and hmF. This comparison shows the possibility of using different methods for calculating the vertical drift velocity and their relationship to the direct measurement used by the Digisonde. This study is supported by the Joint TUBITAK 114E092 and AS CR 14/001 projects.
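The indirect estimate described here, a time derivative of scaled virtual heights, is straightforward to sketch; the unit conventions and the use of np.gradient are our choices, not the paper's code.

```python
import numpy as np

def vertical_drift_ms(virtual_height_km, time_min):
    """Indirect vertical drift: time derivative of the fixed-frequency virtual
    height scaled from successive ionograms (the direct Digisonde drift
    measurement works differently)."""
    h = np.asarray(virtual_height_km, dtype=float) * 1e3   # km -> m
    t = np.asarray(time_min, dtype=float) * 60.0           # min -> s
    return np.gradient(h, t)                               # m/s at each sample
```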
Oelze, Michael L.; Mamou, Jonathan
2017-01-01
Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging techniques can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient, estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter and the effective acoustic concentration of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and pre-clinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy. PMID:26761606
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
De Tobel, J; Phlypo, I; Fieuws, S; Politis, C; Verstraete, K L; Thevissen, P W
2017-12-01
The development of third molars can be evaluated with medical imaging to estimate age in subadults. The appearance of third molars on magnetic resonance imaging (MRI) differs greatly from that on radiographs. Therefore a specific staging technique is necessary to classify third molar development on MRI and to apply it for age estimation. To develop a specific staging technique to register third molar development on MRI and to evaluate its performance for age estimation in subadults. Using 3T MRI in three planes, all third molars were evaluated in 309 healthy Caucasian participants from 14 to 26 years old. According to the appearance of the developing third molars on MRI, descriptive criteria and schematic representations were established to define a specific staging technique. Two observers, with different levels of experience, staged all third molars independently with the developed technique. Intra- and inter-observer agreement were calculated. The data were imported in a Bayesian model for age estimation as described by Fieuws et al. (2016). This approach adequately handles correlation between age indicators and missing age indicators. It was used to calculate a point estimate and a prediction interval of the estimated age. Observed age minus predicted age was calculated, reflecting the error of the estimate. One-hundred and sixty-six third molars were agenetic. Five percent (51/1096) of upper third molars and 7% (70/1044) of lower third molars were not assessable. Kappa for inter-observer agreement ranged from 0.76 to 0.80. For intra-observer agreement kappa ranged from 0.80 to 0.89. However, two stage differences between observers or between staging sessions occurred in up to 2.2% (20/899) of assessments, probably due to a learning effect. Using the Bayesian model for age estimation, a mean absolute error of 2.0 years in females and 1.7 years in males was obtained. Root mean squared error equalled 2.38 years and 2.06 years respectively. The performance to discern minors from adults was better for males than for females, with specificities of 96% and 73% respectively. Age estimations based on the proposed staging method for third molars on MRI showed comparable reproducibility and performance as the established methods based on radiographs.
A new data assimilation engine for physics-based thermospheric density models
NASA Astrophysics Data System (ADS)
Sutton, E. K.; Henney, C. J.; Hock-Mysliwiec, R.
2017-12-01
The successful assimilation of data into physics-based coupled Ionosphere-Thermosphere models requires rethinking the filtering techniques currently employed in fields such as tropospheric weather modeling. In the realm of Ionospheric-Thermospheric modeling, the estimation of system drivers is a critical component of any reliable data assimilation technique. How to best estimate and apply these drivers, however, remains an open question and active area of research. The recently developed method of Iterative Re-Initialization, Driver Estimation and Assimilation (IRIDEA) accounts for the driver/response time-delay characteristics of the Ionosphere-Thermosphere system relative to satellite accelerometer observations. Results from two near year-long simulations are shown: (1) from a period of elevated solar and geomagnetic activity during 2003, and (2) from a solar minimum period during 2007. This talk will highlight the challenges and successes of implementing a technique suited for both solar min and max, as well as expectations for improving neutral density forecasts.
Quantum-classical boundary for precision optical phase estimation
NASA Astrophysics Data System (ADS)
Birchall, Patrick M.; O'Brien, Jeremy L.; Matthews, Jonathan C. F.; Cable, Hugo
2017-12-01
Understanding the fundamental limits on the precision to which an optical phase can be estimated is of key interest for many investigative techniques utilized across science and technology. We study the estimation of a fixed optical phase shift due to a sample which has an associated optical loss, and compare phase estimation strategies using classical and nonclassical probe states. These comparisons are based on the attainable (quantum) Fisher information calculated per number of photons absorbed or scattered by the sample throughout the sensing process. We find that for a given number of incident photons upon the unknown phase, nonclassical techniques in principle provide less than a 20 % reduction in root-mean-square error (RMSE) in comparison with ideal classical techniques in multipass optical setups. Using classical techniques in a different optical setup that we analyze, which incorporates additional stages of interference during the sensing process, the achievable reduction in RMSE afforded by nonclassical techniques falls to only ≃4 % . We explain how these conclusions change when nonclassical techniques are compared to classical probe states in nonideal multipass optical setups, with additional photon losses due to the measurement apparatus.
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1990-01-01
The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.
Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.
Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J
2009-11-01
Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
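In code, applying an RBN reduces to evaluating the normative regression and standardizing the residual. The coefficients below are hypothetical placeholders, not the published equations.

```python
def rbn_z_score(observed, age, edu_years, est_iq, coeffs, rmse):
    """Regression-based norm: compare an observed test score with the score
    expected from a normative regression; returns a z relative to healthy
    adults. coeffs = (intercept, b_age, b_edu, b_iq), all hypothetical."""
    b0, b_age, b_edu, b_iq = coeffs
    expected = b0 + b_age * age + b_edu * edu_years + b_iq * est_iq
    return (observed - expected) / rmse

# usage with made-up coefficients:
# z = rbn_z_score(42, age=68, edu_years=12, est_iq=105,
#                 coeffs=(10.0, -0.15, 0.8, 0.2), rmse=6.5)
```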
Precision phase estimation based on weak-value amplification
NASA Astrophysics Data System (ADS)
Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei
2017-02-01
In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise with respect to the intensity difference signal induced by the phase delay when the post-selection procedure comes into play. The phase measurement precision of this method is proportional to the weak value of a polarization operator in the experimental range. Our results compare well with wide-spectrum-light weak-measurement phase estimation and outperform the standard homodyne phase detection technique.
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of those estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data.
Near-Field Source Localization by Using Focusing Technique
NASA Astrophysics Data System (ADS)
He, Hongyang; Wang, Yide; Saillard, Joseph
2008-12-01
We discuss two fast algorithms to localize multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure that reduces the computation cost. We then present a focusing-based method which does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with well-studied far-field methods. With the estimated bearing, the range estimate of each source is then obtained by using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computation cost nor high-order statistics.
Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian based dispersion models and complex grid-based Eulerian photochemical transport models. These approache...
Application of the Combination Approach for Estimating Evapotranspiration in Puerto Rico
NASA Technical Reports Server (NTRS)
Harmsen, Eric; Luvall, Jeffrey; Gonzalez, Jorge
2005-01-01
The ability to estimate short-term fluxes of water vapor from the land surface is important for validating latent heat flux estimates from high resolution remote sensing techniques. A new, relatively inexpensive method is presented for estimating the ground-based values of the surface latent heat flux or evapotranspiration.
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared with the conventional approach to tuner selection. Simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
Estimates of air emissions from asphalt storage tanks and truck loading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trumbore, D.C.
1999-12-31
Title V of the 1990 Clean Air Act requires the accurate estimation of emissions from all US manufacturing processes, and places the burden of proof for that estimate on the process owner. This paper is published as a tool to assist in the estimation of air emissions from hot asphalt storage tanks and asphalt truck loading operations. Data are presented on asphalt vapor pressure, vapor molecular weight, and the emission split between volatile organic compounds and particulate emissions that can be used with AP-42 calculation techniques to estimate air emissions from asphalt storage tanks and truck loading operations. Since current AP-42 techniques are not valid in asphalt tanks with active fume removal, a different technique for estimating air emissions in those tanks, based on direct measurement of vapor space combustible gas content, is proposed. Likewise, since AP-42 does not address carbon monoxide or hydrogen sulfide emissions that are known to be present in asphalt operations, this paper proposes techniques for estimating those emissions. Finally, data are presented on the effectiveness of fiber bed filters in reducing air emissions in asphalt operations.
Deflection-Based Aircraft Structural Loads Estimation with Comparison to Flight
NASA Technical Reports Server (NTRS)
Lizotte, Andrew M.; Lokos, William A.
2005-01-01
Traditional techniques in structural load measurement entail the correlation of a known load with strain-gage output from the individual components of a structure or machine. The use of strain gages has proved successful and is considered the standard approach for load measurement. However, remotely measuring aerodynamic loads using deflection measurement systems to determine aeroelastic deformation as a substitute for strain gages may yield lower testing costs while improving aircraft performance through reduced instrumentation weight. This technique was examined using a reliable strain and structural deformation measurement system. The objective of this study was to explore the utility of deflection-based load estimation, using the active aeroelastic wing F/A-18 aircraft. Calibration data from ground tests performed on the aircraft were used to derive left wing-root and wing-fold bending-moment and torque load equations based on strain gages; however, for this study, point deflections were used to derive deflection-based load equations. Comparisons between the strain-gage and deflection-based methods are presented. Flight data from the phase-1 active aeroelastic wing flight program were used to validate the deflection-based load estimation method. Flight validation revealed a strong bending-moment correlation and a slightly weaker torque correlation. Development of current techniques and future studies are discussed.
Deflection-Based Structural Loads Estimation From the Active Aeroelastic Wing F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Lizotte, Andrew M.; Lokos, William A.
2005-01-01
Traditional techniques in structural load measurement entail the correlation of a known load with strain-gage output from the individual components of a structure or machine. The use of strain gages has proved successful and is considered the standard approach for load measurement. However, remotely measuring aerodynamic loads using deflection measurement systems to determine aeroelastic deformation as a substitute for strain gages may yield lower testing costs while improving aircraft performance through reduced instrumentation weight. This technique was examined using a reliable strain and structural deformation measurement system. The objective of this study was to explore the utility of deflection-based load estimation, using the active aeroelastic wing F/A-18 aircraft. Calibration data from ground tests performed on the aircraft were used to derive left wing-root and wing-fold bending-moment and torque load equations based on strain gages; however, for this study, point deflections were used to derive deflection-based load equations. Comparisons between the strain-gage and deflection-based methods are presented. Flight data from the phase-1 active aeroelastic wing flight program were used to validate the deflection-based load estimation method. Flight validation revealed a strong bending-moment correlation and a slightly weaker torque correlation. Development of current techniques and future studies are discussed.
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise from non-ideal instrument and sample properties. In this study, to improve the quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on a singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and a stability analysis of the estimator is given. Theoretical analysis and simulation results confirm that the derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated in an analysis of beer spectra. The derivative spectra of the beer and marzipan datasets are used to build calibration models using partial least squares (PLS) modeling. The results show that PLS based on the new estimator can achieve better performance than the Savitzky-Golay algorithm and can serve as an alternative choice for quantitative analytical applications.
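For orientation, the Savitzky-Golay baseline against which the SPSE is evaluated is a one-liner in scipy; the SPSE itself is not reproduced here, and the window and polynomial settings below are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter

wavelength_step = 2.0                          # nm between samples (assumed)
spectrum = np.random.rand(512)                 # placeholder noisy spectrum
# smoothed first derivative dA/d(lambda) via local cubic polynomial fits
d1 = savgol_filter(spectrum, window_length=11, polyorder=3,
                   deriv=1, delta=wavelength_step)
```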
NASA Technical Reports Server (NTRS)
Scaife, Bradley James
1999-01-01
In any satellite communication, the Doppler shift associated with the satellite's position and velocity must be calculated in order to determine the carrier frequency. If the satellite state vector is unknown then some estimate must be formed of the Doppler-shifted carrier frequency. One elementary technique is to examine the signal spectrum and base the estimate on the dominant spectral component. If, however, the carrier is spread (as in most satellite communications) this technique may fail unless the chip rate-to-data rate ratio (processing gain) associated with the carrier is small. In this case, there may be enough spectral energy to allow peak detection against a noise background. In this thesis, we present a method to estimate the frequency (without knowledge of the Doppler shift) of a spread-spectrum carrier assuming a small processing gain and binary-phase shift keying (BPSK) modulation. Our method relies on an averaged discrete Fourier transform along with peak detection on spectral match filtered data. We provide theory and simulation results indicating the accuracy of this method. In addition, we will describe an all-digital hardware design based around a Motorola DSP56303 and high-speed A/D which implements this technique in real-time. The hardware design is to be used in NMSU's implementation of NASA's demand assignment, multiple access (DAMA) service.
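A minimal sketch of the averaged-DFT stage follows; it omits the spectral matched filtering and BPSK handling described in the thesis, and the sampling rate, segment length, and test tone are illustrative assumptions.

```python
import numpy as np

def averaged_dft_peak(x, fs, seg_len=1024):
    """Average periodograms over consecutive segments, then return the
    frequency of the dominant spectral peak."""
    n_seg = len(x) // seg_len
    segs = x[:n_seg * seg_len].reshape(n_seg, seg_len)
    psd = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)
    return np.fft.rfftfreq(seg_len, d=1.0 / fs)[np.argmax(psd)]

# A weak tone buried in noise stands in for the despread carrier.
fs = 48_000.0
t = np.arange(2 ** 16) / fs
x = np.cos(2 * np.pi * 5_000.0 * t) + np.random.default_rng(1).normal(0.0, 2.0, t.size)
print(averaged_dft_peak(x, fs))   # close to 5000 Hz
```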
Estimator banks: a new tool for direction-of-arrival estimation
NASA Astrophysics Data System (ADS)
Gershman, Alex B.; Boehme, Johann F.
1997-10-01
A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using the so-called estimator bank containing multiple 'parallel' underlying DOA estimators which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, with threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
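The estimator-bank idea can be illustrated under simplifying assumptions: in this sketch the pseudorandom resampling of the MUSIC spatial spectrum is replaced by random Hermitian perturbations of the sample covariance, a uniform linear array is assumed, and the beamspace root implementation is not reproduced.

```python
import numpy as np

def music_spectrum(R, n_src, grid_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array (d in wavelengths)."""
    n = R.shape[0]
    _, V = np.linalg.eigh(R)
    En = V[:, : n - n_src]                         # noise subspace (smallest eigenvalues)
    k = np.arange(n)[:, None]
    A = np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(grid_deg)))
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

def estimator_bank_doa(R, n_src, grid_deg, sectors, n_bank=20, eps=0.05, seed=0):
    """Keep only 'successful' randomized estimators (peaks inside the
    preliminary sectors) and average their DOA estimates."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_bank):
        E = rng.standard_normal(R.shape) + 1j * rng.standard_normal(R.shape)
        Rp = R + eps * np.linalg.norm(R) * (E + E.conj().T) / 2
        peaks = grid_deg[np.argsort(music_spectrum(Rp, n_src, grid_deg))[-n_src:]]
        if all(any(lo <= p <= hi for lo, hi in sectors) for p in peaks):
            kept.append(np.sort(peaks))
    return np.mean(kept, axis=0) if kept else None
```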
Vibration-based angular speed estimation for multi-stage wind turbine gearboxes
NASA Astrophysics Data System (ADS)
Peeters, Cédric; Leclère, Quentin; Antoni, Jérôme; Guillaume, Patrick; Helsen, Jan
2017-05-01
Most processing tools based on frequency analysis of vibration signals are only applicable for stationary speed regimes. Speed variation causes the spectral content to smear, which encumbers most conventional fault detection techniques. To solve the problem of non-stationary speed conditions, the instantaneous angular speed (IAS) is estimated. Wind turbine gearboxes, however, are typically multi-stage gearboxes, consisting of multiple shafts rotating at different speeds. Fitting a sensor (e.g. a tachometer) to every single stage is not always feasible. As such, there is a need to estimate the IAS of every single shaft based on the vibration signals measured by the accelerometers. This paper investigates the performance of the multi-order probabilistic approach (MOPA) for IAS estimation on experimental case studies of wind turbines. This method takes into account the meshing orders of the gears present in the system and has the advantage that it is not necessary to associate harmonics a priori with a certain periodic mechanical event, which increases the robustness of the method. It is found that the MOPA has the potential to easily outperform standard band-pass filtering techniques for speed estimation. More knowledge of the gearbox kinematics is beneficial for the MOPA performance, but even with very little knowledge about the meshing orders, the MOPA still performs sufficiently well to compete with the standard speed estimation techniques. This observation is proven on two different data sets, both originating from vibration measurements on the gearbox housing of a wind turbine.
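A MOPA-flavoured sketch under stated assumptions: each spectrogram frame scores a candidate shaft speed by the spectral amplitude at its meshing-order frequencies. The published method propagates a smooth probability density over time, which this per-frame maximization does not attempt; the orders and STFT settings are placeholders.

```python
import numpy as np
from scipy.signal import stft

def order_based_speed(x, fs, mesh_orders, speed_grid_hz, nperseg=4096):
    """Frame-by-frame shaft-speed estimate from meshing-order energy."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag, df = np.abs(Z), f[1] - f[0]
    speeds = np.empty(t.size)
    for j in range(t.size):
        ll = np.zeros(speed_grid_hz.size)
        for m in mesh_orders:                     # sum log-amplitudes at each order
            idx = np.clip(np.round(m * speed_grid_hz / df).astype(int), 0, f.size - 1)
            ll += np.log(mag[idx, j] + 1e-12)
        speeds[j] = speed_grid_hz[np.argmax(ll)]
    return t, speeds
```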
NASA Technical Reports Server (NTRS)
Berg, Wesley; Avery, Susan K.
1995-01-01
Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the Special Sensor Microwave/Imager (SSM/I) for the period from July 1987 through December 1990. These monthly estimates are calibrated using data from a network of Pacific atoll rain gauges in order to account for systematic biases and are then compared with several visible and infrared satellite-based rainfall estimation techniques for the purpose of evaluating the performance of the microwave-based estimates. Although several key differences among the various techniques are observed, the general features of the monthly rainfall time series agree very well. Finally, the significant error sources contributing to uncertainties in the monthly estimates are examined and an estimate of the total error is produced. The sampling error characteristics are investigated using data from two SSM/I sensors, and a detailed analysis of the characteristics of the diurnal cycle of rainfall over the oceans and its contribution to sampling errors in the monthly SSM/I estimates is made using geosynchronous satellite data. Based on the analysis of the sampling and other error sources, the total error was estimated to be of the order of 30 to 50% of the monthly rainfall for estimates averaged over 2.5 deg x 2.5 deg latitude/longitude boxes, with a contribution due to diurnal variability of the order of 10%.
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce the parameter uncertainty by considering soil moisture observations. However, these observations, and also the model forcings, are recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
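A bare-bones sketch of the window-smoothing idea only, not the authors' full dual state-parameter scheme: particle weights accumulate the likelihood of every observation in the window before states and parameters are resampled together.

```python
import numpy as np

def smooth_and_resample(particles, weights, loglik_window, seed=0):
    """particles: (n, d) joint state-parameter samples; loglik_window: (T, n)
    log-likelihoods of the T observations in the window for each particle."""
    lw = np.log(weights) + loglik_window.sum(axis=0)   # evidence over the whole window
    w = np.exp(lw - lw.max())
    w /= w.sum()
    n = len(w)
    idx = np.random.default_rng(seed).choice(n, size=n, p=w)
    return particles[idx], np.full(n, 1.0 / n)         # resampled, uniform weights
```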
Correcting for deformation in skin-based marker systems.
Alexander, E J; Andriacchi, T P
2001-03-01
A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimates by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.
Depth-estimation-enabled compound eyes
NASA Astrophysics Data System (ADS)
Lee, Woong-Bi; Lee, Heung-No
2018-04-01
Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
Estimation of shoreline position and change using airborne topographic lidar data
Stockdon, H.F.; Sallenger, A.H.; List, J.H.; Holman, R.A.
2002-01-01
A method has been developed for estimating shoreline position from airborne scanning laser data. This technique allows rapid estimation of objective, GPS-based shoreline positions over hundreds of kilometers of coast, essential for the assessment of large-scale coastal behavior. Shoreline position, defined as the cross-shore position of a vertical shoreline datum, is found by fitting a function to cross-shore profiles of laser altimetry data located in a vertical range around the datum and then evaluating the function at the specified datum. Error bars on horizontal position are directly calculated as the 95% confidence interval on the mean value based on the Student's t distribution of the errors of the regression. The technique was tested using lidar data collected with NASA's Airborne Topographic Mapper (ATM) in September 1997 on the Outer Banks of North Carolina. Estimated lidar-based shoreline position was compared to shoreline position as measured by a ground-based GPS vehicle survey system. The two methods agreed closely with a root mean square difference of 2.9 m. The mean 95% confidence interval for shoreline position was ±1.4 m. The technique has been applied to a study of shoreline change on Assateague Island, Maryland/Virginia, where three ATM data sets were used to assess the statistics of large-scale shoreline change caused by a major 'northeaster' winter storm. The accuracy of both the lidar system and the technique described provides measures of shoreline position and change that are ideal for studying storm-scale variability over large spatial scales.
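A minimal sketch of the profile-fitting step, assuming a linear regression of cross-shore position on elevation within a vertical band around the datum; the 95% confidence interval follows the Student's t construction described above.

```python
import numpy as np
from scipy import stats

def shoreline_position(x, z, datum, band=0.5):
    """Regress cross-shore position x on elevation z for lidar returns within
    +/- band (m) of the datum, evaluate the fit at the datum, and report the
    95% confidence half-width on that position."""
    m = np.abs(z - datum) <= band
    xs, zs = x[m], z[m]
    n = xs.size
    b, a = np.polyfit(zs, xs, 1)                  # x = a + b * z
    x0 = a + b * datum
    resid = xs - (a + b * zs)
    s = np.sqrt(resid @ resid / (n - 2))          # residual standard error
    se = s * np.sqrt(1.0 / n + (datum - zs.mean()) ** 2 / ((zs - zs.mean()) ** 2).sum())
    half_width = stats.t.ppf(0.975, n - 2) * se   # 95% CI half-width
    return x0, half_width
```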
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1984-01-01
Approximation ideas are discussed that can be used in parameter estimation and feedback control for Euler-Bernoulli models of elastic systems. Focusing on parameter estimation problems, ways by which one can obtain convergence results for cubic spline based schemes for hybrid models involving an elastic cantilevered beam with tip mass and base acceleration are outlined. Sample numerical findings are also presented.
NASA Technical Reports Server (NTRS)
Vanlunteren, A.; Stassen, H. G.
1973-01-01
Parameter estimation techniques are discussed with emphasis on unbiased estimates in the presence of noise. A distinction between open and closed loop systems is made. A method is given based on the application of external forcing functions consisting of a sum of sinusoids; this method is thus based on the estimation of Fourier coefficients and is applicable for models with poles and zeros in open and closed loop systems.
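Because the forcing frequencies are known, the Fourier coefficients can be estimated by ordinary least squares; this sketch assumes an arbitrary frequency list.

```python
import numpy as np

def fourier_coefficients(t, y, freqs_hz):
    """Least-squares estimates of offset plus cosine/sine coefficients
    at the known multisine forcing frequencies."""
    cols = [np.ones_like(t)]
    for f in freqs_hz:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef        # [offset, a1, b1, a2, b2, ...]
```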
Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements
NASA Astrophysics Data System (ADS)
Jakub, Thomas D.
Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. Shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing, and the limited resolution based on the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS). AIS enables more precise ocean current estimates, the option of finer resolution, and more timely estimates. In this work, a demonstration of the use of AIS to compute ocean currents is performed. A corresponding error and sensitivity analysis is performed to help identify under which conditions errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high-frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal-to-noise environment. These ship tracks were only minutes long, compared to the usual 12- to 24-hour ship tracks. The Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements combined with ship drift can provide another method of estimating ocean currents, particularly when other measurement techniques are not available.
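The core ship-drift computation reduces to the mismatch between observed and dead-reckoned displacement between two position reports; this sketch ignores leeway, wind forcing, and map projection.

```python
import numpy as np

def drift_current(p0, p1, heading_deg, speed_through_water, dt):
    """Ocean current = (observed displacement - dead-reckoned displacement)/dt
    between two AIS position reports dt seconds apart. Positions are local
    east/north coordinates in metres; speed through water in m/s."""
    hdg = np.deg2rad(heading_deg)
    dead_reckoned = speed_through_water * dt * np.array([np.sin(hdg), np.cos(hdg)])
    observed = np.asarray(p1, float) - np.asarray(p0, float)
    return (observed - dead_reckoned) / dt       # east/north current components (m/s)
```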
Wijeysundera, Harindra C; Wang, Xuesong; Tomlinson, George; Ko, Dennis T; Krahn, Murray D
2012-01-01
Objective: The aim of this study was to review statistical techniques for estimating the mean population cost using health care cost data that, because of the inability to achieve complete follow-up until death, are right censored. The target audience is health service researchers without an advanced statistical background. Methods: Data were sourced from longitudinal heart failure costs from Ontario, Canada, and administrative databases were used for estimating costs. The dataset consisted of 43,888 patients, with follow-up periods ranging from 1 to 1538 days (mean 576 days). The study was designed so that mean health care costs over 1080 days of follow-up were calculated using naïve estimators such as full-sample and uncensored case estimators. Reweighted estimators – specifically, the inverse probability weighted estimator – were calculated, as was phase-based costing. Costs were adjusted to 2008 Canadian dollars using the Bank of Canada consumer price index (http://www.bankofcanada.ca/en/cpi.html). Results: Over the restricted follow-up of 1080 days, 32% of patients were censored. The full-sample estimator was found to underestimate mean cost ($30,420) compared with the reweighted estimators ($36,490). The phase-based costing estimate of $37,237 was similar to that of the simple reweighted estimator. Conclusion: The authors recommend against the use of full-sample or uncensored case estimators when censored data are present. In the presence of heavy censoring, phase-based costing is an attractive alternative approach. PMID:22719214
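A bare-bones sketch of the simple inverse probability weighted estimator, with the censoring distribution estimated by Kaplan-Meier; a production analysis (and the partitioned or phase-based variants) handles ties, time partitioning, and variance estimation with far more care.

```python
import numpy as np

def ipw_mean_cost(cost, time, death, horizon):
    """cost: total observed cost; time: follow-up days; death: True if
    follow-up ended in death. Subjects followed to death or to the horizon
    are 'complete' and weighted by 1/K(T), where K is the Kaplan-Meier
    survival function of the censoring process."""
    t = np.minimum(time, horizon)
    complete = death | (time >= horizon)
    cens = (~death) & (time < horizon)          # censoring events before the horizon
    K = np.ones(len(t))
    surv = 1.0
    for s in np.unique(t[cens]):
        at_risk = np.sum(t >= s)
        surv *= 1.0 - np.sum((t == s) & cens) / at_risk
        K[t > s] = surv                         # K just after each censoring time
    w = complete / np.clip(K, 1e-6, None)
    return np.sum(w * cost) / np.sum(w)
```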
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
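The steady-state estimation error at the heart of the tuner selection can be computed from the discrete algebraic Riccati equation; this generic sketch uses placeholder system matrices, not the engine model or the paper's optimal tuner construction.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def steady_state_kf(A, C, Q, R):
    """Steady-state a-priori error covariance P and Kalman gain K for
    x[k+1] = A x[k] + w,  y[k] = C x[k] + v; candidate tuner selections
    can be ranked by the trace of P over the parameters of interest."""
    P = solve_discrete_are(A.T, C.T, Q, R)
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    return P, K
```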
Detailed gravity anomalies from GEOS-3 satellite altimetry data
NASA Technical Reports Server (NTRS)
Gopalapillai, G. S.; Mourad, A. G.
1978-01-01
A technique for deriving mean gravity anomalies from dense altimetry data was developed. A combination of both deterministic and statistical techniques was used. The basic mathematical model was based on the Stokes' equation, which describes the analytical relationship between mean gravity anomalies and geoid undulations at a point; this undulation is a linear function of the altimetry data at that point. The overdetermined problem resulting from the excessive altimetry data available was solved using least-squares principles. These principles enable the simultaneous estimation of the associated standard deviations reflecting the internal consistency, based on the accuracy estimates provided for the altimetry data as well as for the terrestrial anomaly data. Several test computations were made of the anomalies and their accuracy estimates using GEOS-3 data.
Jacquemin, Bénédicte; Lepeule, Johanna; Boudier, Anne; Arnould, Caroline; Benmerad, Meriem; Chappaz, Claire; Ferran, Joane; Kauffmann, Francine; Morelli, Xavier; Pin, Isabelle; Pison, Christophe; Rios, Isabelle; Temam, Sofia; Künzli, Nino; Slama, Rémy; Siroux, Valérie
2013-09-01
Errors in address geocodes may affect estimates of the effects of air pollution on health. We investigated the impact of four geocoding techniques on the association between urban air pollution estimated with a fine-scale (10 m × 10 m) dispersion model and lung function in adults. We measured forced expiratory volume in 1 sec (FEV1) and forced vital capacity (FVC) in 354 adult residents of Grenoble, France, who were participants in two well-characterized studies, the Epidemiological Study on the Genetics and Environment on Asthma (EGEA) and the European Community Respiratory Health Survey (ECRHS). Home addresses were geocoded using individual building matching as the reference approach and three spatial interpolation approaches. We used a dispersion model to estimate mean PM10 and nitrogen dioxide concentrations at each participant's address during the 12 months preceding their lung function measurements. Associations between exposures and lung function parameters were adjusted for individual confounders and same-day exposure to air pollutants. The geocoding techniques were compared with regard to geographical distances between coordinates, exposure estimates, and associations between the estimated exposures and health effects. Median distances between the coordinates estimated using building matching and the three interpolation techniques were 26.4, 27.9, and 35.6 m. Compared with exposure estimates based on building matching, PM10 concentrations based on the three interpolation techniques tended to be overestimated. When building matching was used to estimate exposures, a one-interquartile-range increase in PM10 (3.0 μg/m3) was associated with a 3.72-point decrease in FVC% predicted (95% CI: -6.88, -0.56) and a 3.86-point decrease in FEV1% predicted (95% CI: -3.24, -0.14). The magnitude of associations decreased when other geocoding approaches were used [e.g., for FVC% predicted, -2.81 (95% CI: -5.35, -0.26) using NavTEQ, or -2.08 (95% CI: -4.63, 0.47; p = 0.11) using Google Maps]. Our findings suggest that the choice of geocoding technique may influence estimated health effects when air pollution exposures are estimated using a fine-scale exposure model.
Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei
2018-03-15
Leakage, caused by wear between the friction pairs of components, is the most important failure mode in aircraft hydraulic systems. The accurate detection of abrasive debris can reveal the wear condition and predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, which potentially enables a more accurate diagnosis and prognosis of an aviation hydraulic system's ongoing failures. To address the severe mixing of abrasive debris in the pipe, this paper focuses on separating the superimposed debris signatures of an RMF abrasive sensor based on the degenerate unmixing estimation technique. Through accurately separating and calculating the morphology and amount of the abrasive debris, the RMF-based abrasive sensor can provide the system's wear trend and size estimates of the wear particles. A well-designed experiment was conducted, and the results show that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection.
Efficient data assimilation algorithm for bathymetry application
NASA Astrophysics Data System (ADS)
Ghorbanidehno, H.; Lee, J. H.; Farthing, M.; Hesser, T.; Kitanidis, P. K.; Darve, E. F.
2017-12-01
Information on the evolving state of the nearshore zone bathymetry is crucial to shoreline management, recreational safety, and naval operations. The high cost and complex logistics of using ship-based surveys for bathymetry estimation have encouraged the use of remote sensing techniques. Data assimilation methods combine the remote sensing data and nearshore hydrodynamic models to estimate the unknown bathymetry and the corresponding uncertainties. In particular, several recent efforts have combined Kalman filter-based techniques such as ensemble-based Kalman filters with indirect video-based observations to address the bathymetry inversion problem. However, these methods often suffer from ensemble collapse and uncertainty underestimation. Here, the Compressed State Kalman Filter (CSKF) method is used to estimate the bathymetry based on observed wave celerity. In order to demonstrate the accuracy and robustness of the CSKF method, we consider twin tests with synthetic observations of wave celerity, with bathymetry profiles chosen based on surveys taken by the U.S. Army Corps of Engineers Field Research Facility (FRF) in Duck, NC. The first test case is a bathymetry estimation problem for a spatially smooth and temporally constant bathymetry profile. The second test case is a bathymetry estimation problem for a bathymetry evolving from a smooth to a non-smooth profile. For both problems, we compare the results of CSKF with those obtained by the local ensemble transform Kalman filter (LETKF), a popular ensemble-based Kalman filter method.
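For orientation, a generic stochastic EnKF analysis step is sketched below; the CSKF itself maintains a compressed low-rank covariance representation, which this sketch does not attempt.

```python
import numpy as np

def enkf_analysis(X, y, H, R, seed=0):
    """One stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble (e.g., gridded bathymetry),
    y: (n_obs,) observations (e.g., wave celerity), H: (n_obs, n_state)
    linearized observation operator, R: (n_obs, n_obs) observation error cov."""
    n_ens = X.shape[1]
    rng = np.random.default_rng(seed)
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P_HT = A @ (H @ A).T / (n_ens - 1)             # cross-covariance P H^T
    S = H @ P_HT + R                               # innovation covariance
    K = P_HT @ np.linalg.inv(S)                    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                     # updated ensemble
```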
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. First, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Second, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial, and smoothing spline. The LOOCV pricing errors show that fourth-order polynomial interpolation provides the best fit to option prices, having the lowest error value.
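The LOOCV selection can be sketched as follows; for brevity the polynomials are fit directly to an implied-volatility smile rather than through the full RND pricing chain, and the strike/volatility values are invented.

```python
import numpy as np

def loocv_rmse(strikes, vols, degree):
    """Leave-one-out RMSE of a polynomial interpolant of given degree."""
    errs = []
    for i in range(len(strikes)):
        keep = np.arange(len(strikes)) != i
        coef = np.polyfit(strikes[keep], vols[keep], degree)
        errs.append((np.polyval(coef, strikes[i]) - vols[i]) ** 2)
    return float(np.sqrt(np.mean(errs)))

strikes = np.array([80., 85., 90., 95., 100., 105., 110., 115., 120.])
vols = np.array([0.28, 0.255, 0.235, 0.22, 0.21, 0.215, 0.225, 0.24, 0.26])
print(loocv_rmse(strikes, vols, 2), loocv_rmse(strikes, vols, 4))
```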
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
This paper outlines methods for modeling, identification and estimation for static determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum-likelihood that finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data is processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating performance of the shape determination methods.
Automated skin lesion segmentation with kernel density estimation
NASA Astrophysics Data System (ADS)
Pardo, A.; Real, E.; Fernandez-Barreras, G.; Madruga, F. J.; López-Higuera, J. M.; Conde, O. M.
2017-07-01
Skin lesion segmentation is a complex step for dermoscopy pathological diagnosis. Kernel density estimation is proposed as a segmentation technique based on the statistic distribution of color intensities in the lesion and non-lesion regions.
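A minimal sketch of the idea, assuming labelled lesion and non-lesion training pixels are available; scipy's gaussian_kde stands in for whatever kernel and bandwidth the authors actually use.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_lesion_mask(image_rgb, lesion_px, skin_px):
    """Classify each pixel as lesion/non-lesion by comparing kernel density
    estimates fitted to labelled training pixels (rows of RGB triples)."""
    kde_lesion = gaussian_kde(lesion_px.T)
    kde_skin = gaussian_kde(skin_px.T)
    flat = image_rgb.reshape(-1, 3).T.astype(float)
    mask = kde_lesion(flat) > kde_skin(flat)      # likelihood-ratio rule
    return mask.reshape(image_rgb.shape[:2])
```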
Tropical forest biomass estimation from truncated stand tables.
A. J. R. Gillespie; S. Brown; A. E. Lugo
1992-01-01
Total aboveground forest biomass may be estimated through a variety of techniques based on commercial inventory stand and stock tables. Stand and stock tables from tropical countries commonly omit trees below a certain commercial limit.
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented a few illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method using the median filter to estimate background illumination showed the lowest coefficients of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy. The CLAHE technique has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate the background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique, such as CLAHE, to fundus images showed good potential for enhancing vasculature segmentation.
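The best-performing illumination correction, the dividing method with a median-filter background estimate, can be sketched per channel as follows; the filter size is an assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

def divide_illumination(channel, size=65):
    """'Dividing method': estimate the slowly varying background illumination
    with a large median filter and divide it out, channel by channel."""
    background = median_filter(channel.astype(float), size=size)
    corrected = channel / np.clip(background, 1e-6, None)
    return corrected / corrected.max()            # rescale to [0, 1]
```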
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several techniques for performing static and dynamic load balancing in vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. These techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant, and the overhead of using these techniques is minimal.
A direct-measurement technique for estimating discharge-chamber lifetime. [for ion thrusters
NASA Technical Reports Server (NTRS)
Beattie, J. R.; Garvin, H. L.
1982-01-01
The use of short-term measurement techniques for predicting the wearout of ion thrusters resulting from sputter-erosion damage is investigated. The laminar-thin-film technique is found to provide high precision erosion-rate data, although the erosion rates are generally substantially higher than those found during long-term erosion tests, so that the results must be interpreted in a relative sense. A technique for obtaining absolute measurements is developed using a masked-substrate arrangement. This new technique provides a means for estimating the lifetimes of critical discharge-chamber components based on direct measurements of sputter-erosion depths obtained during short-duration (approximately 1 hr) tests. Results obtained using the direct-measurement technique are shown to agree with sputter-erosion depths calculated for the plasma conditions of the test. The direct-measurement approach is found to be applicable to both mercury and argon discharge-plasma environments and will be useful for estimating the lifetimes of inert gas and extended performance mercury ion thrusters currently under development.
Rath, Hemamalini; Rath, Rachna; Mahapatra, Sandeep; Debta, Tribikram
2017-01-01
The age of an individual can be assessed by a plethora of widely available tooth-based techniques, among which radiological methods prevail. The Demirjian's technique of age assessment based on tooth development stages has been extensively investigated in different populations of the world. The present study assesses the applicability of Demirjian's modified 8-teeth technique in age estimation of a population of East India (Odisha), utilizing Acharya's Indian-specific cubic functions. One hundred and six pretreatment orthodontic radiographs of patients aged 7-23 years, with representation from both genders, were assessed for eight left mandibular teeth and scored as per the Demirjian's 9-stage criteria for tooth development stages. Age was calculated on the basis of Acharya's Indian formula. Statistical analysis was performed to compare the estimated and actual age. All data were analyzed using SPSS 20.0 (SPSS Inc., Chicago, Illinois, USA) and the MS Excel package. The results revealed that the mean absolute error (MAE) in age estimation of the entire sample was 1.3 years, with 50% of the cases having an error rate within ± 1 year. The MAE in males and females (7-16 years) was 1.8 and 1.5 years, respectively. Likewise, the MAE in males and females (16.1-23 years) was 1.1 and 1.3 years, respectively. The low error rate in estimating age justifies the application of this modified technique and Acharya's Indian formulas in the present East Indian population.
NASA Technical Reports Server (NTRS)
Grissom, D. S.; Schneider, W. C.
1971-01-01
The determination of a base line (minimum weight) design for the primary structure of the living quarters modules in an earth-orbiting space base was investigated. Although the design is preliminary in nature, the supporting analysis is sufficiently thorough to provide a reasonably accurate weight estimate of the major components that are considered to comprise the structural weight of the space base.
Linear regression techniques for use in the EC tracer method of secondary organic aerosol estimation
NASA Astrophysics Data System (ADS)
Saylor, Rick D.; Edgerton, Eric S.; Hartsell, Benjamin E.
A variety of linear regression techniques and simple slope estimators are evaluated for use in the elemental carbon (EC) tracer method of secondary organic carbon (OC) estimation. Linear regression techniques based on ordinary least squares are not suitable for situations where measurement uncertainties exist in both regressed variables. In the past, regression based on the method of Deming [1943. Statistical Adjustment of Data. Wiley, London] has been the preferred choice for EC tracer method parameter estimation. In agreement with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], we find that in the limited case where primary non-combustion OC (OC_non-comb) is assumed to be zero, the ratio of averages (ROA) approach provides a stable and reliable estimate of the primary OC-EC ratio, (OC/EC)_pri. In contrast with Chu [2005], however, we find that the optimal use of Deming regression (and the more general York et al. [2004. Unified equations for the slope, intercept, and standard errors of the best straight line. American Journal of Physics 72, 367-375] regression) provides excellent results as well. For the more typical case where OC_non-comb is allowed to obtain a non-zero value, we find that regression based on the method of York is the preferred choice for EC tracer method parameter estimation. In the York regression technique, detailed information on uncertainties in the measurement of OC and EC is used to improve the linear best fit to the given data. If only limited information is available on the relative uncertainties of OC and EC, then Deming regression should be used. On the other hand, use of ROA in the estimation of secondary OC, and thus the assumption of a zero OC_non-comb value, generally leads to an overestimation of the contribution of secondary OC to total measured OC.
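For reference, the Deming slope has a closed form once the ratio delta of the error variances of the two variables is fixed; this sketch implements that textbook formula, not the York et al. generalization with per-point uncertainties.

```python
import numpy as np

def deming_regression(x, y, delta=1.0):
    """Deming regression for errors in both variables;
    delta is the ratio of the y- to x-error variances."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
             + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()     # slope, intercept
```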
An Information-Based Machine Learning Approach to Elasticity Imaging
Hoerig, Cameron; Ghaboussi, Jamshid; Insana, Michael F.
2016-01-01
An information-based technique is described for applications in mechanical-property imaging of soft biological media under quasi-static loads. We adapted the Autoprogressive method that was originally developed for civil engineering applications for this purpose. The Autoprogressive method is a computational technique that combines knowledge of object shape and a sparse distribution of force and displacement measurements with finite-element analyses and artificial neural networks to estimate a complete set of stress and strain vectors. Elasticity imaging parameters are then computed from estimated stresses and strains. We introduce the technique using ultrasonic pulse-echo measurements in simple gelatin imaging phantoms having linear-elastic properties so that conventional finite-element modeling can be used to validate results. The Autoprogressive algorithm does not require any assumptions about the material properties and can, in principle, be used to image media with arbitrary properties. We show that by selecting a few well-chosen force-displacement measurements that are appropriately applied during training and establish convergence, we can estimate all nontrivial stress and strain vectors throughout an object and accurately estimate an elastic modulus at high spatial resolution. This new method of modeling the mechanical properties of tissue-like materials introduces a unique method of solving the inverse problem and is the first technique for imaging stress without assuming the underlying constitutive model. PMID:27858175
Multiple input electrode gap controller
Hysinger, C.L.; Beaman, J.J.; Melgaard, D.K.; Williamson, R.L.
1999-07-27
A method and apparatus for controlling vacuum arc remelting (VAR) furnaces by estimation of electrode gap based on a plurality of secondary estimates derived from furnace outputs. The estimation is preferably performed by Kalman filter. Adaptive gain techniques may be employed, as well as detection of process anomalies such as glows. 17 figs.
Multiple input electrode gap controller
Hysinger, Christopher L.; Beaman, Joseph J.; Melgaard, David K.; Williamson, Rodney L.
1999-01-01
A method and apparatus for controlling vacuum arc remelting (VAR) furnaces by estimation of electrode gap based on a plurality of secondary estimates derived from furnace outputs. The estimation is preferably performed by Kalman filter. Adaptive gain techniques may be employed, as well as detection of process anomalies such as glows.
Lee, Bang Yeon; Kang, Su-Tae; Yun, Hae-Bum; Kim, Yun Yong
2016-01-12
The distribution of fiber orientation is an important factor in determining the mechanical properties of fiber-reinforced concrete. This study proposes a new image analysis technique for improving the evaluation accuracy of fiber orientation distribution in the sectional image of fiber-reinforced concrete. A series of tests on the accuracy of fiber detection and the estimation performance of fiber orientation was performed on artificial fiber images to assess the validity of the proposed technique. The validation test results showed that the proposed technique estimates the distribution of fiber orientation more accurately than the direct measurement of fiber orientation by image analysis.
Lee, Bang Yeon; Kang, Su-Tae; Yun, Hae-Bum; Kim, Yun Yong
2016-01-01
The distribution of fiber orientation is an important factor in determining the mechanical properties of fiber-reinforced concrete. This study proposes a new image analysis technique for improving the evaluation accuracy of fiber orientation distribution in the sectional image of fiber-reinforced concrete. A series of tests on the accuracy of fiber detection and the estimation performance of fiber orientation was performed on artificial fiber images to assess the validity of the proposed technique. The validation test results showed that the proposed technique estimates the distribution of fiber orientation more accurately than the direct measurement of fiber orientation by image analysis. PMID:28787839
Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S
2013-01-01
Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and for patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
NASA Astrophysics Data System (ADS)
Li, Dong; Cheng, Tao; Zhou, Kai; Zheng, Hengbiao; Yao, Xia; Tian, Yongchao; Zhu, Yan; Cao, Weixing
2017-07-01
Red edge position (REP), defined as the wavelength of the inflexion point in the red edge region (680-760 nm) of the reflectance spectrum, has been widely used to estimate foliar chlorophyll content from reflectance spectra. A number of techniques have been developed for REP extraction in the past three decades, but most of them require data-specific parameterization, and the consistency of their performance from leaf to canopy levels remains poorly understood. In this study, we propose a new technique (WREP) to extract REPs based on the application of the continuous wavelet transform to reflectance spectra. The REP is determined by the zero-crossing wavelength in the red edge region of a wavelet-transformed spectrum for a number of scales of wavelet decomposition. The new technique is simple to implement and requires no parameterization from the user as long as continuous wavelet transforms are applied to reflectance spectra. Its performance was evaluated for estimating leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) of cereal crops (i.e. rice and wheat) and compared with traditional techniques including linear interpolation, linear extrapolation, polynomial fitting and inverted Gaussian. Our results demonstrated that WREP obtained the best estimation accuracy for both LCC and CCC as compared to traditional techniques. High scales of wavelet decomposition were favorable for the estimation of CCC and low scales for the estimation of LCC. The difference in optimal scale reveals the underlying mechanism of signature transfer from leaf to canopy levels. In addition, crop-specific models were required for the estimation of CCC over the full range. However, a common model could be built with the REPs extracted at Scale 5 of the WREP technique for wheat and rice crops when CCC was less than 2 g/m2 (R2 = 0.73, RMSE = 0.26 g/m2). This insensitivity of WREP to crop type indicates the potential for aerial mapping of chlorophyll content between growth seasons of cereal crops. The new REP extraction technique provides new insight for understanding the spectral changes in the red edge region in response to chlorophyll variation from leaf to canopy levels.
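A sketch of the zero-crossing idea under stated assumptions: the spectrum is convolved with a Mexican-hat (ricker) wavelet at a single scale and the first zero-crossing in the 680-760 nm window is located by linear interpolation; the paper's mother wavelet and scale convention may differ.

```python
import numpy as np

def ricker(points, a):
    """Mexican-hat wavelet (the assumed mother wavelet)."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1.0 - (t / a) ** 2) * np.exp(-t ** 2 / (2.0 * a ** 2))

def wavelet_rep(wavelength, reflectance, width=16):
    """REP as the zero-crossing of a single-scale CWT in the red edge."""
    kernel = ricker(min(10 * width, reflectance.size), width)
    coeffs = np.convolve(reflectance, kernel, mode="same")
    band = np.where((wavelength >= 680) & (wavelength <= 760))[0]
    w, c = wavelength[band], coeffs[band]
    cross = np.where(np.diff(np.sign(c)) != 0)[0]
    if cross.size == 0:
        return None
    i = cross[0]       # linear interpolation across the sign change
    return w[i] - c[i] * (w[i + 1] - w[i]) / (c[i + 1] - c[i])
```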
Technique for estimation of streamflow statistics in mineral areas of interest in Afghanistan
Olson, Scott A.; Mack, Thomas J.
2011-01-01
A technique for estimating streamflow statistics at ungaged stream sites in areas of mineral interest in Afghanistan using drainage-area-ratio relations of historical streamflow data was developed and is documented in this report. The technique can be used to estimate the following streamflow statistics at ungaged sites: (1) 7-day low flow with a 10-year recurrence interval, (2) 7-day low flow with a 2-year recurrence interval, (3) daily mean streamflow exceeded 90 percent of the time, (4) daily mean streamflow exceeded 80 percent of the time, (5) mean monthly streamflow for each month of the year, (6) mean annual streamflow, and (7) minimum monthly streamflow for each month of the year. Because they are based on limited historical data, the estimates of streamflow statistics at ungaged sites are considered preliminary.
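The drainage-area-ratio transfer itself is a one-liner; the exponent is often taken near 1 in such studies, but its value here is an assumption, not a number from the report.

```python
def drainage_area_ratio(q_gaged, area_gaged_km2, area_ungaged_km2, exponent=1.0):
    """Transfer a streamflow statistic from a gaged to an ungaged site by
    the ratio of drainage areas raised to an assumed exponent."""
    return q_gaged * (area_ungaged_km2 / area_gaged_km2) ** exponent
```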
2014-01-01
The linear algebraic concept of subspace plays a significant role in recent techniques of spectrum estimation. In this article, the authors have utilized the noise subspace concept for finding hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand to accurately identify protein-coding regions in DNA is rising. Several DNA feature-extraction techniques involving various cross fields have come up in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not have this unique feature. One of the most important spectrum analysis techniques based on the concept of subspace is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions, completely eliminating background noise. Comparison of the proposed method with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, has been drawn on several genes from various organisms, and the results show that the proposed method provides a better and more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method. PMID:24386895
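For comparison, the periodogram-style period-3 measure that the least-norm estimator is benchmarked against can be sketched as follows; window and step sizes are assumptions.

```python
import numpy as np

def period3_score(seq, window=351, step=10):
    """Sliding-window 3-base periodicity: total DFT power at period 3,
    summed over the four binary indicator sequences of A, C, G, T."""
    chars = np.frombuffer(seq.encode(), dtype="S1")
    n = np.arange(window)
    e = np.exp(-2j * np.pi * n / 3)                 # DFT kernel at k = N/3
    pos, score = [], []
    for start in range(0, len(seq) - window + 1, step):
        sub = chars[start:start + window]
        s = sum(abs(np.sum((sub == b) * e)) ** 2 for b in (b"A", b"C", b"G", b"T"))
        pos.append(start + window // 2)
        score.append(s)
    return np.array(pos), np.array(score)
```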
Vasanawala, Shreyas S; Yu, Huanzhou; Shimakawa, Ann; Jeng, Michael; Brittain, Jean H
2012-01-01
MR imaging of hepatic iron overload can be achieved by estimating T2* values using multiple-echo sequences. The purpose of this work is to develop and clinically evaluate a weighted least squares algorithm based on the T2* Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation (IDEAL) technique for volumetric estimation of hepatic T2* in the setting of iron overload. The weighted least squares T2* IDEAL technique improves T2* estimation by automatically decreasing the impact of later, noise-dominated echoes. The technique was evaluated in 37 patients with iron overload. Each patient underwent (i) a standard 2D multiple-echo gradient echo sequence for T2* assessment with nonlinear exponential fitting, and (ii) a 3D T2* IDEAL technique, with and without a weighted least squares fit. Regression and Bland-Altman analysis demonstrated strong correlation between conventional 2D and T2* IDEAL estimation. In cases of severe iron overload, T2* IDEAL without weighted least squares reconstruction resulted in a relative overestimation of T2* compared with weighted least squares. Copyright © 2011 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, G.; Rastogi, Pramod
2010-04-01
For three-dimensional (3D) shape measurement using fringe projection techniques, the information about the 3D shape of an object is encoded in the phase of a recorded fringe pattern. The paper proposes a high-order instantaneous moments based method to estimate phase from a single fringe pattern in fringe projection. The proposed method works by approximating the phase as a piecewise polynomial and subsequently determining the polynomial coefficients using high-order instantaneous moments to construct the polynomial phase. Simulation results are presented to show the method's potential.
NASA Astrophysics Data System (ADS)
Piñero, G.; Vergara, L.; Desantes, J. M.; Broatch, A.
2000-11-01
The knowledge of the particle velocity fluctuations associated with acoustic pressure oscillation in the exhaust system of internal combustion engines may represent a powerful aid in the design of such systems, from the point of view of both engine performance improvement and exhaust noise abatement. However, usual velocity measurement techniques, even if applicable, are not well suited to the aggressive environment existing in exhaust systems. In this paper, a method to obtain a suitable estimate of velocity fluctuations is proposed, which is based on the application of spatial filtering (beamforming) techniques to instantaneous pressure measurements. Making use of simulated pressure-time histories, several algorithms have been checked by comparison between the simulated and the estimated velocity fluctuations. Then, problems related to the experimental procedure and associated with the proposed methodology are addressed, making application to measurements made in a real exhaust system. The results indicate that, if proper care is taken when performing the measurements, the application of beamforming techniques gives a reasonable estimate of the velocity fluctuations.
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
New Image-Based Techniques for Prostate Biopsy and Treatment
2012-04-01
Presentations from this project include a contribution on C-arm fluoroscopy at MICCAI 2011 (Toronto, Canada, 2011) and a poster, "Prostate Cancer Probability Estimation Based on DCE-DTI Features and Support Vector Machine Classification" (with P. Kozlowski). The DTI features characterize the de-phasing of the MR signal caused by molecular diffusion; prostate cancer causes a pathological change in the tissue.
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
What Do the Numbers Say? Clarifying Our Interpretation of Carbon Use Efficiency Data from Soil.
NASA Astrophysics Data System (ADS)
Geyer, K.; Dijkstra, P.; Sinsabaugh, R. L.; Frey, S. D.
2017-12-01
Carbon use efficiency (CUE) is the proportion of carbon resources that a microorganism commits towards cellular growth and thus affects the dynamics of soil organic matter pools. While numerous approaches exist for estimating CUE, no attempts have been made to simultaneously compare methods and reconcile their inherent biases. Such work is necessary to partition the observed variation in CUE estimates (commonly between 0.3 and 0.7) as biological or technical in origin. Here we review our results from experimental work aimed at comparing both traditional and emerging CUE techniques. Soil from the Harvard Forest Long Term Ecological Research site in Massachusetts, USA, was amended with 13C-glucose and 18O-water in laboratory mesocosms and monitored for changing rates of soil dissolved organic carbon (DOC) uptake, respiration (R), microbial biomass (MBC) production, DNA synthesis, and heat (Q) flux over 72 hrs. Three CUE estimates were generated from these data: 1) Δ13MBC/(Δ13MBC + 13R), 2) Δ18DNA/(Δ18DNA + R), 3) Q/R. CUE was also measured via two indirect techniques: metabolic flux modeling and stoichiometric modeling. Results indicate that the 18O technique is able to discern gross growth of soil microbes while the 13C technique indicates net growth. As a result, 18O-based CUE remains unchanged (≈0.45) for the incubation duration at low amendment rates (0.0 and 0.05 mg glucose-C/g), while 13C-based CUE declines with time (from 0.75 to 0.5). The 13C technique likely overestimates CUE because the numerator (Δ13MBC) integrates any label (whether or not destined for growth) residing within the cell. High amendment rates (2.0 mg glucose-C/g) cause dramatic declines in 18O- and 13C-based CUE for the first 24 hr of incubation, but neither modeling approach was able to detect these dynamics. In summary, our results suggest that 13C-based estimates of CUE are best interpreted as net changes in the residence of labeled substrate within MBC pools, while 18O-based estimates directly capture gross growth dynamics. Metabolic flux modeling and stoichiometric modeling appear to be suited for conditions of steady-state MBC only, where they approximate the CUE magnitude of the 13C- and 18O-based approaches, respectively.
Jo, Javier A.; Fang, Qiyin; Marcu, Laura
2007-01-01
We report a new deconvolution method for fluorescence lifetime imaging microscopy (FLIM) based on the Laguerre expansion technique. The performance of this method was tested on synthetic and real FLIM images. The following interesting properties of this technique were demonstrated. 1) The fluorescence intensity decay can be estimated simultaneously for all pixels, without a priori assumption of the decay functional form. 2) The computation speed is extremely fast, performing at least two orders of magnitude faster than current algorithms. 3) The estimated maps of Laguerre expansion coefficients provide a new domain for representing FLIM information. 4) The number of images required for the analysis is relatively small, allowing reduction of the acquisition time. These findings indicate that the developed Laguerre expansion technique for FLIM analysis represents a robust and extremely fast deconvolution method that enables practical applications of FLIM in medicine, biology, biochemistry, and chemistry. PMID:19444338
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.
2017-12-01
Complexity of hydrogeological systems arises from their multi-scale heterogeneity and from insufficient measurements of their underlying parameters, such as hydraulic conductivity and porosity. An inadequate characterization of hydrogeological properties can significantly decrease the trustworthiness of numerical models that predict groundwater flow and solute transport. Therefore, a variety of data assimilation methods have been proposed in order to estimate hydrogeological parameters from spatially scarce data by incorporating the governing physical models. In this work, we propose a novel framework for evaluating the performance of these estimation methods. We focus on the Ensemble Kalman Filter (EnKF), a widely used data assimilation technique. It reconciles multiple sources of measurements to sequentially estimate model parameters such as the hydraulic conductivity. Several methods have been used in the literature to quantify the accuracy of the estimations obtained by the EnKF, including rank histograms, RMSE, and ensemble spread. However, these commonly used methods do not regard the spatial information and variability of geological formations. This can cause hydraulic conductivity fields with very different spatial structures to have similar histograms or RMSE. We propose a vision-based approach that can quantify the accuracy of estimations by considering the spatial structure embedded in the estimated fields. Our new approach consists of adapting a new metric, Color Coherence Vectors (CCV), to evaluate the accuracy of estimated fields achieved by the EnKF. CCV is a histogram-based technique for comparing images that incorporates spatial information. We represent estimated fields as digital three-channel images and use CCV to compare and quantify the accuracy of estimations. The sensitivity of CCV to spatial information makes it a suitable metric for assessing the performance of spatial data assimilation techniques. Under various data assimilation settings, such as the number, layout, and type of measurements, we compare the performance of CCV with other metrics such as RMSE. By simulating hydrogeological processes using estimated and true fields, we observe that CCV outperforms the other evaluation metrics.
Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2015-11-01
The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
Jackson, R. D.; Moran, M.S.; Gay, L.W.; Raymond, L.H.
1987-01-01
Airborne measurements of reflected solar and emitted thermal radiation were combined with ground-based measurements of incoming solar radiation, air temperature, windspeed, and vapor pressure to calculate instantaneous evaporation (LE) rates using a form of the Penman equation. Estimates of evaporation over cotton, wheat, and alfalfa fields were obtained on 5 days during a one-year period. A Bowen ratio apparatus, employed simultaneously, provided ground-based measurements of evaporation. Comparison of the airborne and ground techniques showed good agreement, with the greatest difference being about 12% for the instantaneous values. Estimates of daily (24 h) evaporation were made from the instantaneous data. On three of the five days, the difference between the two techniques was less than 8%, with the greatest difference being 25%. The results demonstrate that airborne remote sensing techniques can be used to obtain spatially distributed values of evaporation over agricultural fields. © 1987 Springer-Verlag.
Combining heuristic and statistical techniques in landslide hazard assessments
NASA Astrophysics Data System (ADS)
Cepeda, Jose; Schwendtner, Barbara; Quan, Byron; Nadim, Farrokh; Diaz, Manuel; Molina, Giovanni
2014-05-01
As a contribution to the Global Assessment Report 2013 - GAR2013, coordinated by the United Nations International Strategy for Disaster Reduction - UNISDR, a drill-down exercise for landslide hazard assessment was carried out by entering the results of both heuristic and statistical techniques into a new but simple combination rule. The data available for this evaluation included landslide inventories, both historical and event-based. In addition to the application of a heuristic method used in the previous editions of GAR, the availability of inventories motivated the use of statistical methods. The heuristic technique is largely based on the Mora & Vahrson method, which estimates hazard as the product of susceptibility and triggering factors, where classes are weighted based on expert judgment and experience. Two statistical methods were also applied: the landslide index method, which estimates weights of the classes for the susceptibility and triggering factors based on the evidence provided by the density of landslides in each class of the factors; and the weights of evidence method, which extends the previous technique to include both positive and negative evidence of landslide occurrence in the estimation of weights for the classes. One key aspect during the hazard evaluation was the decision on the methodology to be chosen for the final assessment. Instead of opting for a single methodology, it was decided to combine the results of the three implemented techniques using a combination rule based on a normalization of the results of each method. The hazard evaluation was performed for both earthquake- and rainfall-induced landslides. The country chosen for the drill-down exercise was El Salvador. The results indicate that the highest hazard levels are concentrated along the central volcanic chain and at the centre of the northern mountains.
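A normalization-based combination of the three hazard maps can be sketched as follows; equal weights and min-max normalization are assumptions for illustration, not necessarily the exact rule used in the exercise:

```python
import numpy as np

def combine_hazard(maps, weights=None):
    """Normalize each method's hazard map to [0, 1] and form a weighted sum.
    `maps` is a list of 2-D arrays (e.g., heuristic, landslide index,
    weights of evidence); equal weighting is assumed by default."""
    weights = weights or [1.0 / len(maps)] * len(maps)
    combined = np.zeros_like(maps[0], dtype=float)
    for m, w in zip(maps, weights):
        norm = (m - m.min()) / (m.max() - m.min())   # min-max normalization
        combined += w * norm
    return combined
```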
Estimating numbers of greater prairie-chickens using mark-resight techniques
Clifton, A.M.; Krementz, D.G.
2006-01-01
Current monitoring efforts for greater prairie-chicken (Tympanuchus cupido pinnatus) populations indicate that populations are declining across their range. Monitoring the population status of greater prairie-chickens is based on traditional lek surveys (TLS) that provide an index without considering detectability. Estimators, such as immigration-emigration joint maximum-likelihood estimator from a hypergeometric distribution (IEJHE), can account for detectability and provide reliable population estimates based on resightings. We evaluated the use of mark-resight methods using radiotelemetry to estimate population size and density of greater prairie-chickens on 2 sites at a tallgrass prairie in the Flint Hills of Kansas, USA. We used average distances traveled from lek of capture to estimate density. Population estimates and confidence intervals at the 2 sites were 54 (CI 50-59) on 52.9 km2 and 87 (CI 82-94) on 73.6 km2. The TLS performed at the same sites resulted in population ranges of 7-34 and 36-63 and always produced a lower population index than the mark-resight population estimate with a larger range. Mark-resight simulations with varying male:female ratios of marks indicated that this ratio was important in designing a population study on prairie-chickens. Confidence intervals for estimates when no marks were placed on females at the 2 sites (CI 46-50, 76-84) did not overlap confidence intervals when 40% of marks were placed on females (CI 54-64, 91-109). Population estimates derived using this mark-resight technique were apparently more accurate than traditional methods and would be more effective in detecting changes in prairie-chicken populations. Our technique could improve prairie-chicken management by providing wildlife biologists and land managers with a tool to estimate the population size and trends of lekking bird species, such as greater prairie-chickens.
Gao, Mingwu; Cheng, Hao-Min; Sung, Shih-Hsien; Chen, Chen-Huan; Olivier, Nicholas Bari; Mukkamala, Ramakrishna
2017-07-01
Pulse transit time (PTT) varies with blood pressure (BP) throughout the cardiac cycle, yet, because of wave reflection, only one PTT value at the diastolic BP level is conventionally estimated from proximal and distal BP waveforms. The objective was to establish a technique to estimate multiple PTT values at different BP levels in the cardiac cycle. A technique was developed for estimating PTT as a function of BP (to indicate the PTT value for every BP level) from proximal and distal BP waveforms. First, a mathematical transformation from one waveform to the other is defined in terms of the parameters of a nonlinear arterial tube-load model accounting for BP-dependent arterial compliance and wave reflection. Then, the parameters are estimated by optimally fitting the waveforms to each other via the model-based transformation. Finally, PTT as a function of BP is specified by the parameters. The technique was assessed in animals and patients in several ways including the ability of its estimated PTT-BP function to serve as a subject-specific curve for calibrating PTT to BP. The calibration curve derived by the technique during a baseline period yielded bias and precision errors in mean BP of 5.1 ± 0.9 and 6.6 ± 1.0 mmHg, respectively, during hemodynamic interventions that varied mean BP widely. The new technique may permit, for the first time, estimation of PTT values throughout the cardiac cycle from proximal and distal waveforms. The technique could potentially be applied to improve arterial stiffness monitoring and help realize cuff-less BP monitoring.
Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2017-05-01
The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for the ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
Fidan, Barış; Umay, Ilknur
2015-01-01
Accurate signal-source and signal-reflector target localization tasks via mobile sensory units and wireless sensor networks (WSNs), including those for environmental monitoring via sensory UAVs, require precise knowledge of specific signal propagation properties of the environment, which are permittivity and path loss coefficients for the electromagnetic signal case. Thus, accurate estimation of these coefficients has significant importance for the accuracy of location estimates. In this paper, we propose a geometric cooperative technique to instantaneously estimate such coefficients, with details provided for received signal strength (RSS) and time-of-flight (TOF)-based range sensors. The proposed technique is integrated into a recursive least squares (RLS)-based adaptive localization scheme and an adaptive motion control law, to construct adaptive target localization and adaptive target tracking algorithms, respectively, that are robust to uncertainties in the aforementioned environmental signal propagation coefficients. The proposed adaptive localization and tracking techniques are both mathematically analysed and verified via simulation experiments. PMID:26690441
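For the RSS case, the propagation coefficient enters through the standard log-distance model, and a least-squares fit recovers it from ranging data. A minimal batch sketch under that model (the recursive RLS form in the paper would update these estimates online; all names are illustrative):

```python
import numpy as np

def estimate_path_loss(dists, rss_dbm, d0=1.0):
    """Least-squares fit of the log-distance model
    RSS(d) = P0 - 10 * n * log10(d / d0) to ranging data.
    dists: measured distances [m]; rss_dbm: RSS readings [dBm].
    Returns (P0, n), where n is the path loss exponent."""
    A = np.column_stack([np.ones_like(dists), -10.0 * np.log10(dists / d0)])
    coef, *_ = np.linalg.lstsq(A, rss_dbm, rcond=None)
    return coef[0], coef[1]
```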
Analytical model for real time, noninvasive estimation of blood glucose level.
Adhyapak, Anoop; Sidley, Matthew; Venkataraman, Jayanti
2014-01-01
The paper presents an analytical model to estimate blood glucose level from measurements made non-invasively and in real time by an antenna strapped to a patient's wrist. The RIT ETA Lab research group has shown promising evidence that an antenna's resonant frequency can track, in real time, changes in glucose concentration. Based on an in-vitro study of blood samples of diabetic patients, the paper presents a modified Cole-Cole model that incorporates a factor to represent the change in glucose level. A calibration technique based on the antenna input impedance is discussed, and the results show good agreement with glucose meter readings. An alternate calibration methodology has been developed that is based on the shift in the antenna resonant frequency, using an equivalent circuit model containing a shunt capacitor to represent the shift in resonant frequency with changing glucose levels. Work in progress includes optimization of the technique with a larger sample of patients.
AMT-200S Motor Glider Parameter and Performance Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.
2011-01-01
Parameter and performance estimation of an instrumented motor glider was conducted at the National Aeronautics and Space Administration Dryden Flight Research Center in order to provide the necessary information to create a simulation of the aircraft. An output-error technique was employed to generate estimates from doublet maneuvers, and performance estimates were compared with results from a well-known flight-test evaluation of the aircraft in order to provide a complete set of data. Aircraft specifications are given along with information concerning instrumentation, flight-test maneuvers flown, and the output-error technique. Discussion of Cramer-Rao bounds based on both white noise and colored noise assumptions is given. Results include aerodynamic parameter and performance estimates for a range of angles of attack.
Two phase sampling for wheat acreage estimation. [large area crop inventory experiment
NASA Technical Reports Server (NTRS)
Thomas, R. W.; Hay, C. M.
1977-01-01
A two phase LANDSAT-based sample allocation and wheat proportion estimation method was developed. This technique employs manual, LANDSAT full frame-based wheat or cultivated land proportion estimates from a large number of segments comprising a first sample phase to optimally allocate a smaller phase two sample of computer or manually processed segments. Application to the Kansas Southwest CRD for 1974 produced a wheat acreage estimate for that CRD within 2.42 percent of the USDA SRS-based estimate, using a lower CRD inventory budget than for a simulated reference LACIE system. Improvements in cost or precision by a factor of 2 or greater relative to the reference system were obtained.
Estimating Crop Growth Stage by Combining Meteorological and Remote Sensing Based Techniques
NASA Astrophysics Data System (ADS)
Champagne, C.; Alavi-Shoushtari, N.; Davidson, A. M.; Chipanshi, A.; Zhang, Y.; Shang, J.
2016-12-01
Estimations of seeding, harvest and phenological growth stage of crops are important sources of information for monitoring crop progress and crop yield forecasting. Growth stage has been traditionally estimated at the regional level through surveys, which rely on field staff to collect the information. Automated techniques to estimate growth stage have included agrometeorological approaches that use temperature and day length information to estimate accumulated heat and photoperiod, with thresholds used to determine when these stages are most likely. These approaches, however, are crop and hybrid dependent, and can give widely varying results depending on the method used, particularly if the seeding date is unknown. Methods to estimate growth stage from remote sensing have progressed greatly in the past decade, with time series information from the Normalized Difference Vegetation Index (NDVI) the most common approach. Time series NDVI provide information on growth stage through a variety of techniques, including fitting functions to a series of measured NDVI values or smoothing these values and using thresholds to detect changes in slope that are indicative of rapidly increasing or decreasing `greenness' in the vegetation cover. The key limitations of these techniques for agriculture are frequent cloud cover in optical data that leads to errors in estimating local features in the time series function, and the incongruity between changes in greenness and traditional agricultural growth stages. There is great potential to combine both meteorological approaches and remote sensing to overcome the limitations of each technique. This research will examine the accuracy of both meteorological and remote sensing approaches over several agricultural sites in Canada, and look at the potential to integrate these techniques to provide improved estimates of crop growth stage for common field crops.
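A threshold-on-slope green-up detector of the kind described can be sketched in a few lines; the smoothing window and slope threshold below are illustrative assumptions:

```python
import numpy as np

def detect_green_up(doy, ndvi, window=5, slope_thresh=0.004):
    """Smooth an NDVI time series with a moving average and return the first
    day-of-year where the slope exceeds a threshold ('green-up'), or None.
    doy: day-of-year values; ndvi: matching NDVI observations."""
    doy = np.asarray(doy, dtype=float)
    kernel = np.ones(window) / window
    smooth = np.convolve(ndvi, kernel, mode="same")   # simple moving average
    slope = np.gradient(smooth, doy)                  # d(NDVI)/d(day)
    idx = np.argmax(slope > slope_thresh)             # first crossing (0 if none)
    return doy[idx] if slope[idx] > slope_thresh else None
```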
Isotopic Techniques for Assessment of Groundwater Discharge to the Coastal Ocean
2003-09-30
William C. Burnett, Department of Oceanography, Florida State University. Our long-term goal is to develop geochemical tools (e.g., radon and …) for evaluating the influence of submarine groundwater discharge (SGD) into the coastal ocean. Our best estimate of the pore water Rn activity is based on an average groundwater concentration of 170 dpm/L.
Verification of Agricultural Methane Emission Inventories
NASA Astrophysics Data System (ADS)
Desjardins, R. L.; Pattey, E.; Worth, D. E.; VanderZaag, A.; Mauder, M.; Srinivasan, R.; Worthy, D.; Sweeney, C.; Metzger, S.
2017-12-01
It is estimated that agriculture contributes more than 40% of anthropogenic methane (CH4) emissions in North America. However, these estimates, which are either based on the Intergovernmental Panel on Climate Change (IPCC) methodology or inverse modeling techniques, are poorly validated due to the challenges of separating interspersed CH4 sources within agroecosystems. A flux aircraft, instrumented with a fast-response Picarro CH4 analyzer for the eddy covariance (EC) technique and a sampling system for the relaxed eddy accumulation technique (REA), was flown at an altitude of about 150 m along several 20-km transects over an agricultural region in Eastern Canada. For all flight days, the top-down CH4 flux density measurements were compared to the footprint adjusted bottom-up estimates based on an IPCC Tier II methodology. Information on the animal population, land use type and atmospheric and surface variables were available for each transect. Top-down and bottom-up estimates of CH4 emissions were found to be poorly correlated, and wetlands were the most frequent confounding source of CH4; however, there were other sources such as waste treatment plants and biodigesters. Spatially resolved wavelet covariance estimates of CH4 emissions helped identify the contribution of wetlands to the overall CH4 flux, and the dependence of these emissions on temperature. When wetland contribution in the flux footprint was minimized, top-down and bottom-up estimates agreed to within measurement error. This research demonstrates that although existing aircraft-based technology can be used to verify regional (~100 km2) agricultural CH4 emissions, it remains challenging due to diverse sources of CH4 present in many regions. The use of wavelet covariance to generate spatially-resolved flux estimates was found to be the best way to separate interspersed sources of CH4.
Multidimensional Test Assembly Based on Lagrangian Relaxation Techniques. Research Report 98-08.
ERIC Educational Resources Information Center
Veldkamp, Bernard P.
In this paper, a mathematical programming approach is presented for the assembly of ability tests measuring multiple traits. The values of the variance functions of the estimators of the traits are minimized, while test specifications are met. The approach is based on Lagrangian relaxation techniques and provides good results for the two…
Imputation and Model-Based Updating Technique for Annual Forest Inventories
Ronald E. McRoberts
2001-01-01
The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...
On the Methods for Estimating the Corneoscleral Limbus.
Jesus, Danilo A; Iskander, D Robert
2017-08-01
The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized by high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods led to statistically significant differences (nonparametric analysis of variance (ANOVA) test, p < 0.05). Precise topographical limbus demarcation is possible either from the frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only-based techniques. The experimental findings have shown that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.
Hemsley, Victoria S; Smyth, Timothy J; Martin, Adrian P; Frajka-Williams, Eleanor; Thompson, Andrew F; Damerell, Gillian; Painter, Stuart C
2015-10-06
An autonomous underwater vehicle (Seaglider) has been used to estimate marine primary production (PP) using a combination of irradiance and fluorescence vertical profiles. This method provides estimates for depth-resolved and temporally evolving PP on fine spatial scales in the absence of ship-based calibrations. We describe techniques to correct for known issues associated with long autonomous deployments such as sensor calibration drift and fluorescence quenching. Comparisons were made between the Seaglider, stable isotope ((13)C), and satellite estimates of PP. The Seaglider-based PP estimates were comparable to both satellite estimates and stable isotope measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambie, F.W.; Yee, S.N.
The purpose of this and a previous project was to examine the feasibility of estimating intermediate grade uranium (0.01 to 0.05% U3O8) on the basis of existing, sparsely drilled holes. All data are from the Powder River Basin in Wyoming. DOE makes preliminary estimates of endowment by calculating an Average Area of Influence (AAI) based on densely drilled areas, multiplying that by the thickness of the mineralization and then dividing by a tonnage factor. The resulting tonnage of ore is then multiplied by the average grade of the interval to obtain the estimate of U3O8 tonnage. Total endowment is the sum of these values over all mineralized intervals in all wells in the area. In regions where wells are densely drilled and approximately regularly spaced this technique approaches the classical polygonal estimation technique used to estimate ore reserves and should be fairly reliable. The method is conservative because: (1) in sparsely drilled regions a large fraction of the area is not considered to contribute to endowment; (2) there is a bias created by the different distributions of point grades and mining block grades. A conservative approach may be justified for purposes of ore reserve estimation, where large investments may hinge on local forecasts. But for estimates of endowment over areas as large as 1° by 2° quadrangles, or the nation as a whole, errors in local predictions are not critical as long as they tend to cancel, and a less conservative estimation approach may be justified. One candidate, developed for this study and described here, is called the contoured thickness technique. A comparison of estimates based on the contoured thickness approach with DOE calculations for five areas of Wyoming roll-fronts in the Powder River Basin is presented. The sensitivity of the technique to well density is examined and the question of predicting intermediate grade endowment from data on higher grades is discussed.
NASA Astrophysics Data System (ADS)
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.
2017-01-01
Recently, a class of no-gold-standard (NGS) techniques have been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
Location estimation in wireless sensor networks using spring-relaxation technique.
Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M
2010-01-01
Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to its massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) is not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and the cooperative approach to achieve certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
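The relaxation step itself is compact: each range residual acts as a spring force on the node's position estimate. A minimal sketch assuming a 2-D network with noise-free anchor positions; the step size and iteration count are illustrative:

```python
import numpy as np

def spring_relaxation(pos, anchors, ranges, iters=200, step=0.1):
    """Iteratively move an unknown node along the net 'spring force' implied
    by range measurements to anchors with known positions.
    pos: initial 2-D guess; ranges[i] is the measured distance to anchors[i]."""
    pos = np.asarray(pos, dtype=float)
    for _ in range(iters):
        force = np.zeros(2)
        for a, r in zip(anchors, ranges):
            d = pos - np.asarray(a, dtype=float)
            dist = np.linalg.norm(d) + 1e-12
            # Positive residual (r > dist) pushes the node away from the anchor.
            force += (r - dist) * d / dist
        pos += step * force
    return pos
```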
NASA Astrophysics Data System (ADS)
Cherry, M.; Dierken, J.; Boehnlein, T.; Pilchak, A.; Sathish, S.; Grandhi, R.
2018-01-01
A new technique for performing quantitative scanning acoustic microscopy imaging of Rayleigh surface wave (RSW) velocity was developed based on b-scan processing. In this technique, the focused acoustic beam is moved through many defocus distances over the sample and excited with an impulse excitation, and advanced algorithms based on frequency filtering and the Hilbert transform are used to post-process the b-scans to estimate the Rayleigh surface wave velocity. The new method was used to estimate the RSW velocity on an optically flat E6 glass sample; the velocity was measured to within ±2 m/s and the scanning time per point was on the order of 1.0 s, both improvements over the previous two-point defocus method. The new method was also applied to the analysis of two titanium samples, and the velocity was estimated with very low standard deviation in certain large grains on the sample. A new behavior was observed with the b-scan analysis technique, where the amplitude of the surface wave decayed dramatically for certain crystallographic orientations. The new technique was also compared with previous results and has been found to be much more reliable and to have higher contrast than previously possible with impulse excitation.
Oelze, Michael L; Mamou, Jonathan
2016-02-01
Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation, and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years, QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient (BSC), estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter (ESD) and the effective acoustic concentration (EAC) of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and preclinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy.
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, namely the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940
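The dividing method and the coefficient-of-variation score used to rank the correction techniques can be sketched directly; the median kernel size here is an assumption:

```python
import numpy as np
from scipy.ndimage import median_filter

def divide_correct(channel, kernel=51):
    """Dividing method: estimate background illumination with a large median
    filter and divide it out; the kernel size is an illustrative choice."""
    background = median_filter(channel.astype(float), size=kernel) + 1e-6
    corrected = channel / background
    return corrected / corrected.max()   # rescale to [0, 1]

def coefficient_of_variation(img):
    """Lower CV indicates more uniform illumination after correction."""
    return img.std() / img.mean()
```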
Estimating monthly temperature using point based interpolation techniques
NASA Astrophysics Data System (ADS)
Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi
2013-04-01
This paper discusses the use of point based interpolation to estimate the value of temperature at unallocated meteorology stations in Peninsular Malaysia using data for the year 2010 collected from the Malaysian Meteorology Department. Two point based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable for estimating the temperature for the rest of the months.
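IDW reduces to a distance-weighted average and fits in a few lines; the RBF counterparts are available off the shelf (e.g., scipy.interpolate.Rbf with function='thin_plate' or 'multiquadric'). A minimal sketch with an assumed power parameter of 2:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighted estimate at query locations.
    xy_known: (n, 2) station coordinates; values: (n,) observed temperatures;
    xy_query: (m, 2) locations to estimate."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    est = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < 1e-12):              # query coincides with a station
            est[i] = values[np.argmin(d)]
            continue
        w = 1.0 / d**power
        est[i] = np.sum(w * values) / np.sum(w)
    return est
```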
A performance model for GPUs with caches
Dao, Thanh Tuan; Kim, Jungwon; Seo, Sangmin; ...
2014-06-24
To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution to the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques which improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching as implemented in modern GPUs have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime performance estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, on average, the proposed model estimates the runtime performance with less than 7 percent error for a second-generation GTX 280 with no on-chip caches and less than 5 percent for the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On an AMD GPU architecture, Radeon HD 6970, the model estimates with error rates of 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.
Center of pressure based segment inertial parameters validation
Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane
2017-01-01
By proposing efficient methods for Body Segment Inertial Parameter (BSIP) estimation and validating them with a force plate, it is possible to improve the inverse dynamic computations that are necessary in multiple research areas. To date, a variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic parameters and kinetic parameters (contact forces) gathered from an optical motion capture system and a force plate, respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth, and reconstructed the displacements of the Center of Pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation, however, is large, and the results show very distinguishable patterns of the COP movements. Improving BSIP techniques is crucial, as deviation from the estimations can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, it shows the accuracy of those parameters. PMID:28662090
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahl, D.E.; Jakowatz, C.V. Jr.; Ghiglia, D.C.
1991-01-01
Autofocus methods in SAR and self-survey techniques in SONAR have a common mathematical basis in that they both involve estimation and correction of phase errors introduced by sensor position uncertainties. Time delay estimation and correlation methods have been shown to be effective in solving the self-survey problem for towed SONAR arrays. Since it can be shown that platform motion errors introduce similar time-delay estimation problems in SAR imaging, the question arises as to whether such techniques could be effectively employed for autofocus of SAR imagery. With a simple mathematical model for motion errors in SAR, we will show why such correlation/time-delay techniques are not nearly as effective as established SAR autofocus algorithms such as phase gradient autofocus or sub-aperture based methods. This analysis forms an important bridge between signal processing methodologies for SAR and SONAR. 5 refs., 4 figs.
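The correlation/time-delay building block discussed here is simply the lag of the cross-correlation peak between two sensor signals. A minimal sketch, with the sampling rate and sign convention as the only assumptions:

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Estimate the time delay of y relative to x from the peak of their
    cross-correlation; x and y are equal-length signals sampled at fs Hz."""
    corr = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    lag = np.argmax(corr) - (len(x) - 1)   # zero lag sits at index len(x)-1
    return lag / fs                        # positive: y lags x
```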
A quantitative investigation of the fracture pump-in/flowback test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plahn, S.V.; Nolte, K.G.; Thompson, L.G.
1997-02-01
Fracture-closure pressure is an important parameter for fracture treatment design and evaluation. The pump-in/flowback (PIFB) test is frequently used to estimate its magnitude. The test is attractive because bottomhole pressures (BHPs) during flowback develop a distinct and repeatable signature. This is in contrast to the pump-in/shut-in test, where strong indications of fracture closure are rarely seen. Various techniques are used to extract closure pressure from the flowback-pressure response. Unfortunately, these techniques give different estimates for closure pressure, and their theoretical bases are not well established. The authors present results that place the PIFB test on a firmer foundation. A numerical model is used to simulate the PIFB test and glean physical mechanisms contributing to the response. On the basis of their simulation results, they propose interpretation techniques that give better estimates of closure pressure than existing techniques.
Wavelet Analyses of Oil Prices, USD Variations and Impact on Logistics
NASA Astrophysics Data System (ADS)
Melek, M.; Tokgozlu, A.; Aslan, Z.
2009-07-01
This paper is concerned with temporal variations of historical oil prices and of the Dollar and Euro in Turkey. Daily data based on OECD and Central Bank of Turkey records beginning in 1946 have been considered. 1D continuous wavelet and wavelet packet analysis techniques have been applied to the data. Wavelet techniques help to detect abrupt changes and increasing and decreasing trends in data. Estimation of the variables is presented using linear regression techniques. The results of this study have been compared with small- and large-scale effects. Truck transportation costs show a variation similar to that of fuel prices. The second part of the paper concerns estimation of imports, exports, costs, total number of vehicles and their annual variations by considering the temporal variation of oil prices and the Dollar currency in Turkey. Wavelet techniques offer a user-friendly methodology for interpreting some local effects on the increasing trends of import and export data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, J.J. Jr.; Hyder, Z.
The Nguyen and Pinder method is one of four techniques commonly used for analysis of response data from slug tests. Limited field research has raised questions about the reliability of the parameter estimates obtained with this method. A theoretical evaluation of this technique reveals that errors were made in the derivation of the analytical solution upon which the technique is based. Simulation and field examples show that the errors result in parameter estimates that can differ from actual values by orders of magnitude. These findings indicate that the Nguyen and Pinder method should no longer be a tool in the repertoire of the field hydrogeologist. If data from a slug test performed in a partially penetrating well in a confined aquifer need to be analyzed, recent work has shown that the Hvorslev method is the best alternative among the commonly used techniques.
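For reference, the Hvorslev alternative recommended here reduces to a closed-form estimate. A minimal sketch of the textbook form K = r_c^2 ln(L_e/R) / (2 L_e t_37), where t_37 is the time for the normalized head to decay to 0.37 (1/e); the validity condition L_e/R > 8 is part of the standard formulation, not a statement from this abstract:

```python
import numpy as np

def hvorslev_k(r_c, R, L_e, t, h, h0):
    """Hvorslev slug-test estimate of hydraulic conductivity.
    r_c: casing radius; R: screen radius; L_e: screen length (L_e/R > 8);
    t, h: head-displacement time series (h decaying with t); h0: initial
    displacement. Units must be consistent (e.g., m and s)."""
    ratio = np.asarray(h, dtype=float) / h0
    # np.interp needs increasing x, so reverse the decaying ratio series.
    t37 = np.interp(0.37, ratio[::-1], np.asarray(t, dtype=float)[::-1])
    return r_c**2 * np.log(L_e / R) / (2.0 * L_e * t37)
```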
NASA Astrophysics Data System (ADS)
Nakamura, T. K. M.; Nakamura, R.; Varsani, A.; Genestreti, K. J.; Baumjohann, W.; Liu, Y.-H.
2018-05-01
A remote sensing technique to infer the local reconnection electric field based on in situ multipoint spacecraft observation at the reconnection separatrix is proposed. In this technique, the increment of the reconnected magnetic flux is estimated by integrating the in-plane magnetic field during the sequential observation of the separatrix boundary by multipoint measurements. We tested this technique by applying it to virtual observations in a two-dimensional fully kinetic particle-in-cell simulation of magnetic reconnection without a guide field and confirmed that the estimated reconnection electric field indeed agrees well with the exact value computed at the X-line. We then applied this technique to an event observed by the Magnetospheric Multiscale mission when crossing an energetic plasma sheet boundary layer during an intense substorm. The estimated reconnection electric field for this event is nearly 1 order of magnitude higher than a typical value of magnetotail reconnection.
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimators' lack of closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow for the number of parameters to diverge to infinity as well as collinearity among a large number of variables; the redundant parameters are also set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
Extending the Distributed Lag Model framework to handle chemical mixtures.
Bello, Ghalib A; Arora, Manish; Austin, Christine; Horton, Megan K; Wright, Robert O; Gennings, Chris
2017-07-01
Distributed Lag Models (DLMs) are used in environmental health studies to analyze the time-delayed effect of an exposure on an outcome of interest. Given the increasing need for analytical tools for evaluation of the effects of exposure to multi-pollutant mixtures, this study attempts to extend the classical DLM framework to accommodate and evaluate multiple longitudinally observed exposures. We introduce 2 techniques for quantifying the time-varying mixture effect of multiple exposures on an outcome of interest. Lagged WQS, the first technique, is based on Weighted Quantile Sum (WQS) regression, a penalized regression method that estimates mixture effects using a weighted index. We also introduce Tree-based DLMs, a nonparametric alternative for assessment of lagged mixture effects. This technique is based on the Random Forest (RF) algorithm, a nonparametric, tree-based estimation technique that has shown excellent performance in a wide variety of domains. In a simulation study, we tested the feasibility of these techniques and evaluated their performance in comparison to standard methodology. Both methods exhibited relatively robust performance, accurately capturing pre-defined non-linear functional relationships in different simulation settings. Further, we applied these techniques to data on perinatal exposure to environmental metal toxicants, with the goal of evaluating the effects of exposure on neurodevelopment. Our methods identified critical neurodevelopmental windows showing significant sensitivity to metal mixtures. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
Estimation of the Doppler centroid ambiguity is a necessary element of the signal processing for SAR systems with large antenna pointing errors. Without proper resolution of the Doppler centroid estimation (DCE) ambiguity, the image quality will be degraded in the system impulse response function and the geometric fidelity. Two techniques for resolution of the DCE ambiguity for spaceborne SAR are presented: a brief review of the range cross-correlation technique and a new technique using multiple pulse repetition frequencies (PRFs). For SAR systems where other performance factors control selection of the PRFs, an algorithm is devised to resolve the ambiguity that uses PRFs of arbitrary numerical values. The performance of this multiple PRF technique is analyzed based on a statistical error model. An example demonstrates that, for the Shuttle Imaging Radar-C (SIR-C) C-band SAR, the probability of correct ambiguity resolution is higher than 95 percent for antenna attitude errors as large as 3 deg.
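One simple way to exploit multiple PRFs, sketched below, is a consistency search over integer ambiguity numbers: each PRF's baseband estimate plus a candidate multiple of that PRF should agree on the true centroid. This is a generic illustration under that aliasing model, not necessarily the paper's algorithm; the search bound k_max is an assumption:

```python
import numpy as np

def resolve_ambiguity(f_base, prfs, k_max=5):
    """Resolve the Doppler centroid ambiguity from baseband estimates at
    several PRFs. f_base[i] lies in [-prfs[i]/2, prfs[i]/2); the search cost
    is (2*k_max+1)**len(prfs), fine for the usual 2-3 PRFs."""
    best, best_spread = None, np.inf
    for k in np.ndindex(*(2 * k_max + 1,) * len(prfs)):
        cand = [f + (ki - k_max) * p for f, ki, p in zip(f_base, k, prfs)]
        spread = np.ptp(cand)          # candidates should agree when unaliased
        if spread < best_spread:
            best, best_spread = float(np.mean(cand)), spread
    return best
```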
A Hybrid Method in Vegetation Height Estimation Using PolInSAR Images of Campaign BioSAR
NASA Astrophysics Data System (ADS)
Dehnavi, S.; Maghsoudi, Y.
2015-12-01
Recently, there has been plenty of research on the retrieval of forest height from PolInSAR data. This paper evaluates a hybrid method for vegetation height estimation based on L-band multi-polarized airborne SAR images. The SAR data used in this paper were collected by the airborne E-SAR system. The objective of this research is firstly to describe each interferometric cross correlation as a sum of contributions corresponding to single bounce, double bounce and volume scattering processes. Then, an ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm is implemented to determine the interferometric phase of each local scatterer (ground and canopy). Secondly, the canopy height is estimated by the phase differencing method, according to the RVOG (Random Volume Over Ground) concept. The applied model-based decomposition method is unrivaled in that it is not limited to a specific type of vegetation, unlike previous decomposition techniques. In fact, the use of a generalized probability density function based on the nth power of a cosine-squared function, which is characterized by two parameters, makes this method applicable to different vegetation types. Experimental results show the efficiency of the approach for vegetation height estimation in the test site.
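The core ESPRIT step named above has a compact generic form for estimating complex-exponential parameters from a 1-D signal. This sketch is the standard least-squares ESPRIT on a Hankel data matrix, not the paper's PolInSAR-specific implementation; for a real-valued tone, use p = 2 to capture the conjugate pair:

```python
import numpy as np

def esprit_freqs(x, p):
    """Estimate p complex-exponential frequencies (cycles/sample) from a 1-D
    signal via least-squares ESPRIT on a Hankel data matrix."""
    n = len(x)
    m = n // 2                                              # subarray length
    H = np.lib.stride_tricks.sliding_window_view(x, m).T    # m x (n-m+1) Hankel
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    S = U[:, :p]                                            # signal subspace
    # Shift invariance: S[:-1] @ Phi ~ S[1:]; eigenvalues of Phi are exp(j*2*pi*f).
    Phi = np.linalg.lstsq(S[:-1], S[1:], rcond=None)[0]
    return np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi)
```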
Fišer, Jaromír; Zítek, Pavel; Skopec, Pavel; Knobloch, Jan; Vyhlídal, Tomáš
2017-05-01
The purpose of the paper is to achieve a constrained estimation of process state variables using the anisochronic state observer tuned by the dominant root locus technique. The anisochronic state observer is based on a state-space time delay model of the process. Moreover, the process model is identified as not only delayed but also non-linear. This model is developed to describe a material flow process. The root locus technique combined with the magnitude optimum method is utilized to investigate the estimation process. The resulting dominant root location serves as a measure of estimation performance: the higher the dominant (natural) frequency at the leftmost position of the complex plane, the better the performance achieved, with good robustness. A model-based observer control methodology for material flow processes is also provided by means of the separation principle. For demonstration purposes, the computer-based anisochronic state observer is applied to strip temperature estimation in a hot strip finishing mill composed of seven stands. This application was the original motivation for the presented research. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Molitor, John
2012-03-01
Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
NASA Astrophysics Data System (ADS)
Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin
2017-11-01
Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on high-frequency-resolution analysis of the stator current. Compared with the discrete Fourier transform, parametric spectrum estimation techniques have higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to the large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem solved through singular value decomposition is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate that the method retains the frequency accuracy of parametric spectrum estimation at a computational cost compatible with online detection.
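Once the component frequencies are known, the amplitude/phase step is an ordinary linear least squares fit. A generic sketch, assuming the frequencies have already been estimated (e.g., by min-norm); all names are illustrative:

```python
import numpy as np

def fit_amplitudes_phases(t, signal, freqs):
    """Given component frequencies (Hz), fit amplitudes and phases by linear
    least squares: model the signal as sum_k a_k*cos(2*pi*f_k*t) + b_k*sin(...).
    np.linalg.lstsq solves the problem via SVD, as in the approach above."""
    cols = []
    for f in freqs:
        cols.append(np.cos(2 * np.pi * f * t))
        cols.append(np.sin(2 * np.pi * f * t))
    M = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(M, signal, rcond=None)
    amps = np.hypot(coef[0::2], coef[1::2])
    phases = np.arctan2(-coef[1::2], coef[0::2])   # A*cos(2*pi*f*t + phi) convention
    return amps, phases
```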
Hirve, Siddhivinayak; Vounatsou, Penelope; Juvekar, Sanjay; Blomstedt, Yulia; Wall, Stig; Chatterji, Somnath; Ng, Nawi
2014-03-01
We compared prevalence estimates of self-rated health (SRH) derived indirectly using four different small area estimation methods for the Vadu (small) area from the national Study on Global AGEing (SAGE) survey with estimates derived directly from the Vadu SAGE survey. The indirect synthetic estimate for Vadu was 24%, whereas the model-based estimates were 45.6% and 45.7%, with smaller prediction errors and comparable to the direct survey estimate of 50%. The model-based techniques were better suited to estimating the prevalence of SRH than the indirect synthetic method. We conclude that a simplified mixed effects regression model can produce valid small area estimates of SRH. © 2013 Published by Elsevier Ltd.
Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica
2015-01-01
Monoclonal antibodies (mAbs) are at present one of the fastest growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, with improvements achieved through trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of this kind of process, an optimization-based technique for estimating the kinetic parameters of the model is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed here using a particular model of mammalian cell culture as a case study, but it is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
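The PSO step is straightforward to sketch generically: particles explore the parameter space, pulled toward their own and the swarm's best error so far. A minimal sketch under common default hyperparameters (w, c1, c2 are assumptions, not the paper's settings):

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer for parameter estimation.
    cost: maps a parameter vector to the model-vs-data error;
    bounds: list of (low, high) per parameter. Returns (best_params, best_cost)."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    p_best, p_cost = x.copy(), np.array([cost(xi) for xi in x])
    g_best = p_best[p_cost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)                 # keep particles in bounds
        c = np.array([cost(xi) for xi in x])
        improved = c < p_cost
        p_best[improved], p_cost[improved] = x[improved], c[improved]
        g_best = p_best[p_cost.argmin()]
    return g_best, p_cost.min()
```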
Evanescent Field Based Photoacoustics: Optical Property Evaluation at Surfaces
Goldschmidt, Benjamin S.; Rudy, Anna M.; Nowak, Charissa A.; Tsay, Yowting; Whiteside, Paul J. D.; Hunt, Heather K.
2016-01-01
Here, we present a protocol to estimate material and surface optical properties using the photoacoustic effect combined with total internal reflection. Optical property evaluation of thin films and the surfaces of bulk materials is an important step in understanding new optical material systems and their applications. The method presented can estimate thickness and refractive index, and can exploit the absorptive properties of materials for detection. This metrology system uses evanescent field-based photoacoustics (EFPA), a field of research based upon the interaction of an evanescent field with the photoacoustic effect. This interaction and its resulting family of techniques allow EFPA to probe optical properties within a few hundred nanometers of the sample surface. This optical near field allows for the highly accurate estimation of material properties on the same scale as the field itself, such as refractive index and film thickness. With the use of EFPA and its subtechniques, such as total internal reflection photoacoustic spectroscopy (TIRPAS) and optical tunneling photoacoustic spectroscopy (OTPAS), it is possible to evaluate a material at the nanoscale with a consolidated instrument, without the need for many instruments and experiments that may be cost prohibitive. PMID:27500652
Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.
Rad, Kamiar Rahnama; Paninski, Liam
2010-01-01
Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural errorbars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
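As an illustration of the underlying machinery, the following sketch smooths a synthetic 2-D rate surface with Gaussian-process regression. For brevity it uses a Gaussian (least-squares) likelihood on binned rates rather than the point-process likelihood and marginal-likelihood hyperparameter fitting of the paper; the kernel parameters are fixed, hypothetical values.

```python
# Minimal Gaussian-process sketch for smoothing a 2-D rate surface.
import numpy as np

rng = np.random.default_rng(1)

# Evaluation grid and noisy "observed rates" at visited locations.
gx, gy = np.meshgrid(np.linspace(0, 1, 15), np.linspace(0, 1, 15))
X_grid = np.column_stack([gx.ravel(), gy.ravel()])
X_obs = rng.random((80, 2))
true_rate = lambda X: 10 * np.exp(-np.sum((X - 0.5) ** 2, axis=1) / 0.05)
y_obs = true_rate(X_obs) + rng.normal(0, 1.0, len(X_obs))

def rbf(A, B, ell=0.15, sig=5.0):
    # Squared-exponential kernel; ell controls the prior smoothness.
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
    return sig ** 2 * np.exp(-0.5 * d2 / ell ** 2)

noise = 1.0  # assumed observation noise standard deviation
K = rbf(X_obs, X_obs) + noise ** 2 * np.eye(len(X_obs))
K_star = rbf(X_grid, X_obs)

# Posterior mean and pointwise variance (the "natural errorbars").
alpha = np.linalg.solve(K, y_obs)
mean = K_star @ alpha
var = rbf(X_grid, X_grid).diagonal() - np.einsum(
    "ij,ji->i", K_star, np.linalg.solve(K, K_star.T))

rate_map = mean.reshape(gx.shape)
print(rate_map.shape, float(var.mean()))
```

In the full method the smoothness parameter ell would itself be fitted by maximizing the marginal likelihood rather than fixed in advance.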
NASA Astrophysics Data System (ADS)
Orlandi, A.; Ortolani, A.; Meneguzzo, F.; Levizzani, V.; Torricella, F.; Turk, F. J.
2004-03-01
In order to improve high-resolution forecasts, a specific method for assimilating rainfall rates into the Regional Atmospheric Modelling System model has been developed. It is based on the inversion of the Kuo convective parameterisation scheme. A nudging technique is applied to 'gently' increase with time the weight of the estimated precipitation in the assimilation process. A rough but manageable technique for estimating the partition between convective and stratiform precipitation, without requiring any ancillary measurement, is also explained. The method is general purpose, but it is tuned here for the assimilation of geostationary satellite rainfall estimates. Preliminary results are presented and discussed, both through totally simulated experiments and through experiments assimilating real satellite-based precipitation observations. For every case study, rainfall data are computed with a rapid-update satellite precipitation estimation algorithm based on IR and MW satellite observations. This research was carried out in the framework of the EURAINSAT project (an EC research project co-funded by the Energy, Environment and Sustainable Development Programme within the topic 'Development of generic Earth observation technologies', Contract number EVG1-2000-00030).
Michael D. Erickson; Curt C. Hassler; Chris B. LeDoux
1991-01-01
Continuous time and motion study techniques were used to develop productivity and cost estimators for the skidding component of ground-based logging systems operating on steep terrain using preplanned skid roads. Comparisons of productivity and costs were analyzed for an overland random access skidding method versus a skidding method utilizing a network of preplanned...
A program to form a multidisciplinary data base and analysis for dynamic systems
NASA Technical Reports Server (NTRS)
Taylor, L. W.; Suit, W. T.; Mayo, M. H.
1984-01-01
Diverse sets of experimental data and analysis programs have been assembled for the purpose of facilitating research in systems identification, parameter estimation and state estimation techniques. The data base analysis programs are organized to make it easy to compare alternative approaches. Additional data and alternative forms of analysis will be included as they become available.
Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei
2018-01-01
Leakage is the most important failure mode in aircraft hydraulic systems, caused by wear and tear between the friction pairs of components. The accurate detection of abrasive debris can reveal the wear condition and predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, which potentially enables a more accurate diagnosis and prognosis of an aviation hydraulic system's ongoing failures. To address the serious mixing of pipe abrasive debris, this paper focuses on separating the superimposed abrasive debris signals of an RMF abrasive sensor using the degenerate unmixing estimation technique. By accurately separating the debris signals and computing the morphology and amount of the abrasive debris, the RMF-based abrasive sensor can provide the system's wear trend and size estimates of the wear particles. A well-designed experiment was conducted, and the results show that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection. PMID:29543733
A Novel Continuous Blood Pressure Estimation Approach Based on Data Mining Techniques.
Miao, Fen; Fu, Nan; Zhang, Yuan-Ting; Ding, Xiao-Rong; Hong, Xi; He, Qingyun; Li, Ye
2017-11-01
Continuous blood pressure (BP) estimation using pulse transit time (PTT) is a promising method for unobtrusive BP measurement. However, the accuracy of this approach must be improved for it to be viable for a wide range of applications. This study proposes a novel continuous BP estimation approach that combines data mining techniques with a traditional mechanism-driven model. First, 14 features derived from simultaneous electrocardiogram and photoplethysmogram signals were extracted for beat-to-beat BP estimation. A genetic algorithm-based feature selection method was then used to select BP indicators for each subject. Multivariate linear regression and support vector regression were employed to develop the BP model. The accuracy and robustness of the proposed approach were validated for static, dynamic, and follow-up performance. Experimental results based on 73 subjects showed that the proposed approach exhibited excellent accuracy in static BP estimation, with a correlation coefficient and mean error of 0.852 and -0.001 ± 3.102 mmHg for systolic BP, and 0.790 and -0.004 ± 2.199 mmHg for diastolic BP. Similar performance was observed for dynamic BP estimation. The robustness results indicated that the estimation accuracy was lower by a certain degree one day after model construction but was relatively stable from one day to six months after construction. The proposed approach is superior to the state-of-the-art PTT-based model, with an approximately 2-mmHg reduction in the standard deviation at different time intervals, thus providing potentially novel insights for cuffless BP estimation.
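The regression stage of such a pipeline can be illustrated in a few lines. The sketch below fits a multivariate linear model from placeholder features to synthetic systolic BP; the 14 real ECG/PPG-derived features and the genetic-algorithm selection step are replaced by random data and a fixed index subset, so it shows the shape of the computation rather than the published model.

```python
# Sketch of the multivariate-linear-regression stage for beat-to-beat BP
# estimation. Features are random placeholders for the 14 ECG/PPG features.
import numpy as np

rng = np.random.default_rng(2)

n_beats, n_features = 500, 14
X = rng.normal(size=(n_beats, n_features))           # placeholder features
w_true = rng.normal(size=n_features)
sbp = 120 + X @ w_true + rng.normal(0, 3, n_beats)   # synthetic systolic BP

selected = [0, 2, 5, 7]                  # stand-in for GA-selected features
A = np.column_stack([X[:, selected], np.ones(n_beats)])
coef, *_ = np.linalg.lstsq(A, sbp, rcond=None)       # least-squares fit

sbp_hat = A @ coef
err = sbp - sbp_hat
print(f"mean error {err.mean():+.3f} mmHg, SD {err.std():.3f} mmHg, "
      f"r = {np.corrcoef(sbp, sbp_hat)[0, 1]:.3f}")
```

In the study, this linear model is fitted per subject after feature selection; the support-vector-regression variant would replace only the lstsq step.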
A Weighted Least Squares Approach To Robustify Least Squares Estimates.
ERIC Educational Resources Information Center
Lin, Chowhong; Davenport, Ernest C., Jr.
This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…
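A minimal sketch of this flavor of robustified least squares appears below: an initial fit on a distance-selected subsample, then reweighted fits over all points with weights that shrink for large residuals. The distance measure and weight function here are generic (Huber-like) stand-ins; the study's exact scheme may differ.

```python
# Hedged sketch of weighted-least-squares robustification: initial fit on
# the half of the data closest to the median response, then iteratively
# refit all points with residual-based downweighting of outliers.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 60)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, x.size)
y[::10] += 6.0                                  # inject outliers

def wls(X, y, w):
    # Weighted least squares via row-scaled ordinary least squares.
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

X = np.column_stack([np.ones_like(x), x])
# Initial subsample: points nearest the median response (a simple distance).
idx = np.argsort(np.abs(y - np.median(y)))[: x.size // 2]
beta = wls(X[idx], y[idx], np.ones(idx.size))

# Fold in all points, downweighting large residuals (Huber-like weights).
for _ in range(10):
    r = y - X @ beta
    s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale estimate
    w = np.minimum(1.0, 1.345 * s / (np.abs(r) + 1e-12))
    beta = wls(X, y, w)

print("robust fit (intercept, slope):", beta)
```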
Adaptive neuro fuzzy inference system-based power estimation method for CMOS VLSI circuits
NASA Astrophysics Data System (ADS)
Vellingiri, Govindaraj; Jayabalan, Ramesh
2018-03-01
Recent advancements in very large scale integration (VLSI) technologies have made it feasible to integrate millions of transistors on a single chip. This greatly increases circuit complexity, and hence there is a growing need for less tedious and low-cost power estimation techniques. The proposed work employs a Back-Propagation Neural Network (BPNN) and an Adaptive Neuro Fuzzy Inference System (ANFIS), which are capable of estimating the power precisely for complementary metal oxide semiconductor (CMOS) VLSI circuits without requiring any knowledge of circuit structure and interconnections. The application of ANFIS to power estimation is relatively new. Power estimation using ANFIS is carried out by creating initial FIS models using hybrid optimisation and back-propagation (BP) techniques employing constant and linear methods. It is inferred that ANFIS with the hybrid optimisation technique employing the linear method produces better results in terms of testing error, which varies from 0% to 0.86%, when compared to the BPNN, as it takes the initial fuzzy model and tunes it by means of a hybrid technique combining gradient descent BP and mean least-squares optimisation algorithms. ANFIS is thus well suited to the power estimation application, with a low RMSE of 0.0002075 and a high coefficient of determination (R) of 0.99961.
Recent Improvements in Estimating Convective and Stratiform Rainfall in Amazonia
NASA Technical Reports Server (NTRS)
Negri, Andrew J.
1999-01-01
In this paper we present results from the application of a satellite infrared (IR) technique for estimating rainfall over northern South America. Our main objectives are to examine the diurnal variability of rainfall and to investigate the relative contributions from the convective and stratiform components. We apply the technique of Anagnostou et al (1999). In simple functional form, the estimated rain area A(sub rain) may be expressed as: A(sub rain) = f(A(sub mode),T(sub mode)), where T(sub mode) is the mode temperature of a cloud defined by 253 K, and A(sub mode) is the area encompassed by T(sub mode). The technique was trained by a regression between coincident microwave estimates from the Goddard Profiling (GPROF) algorithm (Kummerow et al, 1996) applied to SSM/I data and GOES IR (11 microns) observations. The apportionment of the rainfall into convective and stratiform components is based on the microwave technique described by Anagnostou and Kummerow (1997). The convective area from this technique was regressed against an IR structure parameter (the Convective Index) defined by Anagnostou et al (1999). Finally, rainrates are assigned to the A(sub mode) proportional to (253 - temperature), with different rates for the convective and stratiform components.
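The final rain-rate assignment step is simple enough to show directly. In the toy sketch below, rates within the rain area are made proportional to (253 - T), with separate proportionality constants for convective and stratiform pixels; the constants and the convective mask are illustrative placeholders, not the calibrated values.

```python
# Toy sketch of the rain-rate assignment: within the rain area, rate is
# proportional to (253 - T), with separate convective/stratiform constants.
import numpy as np

rng = np.random.default_rng(4)
T = rng.uniform(200, 270, (8, 8))          # IR brightness temperatures (K)
convective = rng.random((8, 8)) < 0.3      # stand-in convective mask

rain_area = T <= 253.0                     # mode-temperature threshold
k_conv, k_strat = 0.40, 0.10               # mm/h per K, hypothetical values
rate = np.where(convective, k_conv, k_strat) * np.maximum(253.0 - T, 0.0)
rate[~rain_area] = 0.0

print(f"mean rain rate over rain area: {rate[rain_area].mean():.2f} mm/h")
```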
A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2001-01-01
In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.
Channel Estimation for Filter Bank Multicarrier Systems in Low SNR Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driggs, Jonathan; Sibbett, Taylor; Moradiy, Hussein
Channel estimation techniques are crucial for reliable communications. This paper is concerned with channel estimation in a filter bank multicarrier spread spectrum (FBMCSS) system. We explore two channel estimator options: (i) a method that makes use of a periodic preamble and mimics the channel estimation techniques that are widely used in OFDM-based systems; and (ii) a method that stays within the traditional realm of filter bank signal processing. For the case where the channel noise is white, both methods are analyzed in detail and their performance is compared against their respective Cramer-Rao Lower Bounds (CRLB). Advantages and disadvantages of the two methods under different channel conditions are given to provide insight to the reader as to when one will outperform the other.
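Estimator option (i) can be illustrated with a minimal frequency-domain least-squares sketch: divide the FFT of the received preamble period by the FFT of the known preamble, as in OFDM-style estimators. The channel taps and noise level below are synthetic, and the filter-bank-specific variant (ii) is not shown.

```python
# Sketch of option (i): OFDM-style frequency-domain least-squares channel
# estimation from a known periodic preamble (synthetic channel and noise).
import numpy as np

rng = np.random.default_rng(5)
N = 64
preamble = np.exp(2j * np.pi * rng.random(N))      # known unit-modulus symbols
h = np.array([0.9, 0.4 + 0.2j, 0.1])               # unknown channel taps

# One received preamble period; periodicity makes the convolution circular.
rx = np.fft.ifft(np.fft.fft(preamble) * np.fft.fft(h, N))
rx += rng.normal(0, 0.05, N) + 1j * rng.normal(0, 0.05, N)

# Least-squares estimate per subcarrier: H_hat[k] = R[k] / P[k].
H_hat = np.fft.fft(rx) / np.fft.fft(preamble)
h_hat = np.fft.ifft(H_hat)[: h.size]               # back to tap estimates

print(np.round(h_hat, 3))
```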
Determination of Time Dependent Virus Inactivation Rates
NASA Astrophysics Data System (ADS)
Chrysikopoulos, C. V.; Vogler, E. T.
2003-12-01
A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
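The resampling idea can be sketched compactly. The code below estimates a time-dependent inactivation rate lambda(t) = -d ln(C/C0)/dt from noisy decay data and bootstraps a percentile confidence band; a sliding least-squares slope stands in for the paper's universal-kriging estimator, and all data are synthetic.

```python
# Hedged sketch: local-slope estimation of a time-varying inactivation rate
# plus a bootstrap percentile confidence band (kriging replaced by a simple
# sliding least-squares fit for brevity).
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 30, 31)                          # days
# Ground truth lambda(t) = 0.10 + 0.004 t; its integral gives ln(C/C0).
logC = -(0.10 * t + 0.002 * t ** 2) + rng.normal(0, 0.05, t.size)

def local_slopes(t, y, half=3):
    # lambda(t_i) = minus the slope of a local least-squares line fit.
    lam = np.empty(t.size)
    for i in range(t.size):
        s = slice(max(0, i - half), min(t.size, i + half + 1))
        A = np.column_stack([np.ones(t[s].size), t[s]])
        lam[i] = -np.linalg.lstsq(A, y[s], rcond=None)[0][1]
    return lam

lam_hat = local_slopes(t, logC)

# Bootstrap: resample residuals around a smooth fit, re-estimate slopes.
fit = np.polyval(np.polyfit(t, logC, 3), t)
resid = logC - fit
boot = np.array([local_slopes(t, fit + rng.choice(resid, resid.size))
                 for _ in range(200)])
lo_b, hi_b = np.percentile(boot, [2.5, 97.5], axis=0)

print("lambda at t=0 and t=30:", round(float(lam_hat[0]), 3),
      round(float(lam_hat[-1]), 3))
print("mean 95% band width:", round(float((hi_b - lo_b).mean()), 3))
```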
Power strain imaging based on vibro-elastography techniques
NASA Astrophysics Data System (ADS)
Wen, Xu; Salcudean, S. E.
2007-03-01
This paper describes a new ultrasound elastography technique, power strain imaging, based on vibro-elastography (VE) techniques. With this method, tissue is compressed by a vibrating actuator driven by low-pass or band-pass filtered white noise, typically in the 0-20 Hz range. Tissue displacements at different spatial locations are estimated by correlation-based approaches on the raw ultrasound radio frequency signals and recorded in time sequences. The power spectra of these time sequences are computed by Fourier spectral analysis techniques. As the average of the power spectrum is proportional to the squared amplitude of the tissue motion, the square root of the average power over the range of excitation frequencies is used as a measure of the tissue displacement. Tissue strain is then determined by least squares estimation of the gradient of the displacement field. The computation of the power spectra of the time sequences can be implemented efficiently by using Welch's periodogram method with moving windows or with accumulative windows with a forgetting factor. Compared to the transfer function estimation originally used in VE, the computation of cross spectral densities is not needed, which saves both memory and computation time. Phantom experiments demonstrate that the proposed method produces stable and operator-independent strain images with a high signal-to-noise ratio in real time. The approach has also been tested on patient data from the prostate region, and the results are encouraging.
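A minimal sketch of the power strain measure follows: for each depth sample, take the Welch power spectrum of its displacement time sequence, average the spectrum over the excitation band, use the square root as the displacement amplitude, and estimate strain as the sliding least-squares gradient of that amplitude. The data, band limits, and window sizes below are illustrative assumptions.

```python
# Sketch of the power strain measure on synthetic displacement sequences.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)
fs, n_t, n_z = 100.0, 2048, 50               # sample rate, time, depth bins
t = np.arange(n_t) / fs
amp_true = np.linspace(1.0, 3.0, n_z)        # amplitude grows with depth
disp = amp_true[:, None] * np.sin(2 * np.pi * 8.0 * t)[None, :]
disp += rng.normal(0, 0.2, disp.shape)       # measurement noise

# Welch power spectrum of each depth's time sequence, averaged over the
# excitation band; its square root is proportional to displacement amplitude.
f, Pxx = welch(disp, fs=fs, nperseg=256, axis=-1)
band = (f >= 5.0) & (f <= 15.0)              # assumed excitation band
amp_hat = np.sqrt(Pxx[:, band].mean(axis=1))

# Strain: least-squares slope of amplitude over a sliding depth window.
dz, half = 0.5, 3                            # mm per bin, half window
z = np.arange(n_z) * dz
strain = np.empty(n_z)
for i in range(n_z):
    s = slice(max(0, i - half), min(n_z, i + half + 1))
    strain[i] = np.polyfit(z[s], amp_hat[s], 1)[0]

print("estimated amplitude gradient (arb. units):",
      round(float(strain.mean()), 4))
```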
Linear Estimation of Particle Bulk Parameters from Multi-Wavelength Lidar Measurements
NASA Technical Reports Server (NTRS)
Veselovskii, Igor; Dubovik, Oleg; Kolgotin, A.; Korenskiy, M.; Whiteman, D. N.; Allakhverdiev, K.; Huseyinoglu, F.
2012-01-01
An algorithm for linear estimation of aerosol bulk properties such as particle volume, effective radius and complex refractive index from multiwavelength lidar measurements is presented. The approach uses the fact that the total aerosol concentration can be well approximated as a linear combination of aerosol characteristics measured by multiwavelength lidar. Therefore, the aerosol concentration can be estimated from lidar measurements without the need to derive the size distribution, which entails more sophisticated procedures. The definition of the coefficients required for the linear estimates is based on an expansion of the particle size distribution in terms of the measurement kernels. Once the coefficients are established, the approach permits fast retrieval of aerosol bulk properties when compared with the full regularization technique. In addition, the straightforward estimation of bulk properties stabilizes the inversion, making it more resistant to noise in the optical data. Numerical tests demonstrate that for data sets containing three aerosol backscattering and two extinction coefficients (so-called 3β + 2α) the uncertainties in the retrieval of particle volume and surface area are below 45% when input data random uncertainties are below 20%. Moreover, using linear estimates allows reliable retrievals even when the number of input data is reduced. To evaluate the approach, the results obtained using this technique are compared with those based on the previously developed full inversion scheme that relies on the regularization procedure. Both techniques were applied to data measured by multiwavelength lidar at NASA/GSFC. The results obtained with both methods using the same observations are in good agreement. At the same time, the high speed of the retrieval using linear estimates makes the method preferable for generating aerosol information from extended lidar observations. To demonstrate the efficiency of the method, an extended time series of observations acquired in Turkey in May 2010 was processed using the linear estimates technique, permitting, for what we believe to be the first time, the retrieval of temporal-height distributions of particle parameters.
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2015-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
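The sensor-selection metric for the MAP estimator case can be sketched as follows: for a linear model y = Hx + v with prior covariance P0 and noise covariance R, the posterior error covariance is (H'R^-1 H + P0^-1)^-1; each candidate suite is scored by its trace, and the best suite is found by exhaustive search over optional sensors. The influence matrices below are random placeholders, not an engine model.

```python
# Hedged sketch of MAP-based sensor-suite scoring and exhaustive search.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)
n_health, n_sensors = 4, 8
H_full = rng.normal(size=(n_sensors, n_health))    # sensor sensitivities
R_full = np.diag(rng.uniform(0.5, 2.0, n_sensors)) # sensor noise variances
P0 = np.eye(n_health)                              # prior on health params

def map_error(idx):
    # Trace of posterior error covariance = theoretical sum of squared
    # estimation errors for the health parameters, given suite idx.
    idx = list(idx)
    H = H_full[idx]
    R_inv = np.linalg.inv(R_full[np.ix_(idx, idx)])
    P = np.linalg.inv(H.T @ R_inv @ H + np.linalg.inv(P0))
    return np.trace(P)

baseline = (0, 1, 2)                    # always-included baseline sensors
optional = tuple(range(3, n_sensors))   # candidate optional sensors
candidates = [baseline + extra
              for k in range(len(optional) + 1)
              for extra in combinations(optional, k)]
best = min(candidates, key=map_error)
print("best suite:", best, "score:", round(map_error(best), 4))
```

The Kalman-filter variant of the metric replaces the static posterior covariance with the filter's steady-state error covariance, but the search itself is unchanged.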
Shape and Spatially-Varying Reflectance Estimation from Virtual Exemplars.
Hui, Zhuo; Sankaranarayanan, Aswin C
2017-10-01
This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed view-point and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This assumption enables a per-pixel surface normal and BRDF estimation framework that is computationally tractable and requires no initialization in spite of the underlying problem being non-convex. Our estimation framework first solves for the surface normal at each pixel using a variant of example-based photometric stereo. We design an efficient multi-scale search strategy for estimating the surface normal and subsequently, refine this estimate using a gradient descent procedure. Given the surface normal estimate, we solve for the spatially-varying BRDF by constraining the BRDF at each pixel to be in the span of the BRDF dictionary; here, we use additional priors to further regularize the solution. A hallmark of our approach is that it does not require iterative optimization techniques nor the need for careful initialization, both of which are endemic to most state-of-the-art techniques. We showcase the performance of our technique on a wide range of simulated and real scenes where we outperform competing methods.
Disaster debris estimation using high-resolution polarimetric stereo-SAR
NASA Astrophysics Data System (ADS)
Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki
2016-10-01
This paper addresses the problem of debris estimation which is one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtain this information are far from being optimal as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from the illumination from opposite directions and in different polarizations. By applying model based decomposition of the coherency matrix, only the odd bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris taken on a temporary debris management site in the tsunami affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of derived pile heights allows for a voxel-based estimation of debris volumes with a RMSE of 1099 m3. Advantages of the proposed method are fast computation time, and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information like DEM, topographic maps or GCPs.
A three-dimensional muscle activity imaging technique for assessing pelvic muscle function
NASA Astrophysics Data System (ADS)
Zhang, Yingchun; Wang, Dan; Timm, Gerald W.
2010-11-01
A novel multi-channel surface electromyography (EMG)-based three-dimensional muscle activity imaging (MAI) technique has been developed by combining the bioelectrical source reconstruction approach and subject-specific finite element modeling approach. Internal muscle activities are modeled by a current density distribution and estimated from the intra-vaginal surface EMG signals with the aid of a weighted minimum norm estimation algorithm. The MAI technique was employed to minimally invasively reconstruct electrical activity in the pelvic floor muscles and urethral sphincter from multi-channel intra-vaginal surface EMG recordings. A series of computer simulations were conducted to evaluate the performance of the present MAI technique. With appropriate numerical modeling and inverse estimation techniques, we have demonstrated the capability of the MAI technique to accurately reconstruct internal muscle activities from surface EMG recordings. This MAI technique combined with traditional EMG signal analysis techniques is being used to study etiologic factors associated with stress urinary incontinence in women by correlating functional status of muscles characterized from the intra-vaginal surface EMG measurements with the specific pelvic muscle groups that generated these signals. The developed MAI technique described herein holds promise for eliminating the need to place needle electrodes into muscles to obtain accurate EMG recordings in some clinical applications.
NASA Technical Reports Server (NTRS)
Bedewi, Nabih E.; Yang, Jackson C. S.
1987-01-01
Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The results of an experiment conducted on an offshore platform scale model to verify the validity of the technique and to demonstrate its application in damage detection are presented.
Comprehensive comparison of gap filling techniques for eddy covariance net carbon fluxes
NASA Astrophysics Data System (ADS)
Moffat, A. M.; Papale, D.; Reichstein, M.; Hollinger, D. Y.; Richardson, A. D.; Barr, A. G.; Beckstein, C.; Braswell, B. H.; Churkina, G.; Desai, A. R.; Falge, E.; Gove, J. H.; Heimann, M.; Hui, D.; Jarvis, A. J.; Kattge, J.; Noormets, A.; Stauch, V. J.
2007-12-01
We review fifteen techniques for estimating missing values of net ecosystem CO2 exchange (NEE) in eddy covariance time series and evaluate their performance for different artificial gap scenarios, based on a set of ten benchmark datasets from six forested sites in Europe. The goal of gap filling is the reproduction of the NEE time series, and hence the present work focuses on estimating missing NEE values, not on editing or removing suspect values in these time series due to systematic errors in the measurements (e.g. nighttime flux, advection). The gap filling was examined by generating fifty secondary datasets with artificial gaps (ranging in length from single half-hours to twelve consecutive days) for each benchmark dataset and evaluating the performance with a variety of statistical metrics. The performance of the gap filling varied among sites and depended on the level of aggregation (native half-hourly time step versus daily); long gaps were more difficult to fill than short gaps, and differences among the techniques were more pronounced during the day than at night. The non-linear regression techniques (NLRs), the look-up table (LUT), marginal distribution sampling (MDS), and the semi-parametric model (SPM) generally showed good overall performance. The artificial neural network based techniques (ANNs) were generally, if only slightly, superior to the other techniques. The simple interpolation technique of mean diurnal variation (MDV) showed a moderate but consistent performance. Several sophisticated techniques (the dual unscented Kalman filter (UKF), the multiple imputation method (MIM), and the terrestrial biosphere model (BETHY)), as well as one of the ANNs and one of the NLRs, showed high biases which resulted in a low reliability of the annual sums, indicating that additional development might be needed. An uncertainty analysis comparing the estimated random error in the ten benchmark datasets with the artificial gap residuals suggested that the techniques are already at or very close to the noise limit of the measurements. Based on the techniques and site data examined here, the effect of gap filling on the annual sums of NEE is modest, with most techniques falling within a range of ±25 g C m-2 y-1.
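The simplest of these techniques, mean diurnal variation, is easy to make concrete: a missing half-hour is filled with the mean of valid observations at the same time of day within a sliding window of adjacent days. The sketch below assumes a +/- 7-day window and synthetic NEE data.

```python
# Sketch of mean diurnal variation (MDV) gap filling on synthetic NEE data.
import numpy as np

rng = np.random.default_rng(9)
n_days, n_hh = 60, 48                              # half-hourly records
tod = np.tile(np.arange(n_hh), n_days)             # time-of-day index
nee = -8 * np.sin(np.pi * tod / n_hh) ** 2 + rng.normal(0, 1, tod.size)
nee[rng.random(nee.size) < 0.2] = np.nan           # 20% artificial gaps

def mdv_fill(x, n_hh, window_days=7):
    # Fill each gap with the mean of valid values at the same half-hour
    # within +/- window_days around the gap's day.
    y = x.copy()
    days = x.size // n_hh
    for i in np.flatnonzero(np.isnan(x)):
        d, h = divmod(i, n_hh)
        lo, hi = max(0, d - window_days), min(days, d + window_days + 1)
        same_time = x[lo * n_hh + h : hi * n_hh : n_hh]
        if np.any(~np.isnan(same_time)):
            y[i] = np.nanmean(same_time)
    return y

filled = mdv_fill(nee, n_hh)
print("remaining gaps:", int(np.isnan(filled).sum()))
```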
NASA Astrophysics Data System (ADS)
Abe, O. E.; Otero Villamide, X.; Paparini, C.; Radicella, S. M.; Nava, B.; Rodríguez-Bouza, M.
2017-04-01
Global Navigation Satellite Systems (GNSS) have become a powerful tool used in surveying and mapping, air and maritime navigation, ionospheric/space weather research, and other applications. However, in some cases their maximum efficiency cannot be attained due to uncorrelated errors associated with the system measurements, caused mainly by the dispersive nature of the ionosphere. The ionosphere is commonly represented by the total number of electrons along the signal path at a particular height, known as the Total Electron Content (TEC). Many methods exist to estimate TEC, but their outputs are not uniform, which could be due to peculiarities in characterizing the biases inside the observables (measurements) and, in some cases, to the influence of the mapping function. Errors in TEC estimation could lead to wrong conclusions, which could be more critical in safety-of-life applications. This work investigated the performance of Ciraolo's and Gopi's GNSS-TEC calibration techniques during 5 geomagnetically quiet and disturbed days in October 2013, at grid points located in low and middle latitudes. The data used were obtained from GNSS ground-based receivers located at Borriana in Spain (40°N, 0°E; mid latitude) and Accra in Ghana (5.50°N, -0.20°E; low latitude). The calibrated TEC was compared with the TEC obtained from the European Geostationary Navigation Overlay Service Processing Set (EGNOS PS) TEC algorithm, which is considered as the reference data. The TEC derived from Global Ionospheric Maps (GIM) through the International GNSS Service (IGS) was also examined at the same grid points. The results showed that Ciraolo's calibration technique (a calibration technique based on carrier-phase measurements only) estimates TEC better at middle latitude in comparison to Gopi's technique (a calibration technique based on code and carrier-phase measurements). At the same time, Gopi's calibration was found to be more reliable at low latitude than Ciraolo's technique.
Potocki, J K; Tharp, H S
1993-01-01
Multiple model estimation is a viable technique for dealing with the spatial perfusion model mismatch associated with hyperthermia dosimetry. Using multiple models, spatial discrimination can be obtained without increasing the number of unknown perfusion zones. Two multiple model estimators based on the extended Kalman filter (EKF) are designed and compared with two EKFs based on single models having greater perfusion zone segmentation. Results given here indicate that multiple modelling is advantageous when the number of thermal sensors is insufficient for convergence of single model estimators having greater perfusion zone segmentation. In situations where sufficient measured outputs exist for greater unknown perfusion parameter estimation, the multiple model estimators and the single model estimators yield equivalent results.
NASA Astrophysics Data System (ADS)
Ahmed, Mousumi
Designing control techniques for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied, and an extensive study of a backstepping-based technique is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore the controller for cooperative and coordinating unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller in a three-dimensional environment for a single UAV is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller can compute three autopilot modes, i.e., velocity, ground heading (or course angle), and flight path angle, for tracking the unmanned vehicle. Numerical implementation is performed in MATLAB with the assumption of having perfect and full state information of the target to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. A consensus-based cooperative control is studied; such a consensus-based control problem can be viewed through algebraic graph theory concepts. The communication structure between the UAVs is represented by a dynamic graph, where UAVs are represented by the nodes and the communication links are represented by the edges. The previously designed controller is augmented to account for the group so as to obtain consensus based on their communication. A theoretical development of the controller for the cooperative group of UAVs is presented, and simulation results for different communication topologies are shown. This research also investigates the cases where the communication topology switches to a different topology at particular time instants. Lyapunov analysis is performed to show stability in all cases. Another important aspect of this dissertation research is to implement the controller for the case where perfect or full state information is not available. This necessitates the design of an estimator to estimate the system state. A nonlinear estimator, the Extended Kalman Filter (EKF), is first developed for target tracking with a single UAV. The uncertainties involved with the measurement model and dynamics model are considered as zero-mean Gaussian noises with known covariances. Measurements of the full state of the target are not available; only the range, elevation, and azimuth angle are available from an onboard seeker sensor. A separate EKF is designed to estimate the UAV's own state, whose measurements are available through onboard sensors. The controller computes the three control commands based on the estimated states of the target and its own states. Estimation-based control laws are also implemented for colored-noise measurement uncertainties, and the controller performance is shown with simulation results. The estimation-based control approach is then extended to the cooperative target tracking case. The target information is available to the network, and a separate estimator is used to estimate the target states. All of the UAVs in the network apply the same control law; the only difference is that each UAV updates the commands according to its connections. The simulation is performed for both fixed and time-varying communication topologies. Monte Carlo simulation is also performed with different noise samples to investigate the performance of the estimator. The proposed technique is shown to be simple and robust to noisy environments.
New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Lung, Shun-Fat
2017-01-01
A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation
NASA Technical Reports Server (NTRS)
Negri, Andrew J.; Adler, Robert F.
2002-01-01
The development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale is presented. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall will be presented. The technique is validated using available data sets and compared to other global rainfall products such as Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.
Neeser, Rudolph; Ackermann, Rebecca Rogers; Gain, James
2009-09-01
Various methodological approaches have been used for reconstructing fossil hominin remains in order to increase sample sizes and to better understand morphological variation. Among these, morphometric quantitative techniques for reconstruction are increasingly common. Here we compare the accuracy of three approaches--mean substitution, thin plate splines, and multiple linear regression--for estimating missing landmarks of damaged fossil specimens. Comparisons are made varying the number of missing landmarks, sample sizes, and the reference species of the population used to perform the estimation. The testing is performed on landmark data from individuals of Homo sapiens, Pan troglodytes and Gorilla gorilla, and nine hominin fossil specimens. Results suggest that when a small, same-species fossil reference sample is available to guide reconstructions, thin plate spline approaches perform best. However, if no such sample is available (or if the species of the damaged individual is uncertain), estimates of missing morphology based on a single individual (or even a small sample) of close taxonomic affinity are less accurate than those based on a large sample of individuals drawn from more distantly related extant populations using a technique (such as a regression method) able to leverage the information (e.g., variation/covariation patterning) contained in this large sample. Thin plate splines also show an unexpectedly large amount of error in estimating landmarks, especially over large areas. Recommendations are made for estimating missing landmarks under various scenarios. Copyright 2009 Wiley-Liss, Inc.
Robust Angle Estimation for MIMO Radar with the Coexistence of Mutual Coupling and Colored Noise.
Wang, Junxiang; Wang, Xianpeng; Xu, Dingjie; Bi, Guoan
2018-03-09
This paper deals with the joint estimation of direction-of-departure (DOD) and direction-of-arrival (DOA) in bistatic multiple-input multiple-output (MIMO) radar with the coexistence of unknown mutual coupling and spatial colored noise by developing a novel robust covariance tensor-based angle estimation method. In the proposed method, a third-order tensor is first formulated to capture the multidimensional nature of the received data. Then, taking advantage of the temporally uncorrelated characteristic of the colored noise and the banded complex symmetric Toeplitz structure of the mutual coupling matrices, a novel fourth-order covariance tensor is constructed to eliminate the influence of both spatial colored noise and mutual coupling. After a robust signal subspace estimate is obtained by using the higher-order singular value decomposition (HOSVD) technique, the rotational invariance technique is applied to obtain the DODs and DOAs. Compared with existing HOSVD-based subspace methods, the proposed method provides superior angle estimation performance and automatically pairs the DODs and DOAs. Results from numerical experiments are presented to verify the effectiveness of the proposed method.
Slade, Jeffrey W.; Adams, Jean V.; Christie, Gavin C.; Cuddy, Douglas W.; Fodale, Michael F.; Heinrich, John W.; Quinlan, Henry R.; Weise, Jerry G.; Weisser, John W.; Young, Robert J.
2003-01-01
Before 1995, Great Lakes streams were selected for lampricide treatment based primarily on qualitative measures of the relative abundance of larval sea lampreys, Petromyzon marinus. New integrated pest management approaches required standardized quantitative measures of sea lamprey. This paper evaluates historical larval assessment techniques and data and describes how new standardized methods for estimating abundance of larval and metamorphosed sea lampreys were developed and implemented. These new methods have been used to estimate larval and metamorphosed sea lamprey abundance in about 100 Great Lakes streams annually and to rank them for lampricide treatment since 1995. Implementation of these methods has provided a quantitative means of selecting streams for treatment based on treatment cost and estimated production of metamorphosed sea lampreys, provided managers with a tool to estimate potential recruitment of sea lampreys to the Great Lakes and the ability to measure the potential consequences of not treating streams, resulting in a more justifiable allocation of resources. The empirical data produced can also be used to simulate the impacts of various control scenarios.
Non-invasive pressure difference estimation from PC-MRI using the work-energy equation
Donati, Fabrizio; Figueroa, C. Alberto; Smith, Nicolas P.; Lamata, Pablo; Nordsletten, David A.
2015-01-01
Pressure difference is an accepted clinical biomarker for cardiovascular disease conditions such as aortic coarctation. Currently, measurements of pressure differences in the clinic rely on invasive techniques (catheterization), prompting development of non-invasive estimates based on blood flow. In this work, we propose a non-invasive estimation procedure deriving pressure difference from the work-energy equation for a Newtonian fluid. Spatial and temporal convergence is demonstrated on in silico Phase Contrast Magnetic Resonance Image (PC-MRI) phantoms with steady and transient flow fields. The method is also tested on an image dataset generated in silico from a 3D patient-specific Computational Fluid Dynamics (CFD) simulation and finally evaluated on a cohort of 9 subjects. The performance is compared to existing approaches based on steady and unsteady Bernoulli formulations as well as the pressure Poisson equation. The new technique shows good accuracy, robustness to noise, and robustness to the image segmentation process, illustrating the potential of this approach for non-invasive pressure difference estimation. PMID:26409245
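For orientation, the simplest of the comparator methods, the unsteady Bernoulli estimate along a centerline, can be written in a few lines: dp = -rho * (integral of dv/dt along the line + (v_out^2 - v_in^2)/2). The sketch below applies it to a synthetic pulsatile velocity field standing in for PC-MRI data; it is not the paper's full work-energy formulation.

```python
# Sketch of an unsteady Bernoulli pressure-difference estimate along a
# centerline, using a synthetic pulsatile velocity field.
import numpy as np

rho = 1060.0                                  # blood density, kg/m^3
n_pts, n_frames, dt = 50, 20, 0.04            # centerline samples, frames
l = np.linspace(0.0, 0.05, n_pts)             # centerline arclength (m)
tt = np.arange(n_frames) * dt

# Synthetic velocity: flow accelerates along a narrowing, pulsates in time.
profile = 1.0 + 2.0 * l / l[-1]               # velocity grows along the line
v = (1.0 + 0.5 * np.sin(2 * np.pi * tt / 0.8))[:, None] * profile[None, :]

dvdt = np.gradient(v, dt, axis=0)             # temporal acceleration term
dp = np.empty(n_frames)
for k in range(n_frames):
    unsteady = np.trapz(dvdt[k], l)           # integral of dv/dt along line
    convective = 0.5 * (v[k, -1] ** 2 - v[k, 0] ** 2)
    dp[k] = -rho * (unsteady + convective)    # Pa, inlet-to-outlet

print(f"peak |dp|: {np.abs(dp).max() / 133.322:.2f} mmHg")
```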
NASA Astrophysics Data System (ADS)
Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.
2017-07-01
Computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over the wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of the computational speed.
Ackerman, Joshua T.; Eagles-Smith, Collin A.
2010-01-01
Floating bird eggs to estimate their age is a widely used technique, but few studies have examined its accuracy throughout incubation. We assessed egg flotation for estimating hatch date, day of incubation, and the embryo's developmental age in eggs of the American Avocet (Recurvirostra americana), Black-necked Stilt (Himantopus mexicanus), and Forster's Tern (Sterna forsteri). Predicted hatch dates based on egg flotation during our first visit to a nest were highly correlated with actual hatch dates (r = 0.99) and accurate within 2.3 ± 1.7 (SD) days. Age estimates based on flotation were correlated with both day of incubation (r = 0.96) and the embryo's developmental age (r = 0.86) and accurate within 1.3 ± 1.6 days and 1.9 ± 1.6 days, respectively. However, the technique's accuracy varied substantially throughout incubation. Flotation overestimated the embryo's developmental age between 3 and 9 days, underestimated age between 12 and 21 days, and was most accurate between 0 and 3 days and 9 and 12 days. Age estimates based on egg flotation were generally accurate within 3 days until day 15 but later in incubation were biased progressively lower. Egg flotation was inaccurate and overestimated embryo age in abandoned nests (mean error: 7.5 ± 6.0 days). The embryo's developmental age and day of incubation were highly correlated (r = 0.94), differed by 2.1 ± 1.6 days, and resulted in similar assessments of the egg-flotation technique. Floating every egg in the clutch and refloating eggs at subsequent visits to a nest can refine age estimates.
Thoracic respiratory motion estimation from MRI using a statistical model and a 2-D image navigator.
King, A P; Buerger, C; Tsoumpas, C; Marsden, P K; Schaeffter, T
2012-01-01
Respiratory motion models have potential application for estimating and correcting the effects of motion in a wide range of applications, for example in PET-MR imaging. Given that motion cycles caused by breathing are only approximately repeatable, an important quality of such models is their ability to capture and estimate the intra- and inter-cycle variability of the motion. In this paper we propose and describe a technique for free-form nonrigid respiratory motion correction in the thorax. Our model is based on a principal component analysis of the motion states encountered during different breathing patterns, and is formed from motion estimates made from dynamic 3-D MRI data. We apply our model using a data-driven technique based on a 2-D MRI image navigator. Unlike most previously reported work in the literature, our approach is able to capture both intra- and inter-cycle motion variability. In addition, the 2-D image navigator can be used to estimate how applicable the current motion model is, and hence report when more imaging data is required to update the model. We also use the motion model to decide on the best positioning for the image navigator. We validate our approach using MRI data acquired from 10 volunteers and demonstrate improvements of up to 40.5% over other reported motion modelling approaches, which corresponds to 61% of the overall respiratory motion present. Finally we demonstrate one potential application of our technique: MRI-based motion correction of real-time PET data for simultaneous PET-MRI acquisition. Copyright © 2011 Elsevier B.V. All rights reserved.
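The modelling idea can be sketched as follows: principal components are extracted from training motion fields, and the component weights for a new breath are estimated from the navigator by a least-squares mapping fitted during training. The motion fields, navigator, and two-component truncation below are synthetic placeholders rather than the paper's MRI pipeline.

```python
# Hedged sketch of a PCA respiratory motion model driven by a navigator.
import numpy as np

rng = np.random.default_rng(10)
n_train, n_voxels = 40, 3000                  # training breaths, field size
modes_true = rng.normal(size=(2, n_voxels))   # hidden "true" motion modes
coeff = rng.normal(size=(n_train, 2))         # breathing states
fields = coeff @ modes_true + rng.normal(0, 0.05, (n_train, n_voxels))

# PCA of the training motion fields (via SVD of the centered data).
mean_field = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean_field, full_matrices=False)
pcs = Vt[:2]                                  # first two principal components
scores = (fields - mean_field) @ pcs.T        # training component weights

# Navigator: assume a linear relation to the PCA weights, fitted on the
# training data (a 2-D synthetic navigator stands in for the 2-D image).
nav = scores @ rng.normal(size=(2, 2)) + rng.normal(0, 0.01, (n_train, 2))
W, *_ = np.linalg.lstsq(nav, scores, rcond=None)   # navigator -> weights

new_nav = nav[5] + 0.01                       # a new navigator sample
est_field = mean_field + (new_nav @ W) @ pcs  # estimated motion field
print(est_field.shape)
```

Intra- and inter-cycle variability enters through the spread of the training weights; the residual of the navigator fit can serve as the applicability check the abstract mentions.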
A Technique for Measuring Rotocraft Dynamic Stability in the 40 by 80 Foot Wind Tunnel
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Bohn, J. G.
1977-01-01
An on-line technique is described for the measurement of tilt rotor aircraft dynamic stability in the Ames 40- by 80-Foot Wind Tunnel. The technique is based on advanced system identification methodology and uses the instrumental variables approach. It is particularly applicable to real-time estimation problems with limited amounts of noise-contaminated data. Several simulations are used to evaluate the algorithm. Estimated natural frequencies and damping ratios are compared with simulation values. The algorithm is also applied to wind tunnel data in an off-line mode. The results are used to develop preliminary guidelines for effective use of the algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng Guoyan
2010-04-15
Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching technique, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.
Estimation of Muscle Force Based on Neural Drive in a Hemispheric Stroke Survivor.
Dai, Chenyun; Zheng, Yang; Hu, Xiaogang
2018-01-01
Robotic assistant-based therapy holds great promise to improve the functional recovery of stroke survivors. Numerous neural-machine interface techniques have been used to decode the intended movement to control robotic systems for rehabilitation therapies. In this case report, we tested the feasibility of estimating finger extensor muscle forces of a stroke survivor, based on the descending neural drive decoded through population motoneuron discharge timings. Motoneuron discharge events were obtained by decomposing high-density surface electromyogram (sEMG) signals of the finger extensor muscle. The neural drive was extracted from the normalized frequency of the composite discharge of the motoneuron pool. The neural-drive-based estimation was also compared with the classic myoelectric-based estimation. Our results showed that the neural-drive-based approach can better predict the force output, quantified by lower estimation errors and higher correlations with the muscle force, compared with the myoelectric-based estimation. Our findings suggest that the neural-drive-based approach can potentially be used as a more robust interface signal for robotic therapies during stroke rehabilitation.
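A minimal sketch of the neural-drive idea, under the assumption that sEMG decomposition has already produced binary motor unit spike trains: the composite discharge of the pool is smoothed to obtain a normalized drive signal. The sampling rate, window length, and toy spike statistics are assumptions, not the authors' settings.

```python
# Illustrative neural-drive extraction from decomposed motor unit spike
# trains: sum the pool's spikes and low-pass filter the composite train.
import numpy as np

fs = 2048                                  # sEMG sampling rate (Hz), assumed
n = int(fs * 10.0)                         # 10 s of data
rng = np.random.default_rng(1)

# Binary spike trains for a pool of decomposed motor units (toy data,
# roughly 10 discharges per second per unit).
n_units = 20
spikes = (rng.random((n_units, n)) < (10.0 / fs)).astype(float)

composite = spikes.sum(axis=0)             # composite discharge of the pool

# Smooth with a 400 ms Hann window to get the normalized neural drive.
win = np.hanning(int(0.4 * fs))
win /= win.sum()
drive = np.convolve(composite, win, mode="same")
drive /= drive.max()                       # normalized discharge frequency

# A force estimate would then follow from regressing measured force onto
# `drive`, e.g. a linear gain fitted on calibration trials.
```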
Pattern recognition of satellite cloud imagery for improved weather prediction
NASA Technical Reports Server (NTRS)
Gautier, Catherine; Somerville, Richard C. J.; Volfson, Leonid B.
1986-01-01
The major accomplishment was the successful development of a method for extracting time derivative information from geostationary meteorological satellite imagery. This research is a proof-of-concept study which demonstrates the feasibility of using pattern recognition techniques and a statistical cloud classification method to estimate time rate of change of large-scale meteorological fields from remote sensing data. The cloud classification methodology is based on typical shape function analysis of parameter sets characterizing the cloud fields. The three specific technical objectives, all of which were successfully achieved, are as follows: develop and test a cloud classification technique based on pattern recognition methods, suitable for the analysis of visible and infrared geostationary satellite VISSR imagery; develop and test a methodology for intercomparing successive images using the cloud classification technique, so as to obtain estimates of the time rate of change of meteorological fields; and implement this technique in a testbed system incorporating an interactive graphics terminal to determine the feasibility of extracting time derivative information suitable for comparison with numerical weather prediction products.
Accelerometer-based on-body sensor localization for health and medical monitoring applications
Vahdatpour, Alireza; Amini, Navid; Xu, Wenyao; Sarrafzadeh, Majid
2011-01-01
In this paper, we present a technique to recognize the position of sensors on the human body. Automatic on-body device localization ensures correctness and accuracy of measurements in health and medical monitoring systems. In addition, it provides opportunities to improve the performance and usability of ubiquitous devices. Our technique uses accelerometers to capture motion data to estimate the location of the device on the user's body, using mixed supervised and unsupervised time series analysis methods. We have evaluated our technique with extensive experiments on 25 subjects. On average, our technique achieves 89% accuracy in estimating the location of devices on the body. In order to study the feasibility of classifying left limbs versus right limbs (e.g., left arm vs. right arm), we performed an analysis, on the basis of which no meaningful classification was observed. Personalized ultraviolet monitoring and wireless transmission power control comprise two immediate applications of our on-body device localization approach. Such applications, along with their corresponding feasibility studies, are discussed. PMID:22347840
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
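The following sketch illustrates the Monte Carlo search described above on synthetic data: the characteristic (arterial) resistance is held fixed while compliance and peripheral resistance are sampled at random and scored against a reference aortic pressure. The Windkessel formulation, flow waveform, and parameter ranges are illustrative assumptions.

```python
# Monte Carlo fit of a 3-element Windkessel (toy data). Rc is kept fixed,
# as in the paper; C and Rp are sampled and scored by pressure RMSE.
import numpy as np

rng = np.random.default_rng(2)
fs, T = 500, 0.8                            # Hz, one cardiac cycle
t = np.arange(0, T, 1 / fs)
Q = np.where(t < 0.3, 300 * np.sin(np.pi * t / 0.3), 0.0)  # mL/s ejection

def wk3_pressure(Rc, C, Rp, Q, dt, p0=80.0):
    """Euler integration of the 3-element Windkessel model."""
    p1 = np.empty_like(Q)
    p1[0] = p0
    for k in range(len(Q) - 1):
        p1[k + 1] = p1[k] + dt * (Q[k] - p1[k] / Rp) / C
    return p1 + Rc * Q                      # aortic pressure (mmHg)

Rc_fixed = 0.05                             # mmHg*s/mL, held constant
P_true = wk3_pressure(Rc_fixed, 1.2, 0.9, Q, 1 / fs)

best, best_err = None, np.inf
for _ in range(5000):                       # random search over (C, Rp)
    C, Rp = rng.uniform(0.5, 2.5), rng.uniform(0.5, 1.5)
    P = wk3_pressure(Rc_fixed, C, Rp, Q, 1 / fs)
    err = np.sqrt(np.mean((P - P_true) ** 2))
    if err < best_err:
        best, best_err = (C, Rp), err
print(best, best_err)                       # approaches (1.2, 0.9)
```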
Adaptive torque estimation of robot joint with harmonic drive transmission
NASA Astrophysics Data System (ADS)
Shi, Zhiguo; Li, Yuankai; Liu, Guangjun
2017-11-01
Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.
Mourning dove hunting regulation strategy based on annual harvest statistics and banding data
Otis, D.L.
2006-01-01
Although managers should strive to base game bird harvest management strategies on mechanistic population models, the monitoring programs required to build and continuously update these models may not be in place. Alternatively, if estimates of total harvest and harvest rates are available, then population estimates derived from these harvest data can serve as the basis for making hunting regulation decisions based on the population growth rates derived from these estimates. I present a statistically rigorous approach for regulation decision-making using a hypothesis-testing framework and an assumed framework of 3 hunting regulation alternatives. I illustrate and evaluate the technique with historical data on the mid-continent mallard (Anas platyrhynchos) population. I evaluate the statistical properties of the hypothesis-testing framework using the best available data on mourning doves (Zenaida macroura). I use these results to discuss practical implementation of the technique as an interim harvest strategy for mourning doves until reliable mechanistic population models and associated monitoring programs are developed.
NASA Astrophysics Data System (ADS)
Smoczek, Jaroslaw
2015-10-01
The paper deals with the problem of reducing the residual vibration and limiting the transient oscillations of a flexible and underactuated system with respect to the variation of operating conditions. A comparative study of generalized predictive control (GPC) and a fuzzy scheduling scheme developed based on the P1-TS fuzzy theory, the local pole placement method and interval analysis of closed-loop system polynomial coefficients is addressed to the problem of flexible crane control. Two alternatives of the GPC-based method are proposed that enable the technique to be realized either with or without a sensor of payload deflection. The first control technique is based on the recursive least squares (RLS) method applied to estimate on-line the parameters of a linear parameter varying (LPV) model of the crane dynamic system. The second GPC-based approach is based on payload deflection feedback estimated using a pendulum model with parameters interpolated using the P1-TS fuzzy system. Feasibility and applicability of the developed methods were confirmed through experimental verification performed on a laboratory scaled overhead crane.
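A minimal sketch of recursive least squares with a forgetting factor, of the kind the first technique uses for on-line LPV parameter estimation; the first-order regressor model below is invented for the example.

```python
# Recursive least squares (RLS) with forgetting factor, tracking the
# parameters of a simple first-order model from streaming data.
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One RLS step: returns updated parameters and covariance."""
    k = P @ phi / (lam + phi @ P @ phi)     # gain vector
    theta = theta + k * (y - phi @ theta)   # parameter update
    P = (P - np.outer(k, phi @ P)) / lam    # covariance update
    return theta, P

# Identify y[t] = a*y[t-1] + b*u[t-1] + noise (toy system).
rng = np.random.default_rng(3)
a_true, b_true = 0.9, 0.5
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + 0.01 * rng.normal()

theta, P = np.zeros(2), 1000 * np.eye(2)
for t in range(1, 500):
    phi = np.array([y[t - 1], u[t - 1]])
    theta, P = rls_update(theta, P, phi, y[t])
print(theta)                                # approaches [0.9, 0.5]
```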
Unsupervised, Robust Estimation-based Clustering for Multispectral Images
NASA Technical Reports Server (NTRS)
Netanyahu, Nathan S.
1997-01-01
To prepare for the challenge of handling the archiving and querying of terabyte-sized scientific spatial databases, the NASA Goddard Space Flight Center's Applied Information Sciences Branch (AISB, Code 935) developed a number of characterization algorithms that rely on supervised clustering techniques. The research reported upon here has been aimed at continuing the evolution of some of these supervised techniques, namely the neural network and decision tree-based classifiers, plus extending the approach to incorporate unsupervised clustering algorithms, such as those based on robust estimation (RE) techniques. The algorithms developed under this task should be suited for use by the Intelligent Information Fusion System (IIFS) metadata extraction modules, and as such these algorithms must be fast, robust, and anytime in nature. Finally, so that the planner/scheduler module of the IIFS can oversee the use and execution of these algorithms, all information required by the planner/scheduler must be provided to the IIFS development team to ensure the timely integration of these algorithms into the overall system.
Efficient Stochastic Rendering of Static and Animated Volumes Using Visibility Sweeps.
von Radziewsky, Philipp; Kroes, Thomas; Eisemann, Martin; Eisemann, Elmar
2017-09-01
Stochastically solving the rendering integral (particularly visibility) is the de-facto standard for physically-based light transport, but it is computationally expensive, especially when displaying heterogeneous volumetric data. In this work, we present efficient techniques to speed up the rendering process via a novel visibility-estimation method in concert with an unbiased importance sampling (involving environmental lighting and visibility inside the volume), filtering, and update techniques for both static and animated scenes. Our major contributions include a progressive estimate of partial occlusions based on a fast sweeping-plane algorithm. These occlusions are stored in an octahedral representation, which can be conveniently transformed into a quadtree-based hierarchy suited for a joint importance sampling. Further, we propose sweep-space filtering, which suppresses the occurrence of fireflies, and investigate different update schemes for animated scenes. Our technique is unbiased, requires little precomputation, is highly parallelizable, and is applicable to various volume data sets, dynamic transfer functions, animated volumes and changing environmental lighting.
Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems
Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao
2016-01-01
In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariant relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly low computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangle arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
RESULTS OF COMPUTATIONS MADE FOR DASA-USNRDL FALLOUT SYMPOSIUM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Read, R.; Wagner, L.; Moorehead, E.
1962-11-01
The regression techniques introduced by the Civil Defense Research Project for estimating fallout particle deposition coordinates, their standard ellipses, and isointensity contours have been applied to some of the homework problems assigned for the DASA-USNRDL Fallout Symposium. The results are reported and the estimates are contrasted with estimates based on the assumption that winds are invariant with time. (auth).
Estimation of discontinuous coefficients in parabolic systems: Applications to reservoir simulation
NASA Technical Reports Server (NTRS)
Lamm, P. D.
1984-01-01
Spline based techniques for estimating spatially varying parameters that appear in parabolic distributed systems (typical of those found in reservoir simulation problems) are presented. The problem of determining discontinuous coefficients, estimating both the functional shape and points of discontinuity for such parameters is discussed. Convergence results and a summary of numerical performance of the resulting algorithms are given.
Katz, Josh M; Winter, Carl K; Buttrey, Samuel E; Fadel, James G
2012-03-01
Western and guideline based diets were compared to determine if dietary improvements resulting from following dietary guidelines reduce acrylamide intake. Acrylamide forms in heat treated foods and is a human neurotoxin and animal carcinogen. Acrylamide intake from the Western diet was estimated with probabilistic techniques using teenage (13-19 years) National Health and Nutrition Examination Survey (NHANES) food consumption estimates combined with FDA data on the levels of acrylamide in a large number of foods. Guideline based diets were derived from NHANES data using linear programming techniques to comport to recommendations from the Dietary Guidelines for Americans, 2005. Although the guideline based diets were better balanced and richer in fruits, vegetables, and other dietary components than the Western diets, acrylamide intake (mean±SE) was significantly greater (P<0.001) from consumption of the guideline based diets (0.508±0.003 μg/kg/day) than from consumption of the Western diets (0.441±0.003 μg/kg/day). Guideline based diets contained less acrylamide contributed by French fries and potato chips than Western diets. Overall acrylamide intake, however, was higher in guideline based diets as a result of more frequent breakfast cereal intake. This is believed to be the first example of a risk assessment that combines probabilistic techniques with linear programming, and the results demonstrate that linear programming techniques can be used to model specific diets for the assessment of toxicological and nutritional dietary components. Copyright © 2011 Elsevier Ltd. All rights reserved.
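As a toy illustration of the linear programming step, the sketch below chooses servings that meet invented guideline constraints while minimizing acrylamide intake; the food list, acrylamide levels, and constraint values are hypothetical, not NHANES or FDA figures.

```python
# Linear programming sketch: pick daily servings minimizing acrylamide
# subject to guideline-style constraints. All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

# Decision variables: servings of [cereal, fries, fruit].
acrylamide = np.array([0.15, 0.30, 0.0])    # ug per serving (invented)

# Constraints: at least 2000 kcal and at least 3 fruit servings.
# linprog uses A_ub @ x <= b_ub, so >= constraints are sign-flipped.
kcal = np.array([120.0, 365.0, 60.0])       # kcal per serving (invented)
A_ub = [-kcal, [0.0, 0.0, -1.0]]
b_ub = [-2000.0, -3.0]

res = linprog(c=acrylamide, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 10)] * 3, method="highs")
print(res.x, res.fun)    # servings per day and resulting acrylamide intake
```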
NASA Astrophysics Data System (ADS)
Prakash, Satya; Mahesh, C.; Gairola, Rakesh M.
2011-12-01
Large-scale precipitation estimation is very important for climate science because precipitation is a major component of the earth's water and energy cycles. In the present study, the GOES precipitation index technique has been applied to three-hourly Kalpana-1 satellite infrared (IR) images, i.e., at 0000, 0300, 0600, …, 2100 UTC, for rainfall estimation in preparation for INSAT-3D. After the temperatures of all the pixels in a grid are known, they are distributed to generate a three-hourly 24-class histogram of brightness temperatures of IR (10.5-12.5 μm) images for a 1.0° × 1.0° latitude/longitude box. Daily, monthly, and seasonal rainfall totals have been estimated using these three-hourly rain estimates for the entire south-west monsoon period of 2009. To investigate the potential of these rainfall estimates, the validation of monthly and seasonal rainfall estimates has been carried out using the Global Precipitation Climatology Project and Global Precipitation Climatology Centre data. The validation results show that the present technique works very well for large-scale precipitation estimation, qualitatively as well as quantitatively. The results also suggest that the simple IR-based estimation technique can be used to estimate rainfall for tropical areas at a larger temporal scale for climatological applications.
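A minimal sketch of a GPI-style estimate for one grid box, using the classic 235 K threshold and 3 mm/h conditional rain rate; the brightness temperatures here are synthetic.

```python
# GOES precipitation index (GPI)-style estimate for a single 1 deg box:
# rainfall is proportional to the fraction of IR pixels colder than 235 K.
import numpy as np

rng = np.random.default_rng(4)
tb = rng.uniform(190, 300, size=(40, 40))   # IR brightness temps (K), toy

frac_cold = np.mean(tb < 235.0)             # fractional cold-cloud coverage
rain_rate = 3.0 * frac_cold                 # mm/h over the box
rain_3h = rain_rate * 3.0                   # accumulation for a 3 h slot
print(rain_3h)
```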
NASA Astrophysics Data System (ADS)
Brown, M. G. L.; He, T.; Liang, S.
2016-12-01
Satellite-derived estimates of incident photosynthetically active radiation (PAR) can be used to monitor global change, are required by most terrestrial ecosystem models, and can be used to estimate primary production according to the theory of light use efficiency. Compared with parametric approaches, non-parametric techniques that include an artificial neural network (ANN), support vector machine regression (SVM), an artificial bee colony (ABC), and a look-up table (LUT) do not require many ancillary data as inputs for the estimation of PAR from satellite data. In this study, a selection of machine learning methods to estimate PAR from MODIS top of atmosphere (TOA) radiances are compared to a LUT approach to determine which techniques might best handle the nonlinear relationship between TOA radiance and incident PAR. Evaluation of these methods (ANN, SVM, and LUT) is performed with ground measurements at seven SURFRAD sites. Due to the design of the ANN, it can handle the nonlinear relationship between TOA radiance and PAR better than linearly interpolating between the values in the LUT; however, training the ANN has to be carried out on an angular-bin basis, which results in a LUT of ANNs. The SVM model may be better for incorporating multiple viewing angles than the ANN; however, both techniques require a large amount of training data, which may introduce a regional bias based on where the most training and validation data are available. Based on the literature, the ABC is a promising alternative to an ANN, SVM regression and a LUT, but further development for this application is required before concrete conclusions can be drawn. For now, the LUT method outperforms the machine-learning techniques, but future work should be directed at developing and testing the ABC method. A simple, robust method to estimate direct and diffuse incident PAR, with minimal inputs and a priori knowledge, would be very useful for monitoring global change of primary production, particularly of pastures and rangeland, which have implications for livestock and food security. Future work will delve deeper into the utility of satellite-derived PAR estimation for monitoring primary production in pasture and rangelands.
NASA Astrophysics Data System (ADS)
Yamana, Teresa K.; Eltahir, Elfatih A. B.
2011-02-01
This paper describes the use of satellite-based estimates of rainfall to force the Hydrology, Entomology and Malaria Transmission Simulator (HYDREMATS), a hydrology-based mechanistic model of malaria transmission. We first examined the temporal resolution of rainfall input required by HYDREMATS. Simulations conducted over Banizoumbou village in Niger showed that for reasonably accurate simulation of mosquito populations, the model requires rainfall data with at least 1 h resolution. We then investigated whether HYDREMATS could be effectively forced by satellite-based estimates of rainfall instead of ground-based observations. The Climate Prediction Center morphing technique (CMORPH) precipitation estimates distributed by the National Oceanic and Atmospheric Administration are available at a 30 min temporal resolution and 8 km spatial resolution. We compared mosquito populations simulated by HYDREMATS when the model is forced by adjusted CMORPH estimates and by ground observations. The results demonstrate that adjusted rainfall estimates from satellites can be used with a mechanistic model to accurately simulate the dynamics of mosquito populations.
Fekkes, Stein; Swillens, Abigail E S; Hansen, Hendrik H G; Saris, Anne E C M; Nillesen, Maartje M; Iannaccone, Francesco; Segers, Patrick; de Korte, Chris L
2016-10-01
Three-dimensional (3-D) strain estimation might improve the detection and localization of high strain regions in the carotid artery (CA) for identification of vulnerable plaques. This paper compares 2-D versus 3-D displacement estimation in terms of radial and circumferential strain using simulated ultrasound (US) images of a patient-specific 3-D atherosclerotic CA model at the bifurcation embedded in surrounding tissue generated with ABAQUS software. Global longitudinal motion was superimposed to the model based on the literature data. A Philips L11-3 linear array transducer was simulated, which transmitted plane waves at three alternating angles at a pulse repetition rate of 10 kHz. Interframe (IF) radio-frequency US data were simulated in Field II for 191 equally spaced longitudinal positions of the internal CA. Accumulated radial and circumferential displacements were estimated using tracking of the IF displacements estimated by a two-step normalized cross-correlation method and displacement compounding. Least-squares strain estimation was performed to determine accumulated radial and circumferential strain. The performance of the 2-D and 3-D methods was compared by calculating the root-mean-squared error of the estimated strains with respect to the reference strains obtained from the model. More accurate strain images were obtained using the 3-D displacement estimation for the entire cardiac cycle. The 3-D technique clearly outperformed the 2-D technique in phases with high IF longitudinal motion. In fact, the large IF longitudinal motion rendered it impossible to accurately track the tissue and cumulate strains over the entire cardiac cycle with the 2-D technique.
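The sketch below shows a 1-D analogue of normalized cross-correlation displacement estimation with sub-sample (parabolic) peak refinement; the RF data are synthetic, and the 2-D/3-D tracking, compounding, and strain steps of the paper are omitted.

```python
# 1-D normalized cross-correlation (NCC) displacement estimate between a
# pre- and post-deformation RF line, with parabolic sub-sample refinement.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return np.mean(a * b)

def estimate_shift(pre, post, kernel, search):
    """Displacement (in samples) of a kernel-long window at array centre."""
    c = len(pre) // 2
    ref = pre[c:c + kernel]
    scores = np.array([ncc(ref, post[c + s:c + s + kernel])
                       for s in range(-search, search + 1)])
    i = scores.argmax()
    if 0 < i < len(scores) - 1:             # parabolic peak interpolation
        y0, y1, y2 = scores[i - 1:i + 2]
        i = i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return i - search

rng = np.random.default_rng(5)
pre = rng.normal(size=4000)
post = np.roll(pre, 7) + 0.05 * rng.normal(size=4000)
print(estimate_shift(pre, post, kernel=128, search=20))   # ~7 samples
```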
Estimating Starch Content in Roots of Deciduous Trees--A Visual Technique
Philip M. Wargo; Philip M. Wargo
1975-01-01
A visual technique for determining starch content in roots of forest trees, based on iodine-staining of starch granules, was compared with a chemical method. Although the chemical method was more precise, roots could be sorted with the visual method into groups that are probably biologically important. The visual technique is simple and can be adapted for use in the...
Fast and robust estimation of ophthalmic wavefront aberrations
NASA Astrophysics Data System (ADS)
Dillon, Keith
2016-12-01
Rapidly rising levels of myopia, particularly in the developing world, have led to an increased need for inexpensive and automated approaches to optometry. A simple and robust technique is provided for estimating major ophthalmic aberrations using a gradient-based wavefront sensor. The approach is based on the use of numerical calculations to produce diverse combinations of phase components, followed by Fourier transforms to calculate the coefficients. The approach does not utilize phase unwrapping nor iterative solution of inverse problems. This makes the method very fast and tolerant to image artifacts, which do not need to be detected and masked or interpolated as is needed in other techniques. These features make it a promising algorithm on which to base low-cost devices for applications that may have limited access to expert maintenance and operation.
Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation
NASA Astrophysics Data System (ADS)
Sekhar, S. Chandra; Sreenivas, TV
2004-12-01
We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
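A simplified, fixed-window sketch of the zero-crossing idea on a synthetic linear chirp: local IF estimates from half-period intervals are fitted with a low-order polynomial. The adaptive window selection that is the paper's contribution is not reproduced here.

```python
# Zero-crossing-based IF estimation: successive zero crossings of a
# constant-amplitude sinusoid are half periods, so local IF = 1/(2*dt).
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
f_inst = 100 + 50 * t                       # linear chirp: IF = 100 + 50 t
phase = 2 * np.pi * np.cumsum(f_inst) / fs
x = np.cos(phase)

# Linearly interpolated zero-crossing instants.
idx = np.where(np.signbit(x[:-1]) != np.signbit(x[1:]))[0]
tz = t[idx] + x[idx] / (x[idx] - x[idx + 1]) / fs

mid = 0.5 * (tz[:-1] + tz[1:])              # mid-point of each half period
f_zc = 0.5 / np.diff(tz)                    # local IF estimates

coef = np.polyfit(mid, f_zc, deg=1)         # low-order polynomial IF model
print(coef)                                 # ~[50, 100]
```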
Long term landslide monitoring with Ground Based SAR
NASA Astrophysics Data System (ADS)
Monserrat, Oriol; Crosetto, Michele; Luzi, Guido; Gili, Josep; Moya, Jose; Corominas, Jordi
2014-05-01
In the last decade, Ground-Based SAR (GBSAR) has proven to be a reliable microwave Remote Sensing technique in several application fields, especially for monitoring unstable slopes. GBSAR can provide displacement measurements over areas of a few square kilometres with a very high spatial and temporal resolution. This work focuses on the use of the GBSAR technique for long-term landslide monitoring based on a particular data acquisition configuration called discontinuous GBSAR (D-GBSAR). In the most commonly used GBSAR configuration, the radar is left installed in situ, acquiring data periodically, e.g. every few minutes. Deformations are estimated by processing sets of GBSAR images acquired during several weeks or months, without moving the system. By contrast, in D-GBSAR the radar is installed and dismounted at each measurement campaign, revisiting a given site periodically. This configuration is useful for monitoring slow deformation phenomena. In this work, two alternative ways of exploiting the D-GBSAR technique are presented: the DInSAR technique and an amplitude-based technique. The former exploits the phase component of the acquired SAR images and provides millimetric precision on the deformation estimates. However, it presents several limitations, such as the reduction of measurable points as the period of observation increases, the ambiguous nature of the phase measurements, and the influence of the atmospheric phase component, which can make it inapplicable in some cases, especially when working in natural environments. The second approach, based on the amplitude component of GBSAR images combined with an image matching technique, allows the estimation of displacements over specific targets, avoiding two of the limitations noted above (phase unwrapping and the atmospheric contribution) at the cost of reduced deformation measurement precision. Two successful examples of D-GBSAR landslide monitoring are analysed and discussed. The first is based on DInSAR and concerns an urban landslide located in Barberà de la Conca (Catalonia, Spain); this village has experienced deformations since 2011 that have caused cracks in the church and several buildings, and the results of one and a half years of monitoring are shown. The second example is based on the amplitude-based approach and concerns the active landslide of Vallcebre (Eastern Pyrenees, Spain); for this site, eight campaigns were performed over a period of 19 months, during which displacements of up to 80 cm were measured.
An adaptive ARX model to estimate the RUL of aluminum plates based on its crack growth
NASA Astrophysics Data System (ADS)
Barraza-Barraza, Diana; Tercero-Gómez, Víctor G.; Beruvides, Mario G.; Limón-Robles, Jorge
2017-01-01
A wide variety of Condition-Based Maintenance (CBM) techniques deal with the problem of predicting the time for an asset fault. Most statistical approaches rely on historical failure data that might not be available in several practical situations. To address this issue, practitioners might require the use of self-starting approaches that consider only the available knowledge about the current degradation process and the asset operating context to update the prognostic model. Some authors use autoregressive (AR) models for this purpose, which are adequate when the asset operating context is constant; however, if it is variable, the accuracy of the models can be affected. In this paper, three autoregressive models with exogenous variables (ARX) were constructed, and their capability to estimate the remaining useful life (RUL) of a process was evaluated following the case of the aluminum crack growth problem. An existing stochastic model of aluminum crack growth was implemented and used to assess the RUL estimation performance of the proposed ARX models through extensive Monte Carlo simulations. Point and interval estimations were made based only on individual history, behavior, operating conditions and failure thresholds. Both analytic and bootstrapping techniques were used in the estimation process. Finally, by including recursive parameter estimation and a forgetting factor, the ARX methodology adapts to changing operating conditions and maintains its focus on the current degradation level of an asset.
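The sketch below fits a first-order ARX model to simulated crack-growth data by least squares and extrapolates it to a failure threshold to obtain a point RUL estimate; the dynamics, noise level, and threshold are invented for the example.

```python
# ARX sketch: fit y[t] = a1*y[t-1] + b1*u[t-1] + e[t] by least squares,
# then propagate the model to a failure threshold for a point RUL estimate.
import numpy as np

rng = np.random.default_rng(6)
n = 300
u = 1.0 + 0.1 * rng.normal(size=n)          # exogenous load indicator (toy)
y = np.zeros(n)                             # crack length (toy dynamics)
y[0] = 1.0
for t in range(1, n):
    y[t] = 1.01 * y[t - 1] + 0.02 * u[t - 1] + 0.005 * rng.normal()

Phi = np.c_[y[:-1], u[:-1]]                 # regressor matrix
a1, b1 = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]

# RUL: steps until the predicted crack length exceeds the threshold,
# holding the load at its current level.
threshold, yk, rul = 100.0, y[-1], 0
while yk < threshold and rul < 10000:
    yk = a1 * yk + b1 * u[-1]
    rul += 1
print(a1, b1, rul)
```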
A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections.
Zhang, You; Yin, Fang-Fang; Segars, W Paul; Ren, Lei
2013-12-01
To develop a technique to estimate onboard 4D-CBCT using prior information and limited-angle projections for potential 4D target verification of lung radiotherapy. Each phase of onboard 4D-CBCT is considered as a deformation from one selected phase (prior volume) of the planning 4D-CT. The deformation field maps (DFMs) are solved using a motion modeling and free-form deformation (MM-FD) technique. In the MM-FD technique, the DFMs are estimated using a motion model which is extracted from planning 4D-CT based on principal component analysis (PCA). The motion model parameters are optimized by matching the digitally reconstructed radiographs of the deformed volumes to the limited-angle onboard projections (data fidelity constraint). Afterward, the estimated DFMs are fine-tuned using a FD model based on data fidelity constraint and deformation energy minimization. The 4D digital extended-cardiac-torso phantom was used to evaluate the MM-FD technique. A lung patient with a 30 mm diameter lesion was simulated with various anatomical and respirational changes from planning 4D-CT to onboard volume, including changes of respiration amplitude, lesion size and lesion average-position, and phase shift between lesion and body respiratory cycle. The lesions were contoured in both the estimated and "ground-truth" onboard 4D-CBCT for comparison. 3D volume percentage-difference (VPD) and center-of-mass shift (COMS) were calculated to evaluate the estimation accuracy of three techniques: MM-FD, MM-only, and FD-only. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. For all simulated patient and projection acquisition scenarios, the mean VPD (±S.D.)∕COMS (±S.D.) between lesions in prior images and "ground-truth" onboard images were 136.11% (±42.76%)∕15.5 mm (±3.9 mm). Using orthogonal-view 15°-each scan angle, the mean VPD∕COMS between the lesion in estimated and "ground-truth" onboard images for MM-only, FD-only, and MM-FD techniques were 60.10% (±27.17%)∕4.9 mm (±3.0 mm), 96.07% (±31.48%)∕12.1 mm (±3.9 mm) and 11.45% (±9.37%)∕1.3 mm (±1.3 mm), respectively. For orthogonal-view 30°-each scan angle, the corresponding results were 59.16% (±26.66%)∕4.9 mm (±3.0 mm), 75.98% (±27.21%)∕9.9 mm (±4.0 mm), and 5.22% (±2.12%)∕0.5 mm (±0.4 mm). For single-view scan angles of 3°, 30°, and 60°, the results for MM-FD technique were 32.77% (±17.87%)∕3.2 mm (±2.2 mm), 24.57% (±18.18%)∕2.9 mm (±2.0 mm), and 10.48% (±9.50%)∕1.1 mm (±1.3 mm), respectively. For projection angular-sampling-intervals of 0.6°, 1.2°, and 2.5° with the orthogonal-view 30°-each scan angle, the MM-FD technique generated similar VPD (maximum deviation 2.91%) and COMS (maximum deviation 0.6 mm), while sparser sampling yielded larger VPD∕COMS. With equal number of projections, the estimation results using scattered 360° scan angle were slightly better than those using orthogonal-view 30°-each scan angle. The estimation accuracy of MM-FD technique declined as noise level increased. The MM-FD technique substantially improves the estimation accuracy for onboard 4D-CBCT using prior planning 4D-CT and limited-angle projections, compared to the MM-only and FD-only techniques. It can potentially be used for the inter/intrafractional 4D-localization verification.
Estimating survival of radio-tagged birds
Bunck, C.M.; Pollock, K.H.; Lebreton, J.-D.; North, P.M.
1993-01-01
Parametric and nonparametric methods for estimating survival of radio-tagged birds are described. The general assumptions of these methods are reviewed. An estimate based on the assumption of constant survival throughout the period is emphasized in the overview of parametric methods. Two nonparametric methods, the Kaplan-Meier estimate of the survival function and the log rank test, are explained in detail. The link between these nonparametric methods and traditional capture-recapture models is discussed along with considerations in designing studies that use telemetry techniques to estimate survival.
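A minimal Kaplan-Meier sketch for telemetry data, with invented death and censoring times (censoring standing in for signal loss or study end):

```python
# Kaplan-Meier survival estimate: `times` are days until death or
# censoring for each radio-tagged bird, `died` flags observed deaths.
import numpy as np

times = np.array([5, 12, 12, 20, 23, 23, 23, 31, 40, 40])
died  = np.array([1,  1,  0,  1,  0,  1,  1,  0,  1,  0])

S = 1.0
for t in np.unique(times[died == 1]):       # each distinct death time
    at_risk = np.sum(times >= t)            # birds still being tracked
    deaths = np.sum((times == t) & (died == 1))
    S *= 1 - deaths / at_risk               # product-limit update
    print(f"day {t:3d}: S(t) = {S:.3f}")
```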
Sen, Novonil; Kundu, Tribikram
2018-07-01
Estimating the location of an acoustic source in a structure is an important step towards passive structural health monitoring. Techniques for localizing an acoustic source in isotropic structures are well developed in the literature. Development of similar techniques for anisotropic structures, however, has gained attention only in recent years and has scope for further improvement. Most of the existing techniques for anisotropic structures either assume a straight line wave propagation path between the source and an ultrasonic sensor or require the material properties to be known. This study considers different shapes of the wave front generated during an acoustic event and develops a methodology to localize the acoustic source in an anisotropic plate from those wave front shapes. An elliptical wave front shape-based technique was developed first, followed by the development of a parametric curve-based technique for non-elliptical wave front shapes. The source coordinates are obtained by minimizing an objective function. The proposed methodology does not assume a straight line wave propagation path and can predict the source location without any knowledge of the elastic properties of the material. A numerical study presented here illustrates how the proposed methodology can accurately estimate the source coordinates. Copyright © 2018 Elsevier B.V. All rights reserved.
A New Method for Estimating Bacterial Abundances in Natural Samples using Sublimation
NASA Technical Reports Server (NTRS)
Glavin, Daniel P.; Cleaves, H. James; Schubert, Michael; Aubrey, Andrew; Bada, Jeffrey L.
2004-01-01
We have developed a new method based on the sublimation of adenine from Escherichia coli to estimate bacterial cell counts in natural samples. To demonstrate this technique, several types of natural samples including beach sand, seawater, deep-sea sediment, and two soil samples from the Atacama Desert were heated to a temperature of 500 C for several seconds under reduced pressure. The sublimate was collected on a cold finger and the amount of adenine released from the samples then determined by high performance liquid chromatography (HPLC) with UV absorbance detection. Based on the total amount of adenine recovered from DNA and RNA in these samples, we estimated bacterial cell counts ranging from approximately 10^5 to 10^9 E. coli cell equivalents per gram. For most of these samples, the sublimation based cell counts were in agreement with total bacterial counts obtained by traditional DAPI staining. The simplicity and robustness of the sublimation technique compared to the DAPI staining method makes this approach particularly attractive for use by spacecraft instrumentation. NASA is currently planning to send a lander to Mars in 2009 in order to assess whether or not organic compounds, especially those that might be associated with life, are present in Martian surface samples. Based on our analyses of the Atacama Desert soil samples, several million bacterial cells per gram of Martian soil should be detectable using this sublimation technique.
Unsupervised Feature Selection Based on the Morisita Index for Hyperspectral Images
NASA Astrophysics Data System (ADS)
Golay, Jean; Kanevski, Mikhail
2017-04-01
Hyperspectral sensors are capable of acquiring images with hundreds of narrow and contiguous spectral bands. Compared with traditional multispectral imagery, the use of hyperspectral images allows better performance in discriminating between land-cover classes, but it also results in large redundancy and high computational data processing. To alleviate such issues, unsupervised feature selection techniques for redundancy minimization can be implemented. Their goal is to select the smallest subset of features (or bands) in such a way that all the information content of a data set is preserved as much as possible. The present research deals with the application to hyperspectral images of a recently introduced technique of unsupervised feature selection: the Morisita-Based filter for Redundancy Minimization (MBRM). MBRM is based on the (multipoint) Morisita index of clustering and on the Morisita estimator of Intrinsic Dimension (ID). The fundamental idea of the technique is to retain only the bands which contribute to increasing the ID of an image. In this way, redundant bands are disregarded, since they have no impact on the ID. Besides, MBRM has several advantages over benchmark techniques: in addition to its ability to deal with large data sets, it can capture highly-nonlinear dependences and its implementation is straightforward in any programming environment. Experimental results on freely available hyperspectral images show the good effectiveness of MBRM in remote sensing data processing. Comparisons with benchmark techniques are carried out and random forests are used to assess the performance of MBRM in reducing the data dimensionality without loss of relevant information.
Whole-Body Human Inverse Dynamics with Distributed Micro-Accelerometers, Gyros and Force Sensing †
Latella, Claudia; Kuppuswamy, Naveen; Romano, Francesco; Traversaro, Silvio; Nori, Francesco
2016-01-01
Human motion tracking is a powerful tool used in a large range of applications that require human movement analysis. Although it is a well-established technique, its main limitation is the lack of estimation of real-time kinetics information such as forces and torques during the motion capture. In this paper, we present a novel approach for a human soft wearable force tracking for the simultaneous estimation of whole-body forces along with the motion. The early stage of our framework encompasses traditional passive marker based methods, inertial and contact force sensor modalities and harnesses a probabilistic computational technique for estimating dynamic quantities, originally proposed in the domain of humanoid robot control. We present experimental analysis on subjects performing a two degrees-of-freedom bowing task, and we estimate the motion and kinetics quantities. The results demonstrate the validity of the proposed method. We discuss the possible use of this technique in the design of a novel soft wearable force tracking device and its potential applications. PMID:27213394
Peeters, Frank; Atamanchuk, Dariia; Tengberg, Anders; Encinas-Fernández, Jorge; Hofmann, Hilmar
2016-01-01
Lake metabolism is a key factor for the understanding of turnover of energy and of organic and inorganic matter in lake ecosystems. Long-term time series on metabolic rates are commonly estimated from diel changes in dissolved oxygen. Here we present long-term data on metabolic rates based on diel changes in total dissolved inorganic carbon (DIC) utilizing an open-water diel CO2-technique. Metabolic rates estimated with this technique and the traditional diel O2-technique agree well in alkaline Lake Illmensee (pH of ~8.5), although the diel changes in molar CO2 concentrations are much smaller than those of the molar O2 concentrations. The open-water diel CO2- and diel O2-techniques provide independent measures of lake metabolic rates that differ in their sensitivity to transport processes. Hence, the combination of both techniques can help to constrain uncertainties arising from assumptions on vertical fluxes due to gas exchange and turbulent diffusion. This is particularly important for estimates of lake respiration rates because these are much more sensitive to assumptions on gradients in vertical fluxes of O2 or DIC than estimates of lake gross primary production. Our data suggest that it can be advantageous to estimate respiration rates assuming negligible gradients in vertical fluxes rather than including gas exchange with the atmosphere but neglecting vertical mixing in the water column. During two months in summer the average lake net production was close to zero suggesting at most slightly autotrophic conditions. However, the lake emitted O2 and CO2 during the entire time period suggesting that O2 and CO2 emissions from lakes can be decoupled from the metabolism in the near surface layer.
Sim, K S; Lim, M S; Yeap, Z X
2016-07-01
A new technique to quantify the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images is proposed. This technique is known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed. The signal spectrum based on the autocorrelation function of the image is formed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. The ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model is able to achieve higher accuracy in SNR estimation. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
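For context, a sketch of the kind of simple autocorrelation baseline the ACLDR model is compared against, under the assumption that white noise inflates only the zero-lag autocorrelation: the noise-free signal power is approximated by first-order linear extrapolation of neighbouring lags back to lag zero. The 1-D formulation and test signal are illustrative, not the paper's 2-D implementation.

```python
# Autocorrelation-based SNR estimate (first-order linear extrapolation
# baseline): white noise adds power only at lag 0, so extrapolating R(k)
# from lags 1 and 2 back to lag 0 approximates the noise-free power.
import numpy as np

rng = np.random.default_rng(7)
n = 1 << 16
signal = np.convolve(rng.normal(size=n), np.ones(50) / 50, mode="same")
noisy = signal + 0.05 * rng.normal(size=n)

def acorr(x, k):
    x = x - x.mean()
    return np.mean(x[:len(x) - k] * x[k:]) if k else np.mean(x * x)

r0, r1, r2 = acorr(noisy, 0), acorr(noisy, 1), acorr(noisy, 2)
p_signal = r1 + (r1 - r2)                  # linear extrapolation to lag 0
p_noise = r0 - p_signal
print(10 * np.log10(p_signal / p_noise))   # SNR estimate in dB (~9 here)
```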
Leurs, G; O'Connell, C P; Andreotti, S; Rutzen, M; Vonk Noordegraaf, H
2015-06-01
This study employed a non-lethal measurement tool, which combined an existing photo-identification technique with a surface, parallel laser photogrammetry technique, to accurately estimate the size of free-ranging white sharks Carcharodon carcharias. Findings confirmed the hypothesis that surface laser photogrammetry is more accurate than crew-based estimations that utilized a shark cage of known size as a reference tool. Furthermore, field implementation also revealed that the photographer's angle of reference and the shark's body curvature could greatly influence technique accuracy, exposing two limitations. The findings showed minor inconsistencies with previous studies that examined pre-caudal to total length ratios of dead specimens. This study suggests that surface laser photogrammetry can successfully increase length estimation accuracy and illustrates the potential utility of this technique for growth and stock assessments on free-ranging marine organisms, which will lead to an improvement of the adaptive management of the species. © 2015 The Fisheries Society of the British Isles.
A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-01-01
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively. PMID:27509495
Simultaneous multiple non-crossing quantile regression estimation using kernel constraints
Liu, Yufeng; Wu, Yichao
2011-01-01
Quantile regression (QR) is a very useful statistical tool for learning the relationship between the response variable and covariates. For many applications, one often needs to estimate multiple conditional quantile functions of the response variable given covariates. Although one can estimate multiple quantiles separately, it is of great interest to estimate them simultaneously. One advantage of simultaneous estimation is that multiple quantiles can share strength among them to gain better estimation accuracy than individually estimated quantile functions. Another important advantage of joint estimation is the feasibility of incorporating simultaneous non-crossing constraints of QR functions. In this paper, we propose a new kernel-based multiple QR estimation technique, namely simultaneous non-crossing quantile regression (SNQR). We use kernel representations for QR functions and apply constraints on the kernel coefficients to avoid crossing. Both unregularised and regularised SNQR techniques are considered. Asymptotic properties such as asymptotic normality of linear SNQR and oracle properties of the sparse linear SNQR are developed. Our numerical results demonstrate the competitive performance of our SNQR over the original individual QR estimation. PMID:22190842
Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.
Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo
2015-12-01
In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet during actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted to the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; on the contrary, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate human's neuromuscular and visual responses in cases where the classic method fails.
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier-based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well-established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear time-varying differential system models.
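The key trick of modulating-function methods is that multiplying the model by a function that vanishes at both ends of the observation window and integrating by parts removes all output derivatives and initial conditions. A minimal sketch for a scalar first-order model y' + a·y = b·u (not the paper's multivariable aircraft formulation; signal and parameters hypothetical):

```python
import numpy as np

# Identify a, b in y'(t) + a*y(t) = b*u(t) from sampled I/O data, without
# differentiating y: for phi_n with phi_n(0) = phi_n(T) = 0, integration by
# parts gives  -∫phi_n'*y + a*∫phi_n*y = b*∫phi_n*u  (no y' needed).
T, N = 10.0, 2001
t = np.linspace(0.0, T, N)
a_true, b_true = 0.8, 2.0
u = np.sin(1.3 * t) + 0.5 * np.cos(2.9 * t)

y = np.zeros(N)                             # simulate (forward Euler, for demo)
dt = t[1] - t[0]
for k in range(N - 1):
    y[k + 1] = y[k] + dt * (-a_true * y[k] + b_true * u[k])

rows, rhs = [], []
for n in range(1, 6):                       # a few Fourier modulating functions
    phi = np.sin(n * np.pi * t / T)         # vanishes at t = 0 and t = T
    dphi = (n * np.pi / T) * np.cos(n * np.pi * t / T)
    rows.append([np.trapz(phi * y, t), -np.trapz(phi * u, t)])
    rhs.append(np.trapz(dphi * y, t))
a_hat, b_hat = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
print(a_hat, b_hat)                         # should approach 0.8, 2.0
```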
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-09-13
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
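The paper's modified ML prefilter is not reproduced here, but the generalised cross-correlation pipeline it plugs into can be sketched with a simple PHAT-style weighting, where a small constant plays the role of a crude regulariser. All signals and the sampling rate are hypothetical.

```python
import numpy as np

def gcc_delay(x1, x2, fs, weighting="phat", eps=1e-8):
    """Generalised cross-correlation time-delay estimate between two sensors.
    weighting='phat' whitens the cross-spectrum; the paper's modified ML
    prefilter would replace this weighting with a coherence-based one."""
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    if weighting == "phat":
        cross /= np.abs(cross) + eps        # eps acts as a small regulariser
    cc = np.fft.irfft(cross, n)
    cc = np.concatenate((cc[-(len(x2) - 1):], cc[:len(x1)]))   # center lags
    lag = np.argmax(np.abs(cc)) - (len(x2) - 1)
    return lag / fs

# Two noisy copies of a leak-like broadband signal, offset by 25 samples.
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
x1 = s + 0.1 * rng.standard_normal(4096)
x2 = np.roll(s, 25) + 0.1 * rng.standard_normal(4096)
print(gcc_delay(x1, x2, fs=1000.0))         # ≈ -0.025 s
```

Given the delay estimate, the leak position follows from the sensor spacing and the (pipe-dependent) propagation speed, which is what the paper's multi-pipe formula generalises.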
A quantitative investigation of the fracture pump-in/flowback test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plahn, S.V.; Nolte, K.G.; Miska, S.
1995-12-31
Fracture closure pressure is an important parameter for fracture treatment design and evaluation. The pump-in/flowback (PIFB) test is frequently used to estimate its magnitude. The test is attractive because bottomhole pressures during flowback develop a distinct and repeatable signature. This is in contrast to the pump-in/shut-in test where strong indications of fracture closure are rarely seen. Various techniques exist for extracting closure pressure from the flowback pressure response. Unfortunately, these procedures give different estimates for closure pressure and their theoretical bases are not well established. We present results that place the PIFB test on a more solid foundation. A numerical model is used to simulate the PIFB test and glean physical mechanisms contributing to the response. Based on our simulation results, we propose an interpretation procedure which gives better estimates for closure pressure than existing techniques.
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
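The core machinery here is Viterbi decoding over a trellis whose states are candidate interpolation functions. A generic sketch of that decoding step (transition and emission scores are hypothetical placeholders, not the paper's learned model):

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely state sequence. log_emit: (T, S) per-position scores,
    log_trans: (S, S) transition log-probs, log_init: (S,) initial log-probs."""
    T, S = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (from, to)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                    # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Hypothetical example: 3 directional interpolators over 6 missing pixels.
rng = np.random.default_rng(1)
log_emit = np.log(rng.dirichlet(np.ones(3), size=6))
log_trans = np.log(np.array([[0.8, 0.1, 0.1],
                             [0.1, 0.8, 0.1],
                             [0.1, 0.1, 0.8]]))      # favour smooth state runs
print(viterbi(log_emit, log_trans, np.log(np.ones(3) / 3)))
```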
Predicting the long tail of book sales: Unearthing the power-law exponent
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2010-06-01
The concept of the long tail has recently been used to explain the phenomenon in e-commerce where the total volume of sales of the items in the tail is comparable to that of the most popular items. In the case of online book sales, the proportion of tail sales has been estimated using regression techniques on the assumption that the data obeys a power-law distribution. Here we propose a different technique for estimation based on a generative model of book sales that results in an asymptotic power-law distribution of sales, but which does not suffer from the problems related to power-law regression techniques. We show that the proportion of tail sales predicted is very sensitive to the estimated power-law exponent. In particular, if we assume that the power-law exponent of the cumulative distribution is closer to 1.1 rather than to 1.2 (estimates published in 2003, calculated using regression by two groups of researchers), then our computations suggest that the tail sales of Amazon.com, rather than being 40% as estimated by Brynjolfsson, Hu and Smith in 2003, are actually closer to 20%, the proportion estimated by its CEO.
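As a hedged numeric illustration of this sensitivity (not the authors' generative model), the sketch below links the cumulative power-law exponent alpha to a Zipf rank-size law s(r) ∝ r^(-1/alpha) and computes the tail share beyond a head-rank cutoff. The catalogue size and cutoff rank are illustrative stand-ins, so the absolute percentages will not match the paper's figures; only the direction and size of the sensitivity is the point.

```python
import numpy as np

def tail_share(alpha, n_titles=1_000_000, head_rank=40_000):
    """Share of total sales from ranks beyond head_rank under a Zipf rank-size
    law s(r) ∝ r**(-1/alpha), alpha being the cumulative power-law exponent.
    n_titles and head_rank are hypothetical illustration values."""
    r = np.arange(1, n_titles + 1, dtype=float)
    s = r ** (-1.0 / alpha)
    return s[head_rank:].sum() / s.sum()

for alpha in (1.1, 1.2):
    print(f"alpha = {alpha}: tail share = {tail_share(alpha):.2f}")
    # a 0.1 change in the exponent shifts the tail share substantially
```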
Winter bird population studies and project prairie birds for surveying grassland birds
Twedt, D.J.; Hamel, P.B.; Woodrey, M.S.
2008-01-01
We compared 2 survey methods for assessing winter bird communities in temperate grasslands: Winter Bird Population Study surveys are area-searches that have long been used in a variety of habitats, whereas Project Prairie Bird surveys employ active-flushing techniques on strip-transects and are intended for use in grasslands. We used both methods to survey birds on 14 herbaceous reforested sites and 9 coastal pine savannas during winter and compared resultant estimates of species richness and relative abundance. These techniques did not yield similar estimates of avian populations. We found Winter Bird Population Studies consistently produced higher estimates of species richness, whereas Project Prairie Birds produced higher estimates of avian abundance for some species. When it is important to identify all species within the winter bird community, Winter Bird Population Studies should be the survey method of choice. If estimates of the abundance of relatively secretive grassland bird species are desired, the use of Project Prairie Birds protocols is warranted. However, we suggest that both survey techniques, as currently employed, are deficient and recommend distance-based survey methods that provide species-specific estimates of detection probabilities be incorporated into these survey methods.
Estimates of ground-water recharge based on streamflow-hydrograph methods: Pennsylvania
Risser, Dennis W.; Conger, Randall W.; Ulrich, James E.; Asmussen, Michael P.
2005-01-01
This study, completed by the U.S. Geological Survey (USGS) in cooperation with the Pennsylvania Department of Conservation and Natural Resources, Bureau of Topographic and Geologic Survey (T&GS), provides estimates of ground-water recharge for watersheds throughout Pennsylvania computed by use of two automated streamflow-hydrograph-analysis methods--PART and RORA. The PART computer program uses a hydrograph-separation technique to divide the streamflow hydrograph into components of direct runoff and base flow. Base flow can be a useful approximation of recharge if losses and interbasin transfers of ground water are minimal. The RORA computer program uses a recession-curve displacement technique to estimate ground-water recharge from each storm period indicated on the streamflow hydrograph. Recharge estimates were made using streamflow records collected during 1885-2001 from 197 active and inactive streamflow-gaging stations in Pennsylvania where streamflow is relatively unaffected by regulation. Estimates of mean-annual recharge in Pennsylvania computed by the use of PART ranged from 5.8 to 26.6 inches; estimates from RORA ranged from 7.7 to 29.3 inches. Estimates from the RORA program were about 2 inches greater than those derived from the PART program. Mean-monthly recharge was computed from the RORA program and was reported as a percentage of mean-annual recharge. On the basis of this analysis, the major ground-water recharge period in Pennsylvania typically is November through May; the greatest monthly recharge typically occurs in March.
NASA Astrophysics Data System (ADS)
Herman, Matthew R.; Nejadhashemi, A. Pouyan; Abouali, Mohammad; Hernandez-Suarez, Juan Sebastian; Daneshvar, Fariborz; Zhang, Zhen; Anderson, Martha C.; Sadeghi, Ali M.; Hain, Christopher R.; Sharifi, Amirreza
2018-01-01
As the global demands for the use of freshwater resources continue to rise, it has become increasingly important to ensure the sustainability of these resources. This is accomplished through the use of management strategies that often utilize monitoring and the use of hydrological models. However, monitoring at large scales is not feasible and therefore model applications are becoming challenging, especially when spatially distributed datasets, such as evapotranspiration, are needed to understand model performance. Due to these limitations, most of the hydrological models are only calibrated against data obtained from site/point observations, such as streamflow. Therefore, the main focus of this paper is to examine whether the incorporation of remotely sensed and spatially distributed datasets can improve the overall performance of the model. In this study, actual evapotranspiration (ETa) data was obtained from two different sets of satellite based remote sensing data. One dataset estimates ETa based on the Simplified Surface Energy Balance (SSEBop) model while the other one estimates ETa based on the Atmosphere-Land Exchange Inverse (ALEXI) model. The hydrological model used in this study is the Soil and Water Assessment Tool (SWAT), which was calibrated against spatially distributed ETa and single point streamflow records for the Honeyoey Creek-Pine Creek Watershed, located in Michigan, USA. Two different techniques, multi-variable and genetic algorithm, were used to calibrate the SWAT model. Using the aforementioned datasets, the performance of the hydrological model in estimating ETa was improved using both calibration techniques by achieving Nash-Sutcliffe efficiency (NSE) values >0.5 (0.73-0.85), percent bias (PBIAS) values within ±25% (±21.73%), and root mean squared error - observations standard deviation ratio (RSR) values <0.7 (0.39-0.52). However, the genetic algorithm technique was more effective with the ETa calibration while significantly reducing the model performance for estimating the streamflow (NSE: 0.32-0.52, PBIAS: ±32.73%, and RSR: 0.63-0.82). Meanwhile, using the multi-variable technique, the model performance for estimating the streamflow was maintained with a high level of accuracy (NSE: 0.59-0.61, PBIAS: ±13.70%, and RSR: 0.63-0.64) while the evapotranspiration estimations were improved. Results from this assessment show that incorporating remotely sensed, spatially distributed data can improve hydrological model performance if it is coupled with the right calibration technique.
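The three goodness-of-fit statistics quoted above have standard definitions in the model-evaluation literature; a minimal sketch with hypothetical observed and simulated values:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: positive values indicate model underestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

obs = [2.1, 3.4, 2.8, 4.0, 3.1]     # hypothetical ETa observations
sim = [2.0, 3.6, 2.5, 3.8, 3.3]     # hypothetical SWAT output
print(nse(obs, sim), pbias(obs, sim), rsr(obs, sim))
```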
Radiometrically accurate scene-based nonuniformity correction for array sensors.
Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott
2003-10-01
A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R
2012-01-01
This study presents and validates a Time-Frequency technique for measuring 2-dimensional multijoint arm stiffness throughout a single planar movement as well as during static posture. It is proposed as an alternative to current regressive methods which require numerous repetitions to obtain average stiffness on a small segment of the hand trajectory. The method is based on the analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable with two well-known regressive techniques. We also test how the technique can identify the viscoelastic component of non-linear and higher than second order systems with a non-parametrical approach. The technique proposed here is very impervious to noise and can be used easily for both postural and movement tasks. Estimations of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases.
A model-based 3D template matching technique for pose acquisition of an uncooperative space object.
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele
2015-03-16
This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is presented.
360-degrees profilometry using strip-light projection coupled to Fourier phase-demodulation.
Servin, Manuel; Padilla, Moises; Garnica, Guillermo
2016-01-11
360 degrees (360°) digitalization of three dimensional (3D) solids using a projected light-strip is a well-established technique in academic and commercial profilometers. These profilometers project a light-strip over the digitizing solid while the solid is rotated a full revolution or 360-degrees. Then, a computer program typically extracts the centroid of this light-strip, and by triangulation one obtains the shape of the solid. Here, instead of using intensity-based light-strip centroid estimation, we propose to use Fourier phase-demodulation for 360° solid digitalization. The advantage of Fourier demodulation over strip-centroid estimation is that the accuracy of phase demodulation increases linearly with fringe density, whereas strip-light centroid-estimation errors are independent of it. Here we propose first to construct a carrier-frequency fringe-pattern by closely adding the individual light-strip images recorded while the solid is being rotated. Next, this high-density fringe-pattern is phase-demodulated using the standard Fourier technique. To test the feasibility of this Fourier demodulation approach, we have digitized two solids with increasing topographic complexity: a Rubik's cube and a plastic model of a human skull. According to our results, phase demodulation based on the Fourier technique is less noisy than triangulation based on centroid light-strip estimation. Moreover, Fourier demodulation also provides the amplitude of the analytic signal, which is valuable information for the visualization of surface details.
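The "standard Fourier technique" named above can be sketched in one dimension: transform the fringe signal, keep only the positive-frequency carrier lobe, inverse-transform, and take the angle minus the carrier ramp. The test signal below is hypothetical; a real profilometer applies this row-by-row to the composite fringe image.

```python
import numpy as np

# 1-D sketch of Fourier fringe demodulation: isolate the +f0 carrier lobe,
# inverse-transform to get the analytic signal, subtract the carrier, unwrap.
N = 1024
x = np.arange(N)
f0 = 0.08                                   # carrier frequency, cycles/pixel
phase = 1.5 * np.sin(2 * np.pi * x / N)     # hypothetical surface phase
I = 2.0 + np.cos(2 * np.pi * f0 * x + phase)

F = np.fft.fft(I)
freqs = np.fft.fftfreq(N)
mask = (freqs > f0 / 2) & (freqs < 3 * f0 / 2)      # keep +f0 sideband only
analytic = np.fft.ifft(F * mask)            # np.abs(analytic) is the amplitude
wrapped = np.angle(analytic) - 2 * np.pi * f0 * x
demod = np.unwrap(wrapped)
demod -= demod[N // 2] - phase[N // 2]      # fix the arbitrary phase offset
print(np.abs(demod - phase)[50:-50].max())  # small except near the edges
```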
Anderson, Kyle; Segall, Paul
2013-01-01
Physics-based models of volcanic eruptions can directly link magmatic processes with diverse, time-varying geophysical observations, and when used in an inverse procedure make it possible to bring all available information to bear on estimating properties of the volcanic system. We develop a technique for inverting geodetic, extrusive flux, and other types of data using a physics-based model of an effusive silicic volcanic eruption to estimate the geometry, pressure, depth, and volatile content of a magma chamber, and properties of the conduit linking the chamber to the surface. A Bayesian inverse formulation makes it possible to easily incorporate independent information into the inversion, such as petrologic estimates of melt water content, and yields probabilistic estimates for model parameters and other properties of the volcano. Probability distributions are sampled using a Markov chain Monte Carlo algorithm. We apply the technique using GPS and extrusion data from the 2004–2008 eruption of Mount St. Helens. In contrast to more traditional inversions such as those involving geodetic data alone in combination with kinematic forward models, this technique is able to provide constraint on properties of the magma, including its volatile content, and on the absolute volume and pressure of the magma chamber. Results suggest a large chamber of >40 km³ with a centroid depth of 11–18 km and a dissolved water content at the top of the chamber of 2.6–4.9 wt%.
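A generic random-walk Metropolis sampler of the kind used for such posterior sampling can be sketched as below; the toy linear forward model stands in for the physics-based eruption model, and all values are hypothetical.

```python
import numpy as np

def metropolis(log_post, theta0, steps=20000, scale=0.1, seed=0):
    """Random-walk Metropolis: a minimal stand-in for the MCMC used to sample
    posterior distributions of physical model parameters."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, float)
    lp = log_post(theta)
    chain = np.empty((steps, theta.size))
    for i in range(steps):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy inversion: observed data d = G(m) + noise, with G linear (hypothetical).
G = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.3]])
m_true = np.array([1.2, -0.7])
rng = np.random.default_rng(1)
d = G @ m_true + 0.05 * rng.standard_normal(3)

def log_post(m):                      # Gaussian likelihood + flat prior
    r = d - G @ m
    return -0.5 * np.sum(r ** 2) / 0.05 ** 2

chain = metropolis(log_post, [0.0, 0.0])
print(chain[5000:].mean(axis=0))      # posterior mean ≈ m_true
```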
NASA Technical Reports Server (NTRS)
Bedewi, Nabih E.; Yang, Jackson C. S.
1987-01-01
Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The mathematics of the technique is presented in addition to the results of computer simulations conducted to demonstrate the prediction of the response of the system and the random forcing function initially introduced to excite the system.
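The random decrement signature itself is simple to compute: average many segments of the response, each triggered where the signal crosses a chosen level, so that the random forcing averages out and the free-decay component remains. A minimal sketch on a toy single-DOF oscillator (all parameters hypothetical):

```python
import numpy as np

def random_decrement(x, trigger, seg_len):
    """Average segments of x starting wherever x up-crosses the trigger level:
    the ensemble average approximates the homogeneous (free-decay) response,
    to which mass/damping/stiffness models can then be fit."""
    idx = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0] + 1
    idx = idx[idx + seg_len <= len(x)]
    segs = np.stack([x[i:i + seg_len] for i in idx])
    return segs.mean(axis=0), len(idx)

# Toy data: a randomly excited single-DOF oscillator (hypothetical parameters).
rng = np.random.default_rng(0)
dt, n = 0.01, 50_000
wn, zeta = 2 * np.pi * 2.0, 0.03
x = np.zeros(n)
v = 0.0
for k in range(n - 1):                     # crude explicit integration
    a = -2 * zeta * wn * v - wn ** 2 * x[k] + 50.0 * rng.standard_normal()
    v += a * dt
    x[k + 1] = x[k] + v * dt
sig, count = random_decrement(x, trigger=x.std(), seg_len=400)
print(count, sig[:3])                      # decaying ~2 Hz oscillation
```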
H∞ state estimation of stochastic memristor-based neural networks with time-varying delays.
Bao, Haibo; Cao, Jinde; Kurths, Jürgen; Alsaedi, Ahmed; Ahmad, Bashir
2018-03-01
This paper addresses the problem of H∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of Filippov solution, the stochastic memristor-based neural networks are transformed into systems with interval parameters. The present paper is the first to investigate the H∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and some stochastic techniques, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H∞ performance. An explicit expression of the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results.
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. The performance of the MICCG and SICCG algorithms is compared with that of state-of-the-art approaches.
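The MICCG/SICCG recursions are not reproduced here, but the inversion-free principle behind them can be sketched: compute the MVDR weights w = R⁻¹d / (dᴴR⁻¹d) by solving R x = d with plain conjugate gradient instead of forming R⁻¹. Array size and steering vector below are hypothetical.

```python
import numpy as np

def cg_solve(R, b, iters=50, tol=1e-10):
    """Conjugate gradient for Hermitian positive-definite R, avoiding any
    explicit matrix inversion (the key point of CG-based beamformers)."""
    x = np.zeros_like(b)
    r = b - R @ x
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Rp = R @ p
        alpha = rs / np.vdot(p, Rp).real
        x += alpha * p
        r -= alpha * Rp
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# MVDR weights w = R^{-1} d / (d^H R^{-1} d) for one frequency bin.
rng = np.random.default_rng(0)
M = 8                                          # array elements
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T / M + np.eye(M)             # Hermitian PD sample covariance
d = np.exp(-1j * np.pi * np.arange(M) * 0.3)   # steering vector (hypothetical)
Rinv_d = cg_solve(R, d)
w = Rinv_d / np.vdot(d, Rinv_d)
print(np.vdot(d, w))                           # ≈ 1: distortionless constraint
```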
A Model-Based Approach to Inventory Stratification
Ronald E. McRoberts
2006-01-01
Forest inventory programs report estimates of forest variables for areas of interest ranging in size from municipalities to counties to States and Provinces. Classified satellite imagery has been shown to be an effective source of ancillary data that, when used with stratified estimation techniques, contributes to increased precision with little corresponding increase...
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
NASA Astrophysics Data System (ADS)
Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga
The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.
Reiter, M.E.; Andersen, D.E.
2008-01-01
Both egg flotation and egg candling have been used to estimate incubation day (often termed nest age) in nesting birds, but little is known about the relative accuracy of these two techniques. We used both egg flotation and egg candling to estimate incubation day for Canada Geese (Branta canadensis interior) nesting near Cape Churchill, Manitoba, from 2000 to 2007. We modeled variation in the difference between estimates of incubation day using each technique as a function of true incubation day, as well as variation in error rates with each technique as a function of the true incubation day. We also evaluated the effect of error in the estimated incubation day on estimates of daily survival rate (DSR) and nest success using simulations. The mean difference between concurrent estimates of incubation day based on egg flotation minus egg candling at the same nest was 0.85 ± 0.06 (SE) days. The positive difference in favor of egg flotation and the magnitude of the difference in estimates of incubation day did not vary as a function of true incubation day. Overall, both egg flotation and egg candling overestimated incubation day early in incubation and underestimated incubation day later in incubation. The average difference between true hatch date and estimated hatch date did not differ from zero (days) for egg flotation, but egg candling overestimated true hatch date by about 1 d (true - estimated; days). Our simulations suggested that error associated with estimating the incubation day of nests and subsequently exposure days using either egg candling or egg flotation would have minimal effects on estimates of DSR and nest success. Although egg flotation was slightly less biased, both methods provided comparable and accurate estimates of incubation day and subsequent estimates of hatch date and nest success throughout the entire incubation period. © 2008 Association of Field Ornithologists.
Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.
2016-01-01
Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354
Kinematic Measurement of Knee Prosthesis from Single-Plane Projection Images
NASA Astrophysics Data System (ADS)
Hirokawa, Shunji; Ariyoshi, Shogo; Takahashi, Kenji; Maruyama, Koichi
In this paper, the measurement of 3D motion from 2D perspective projections of a knee prosthesis is described. The technique reported by Banks and Hodge was further developed in this study. The estimation was performed in two steps. The first-step estimation was performed on the assumption of orthogonal projection. Then, the second-step estimation was subsequently carried out based upon the perspective projection to accomplish more accurate estimation. The simulation results have demonstrated that the technique achieved sufficient accuracy of position/orientation estimation for prosthetic kinematics. Then we applied our algorithm to the CCD images, thereby examining the influences of various artifacts, possibly incorporated through an imaging process, on the estimation accuracies. We found that accuracies in the experiment were influenced mainly by the geometric discrepancies between the prosthesis component and the computer-generated model and by the spatial inconsistencies between the coordinate axes of the positioner and those of the computer model. However, we verified that our algorithm could achieve proper and consistent estimation even for the CCD images.
Unterseher, Martin; Schnittler, Martin
2009-05-01
Two cultivation-based isolation techniques - the incubation of leaf fragments (fragment plating) and dilution-to-extinction culturing on malt extract agar - were compared for recovery of foliar endophytic fungi from Fagus sylvatica near Greifswald, north-east Germany. Morphological-anatomical characters of vegetative and sporulating cultures and ITS sequences were used to assign morphotypes and taxonomic information to the isolates. Data analysis included species-accumulation curves, richness estimators, multivariate statistics and null model testing. Fragment plating and extinction culturing were significantly complementary with regard to species composition, because around two-thirds of the 35 fungal taxa were isolated with only one of the two cultivation techniques. The difference in outcomes highlights the need for caution in assessing fungal biodiversity based upon single isolation techniques. The efficiency of cultivation-based studies of fungal endophytes was significantly increased with the combination of the two isolation methods and estimations of species richness, when compared with a 20-year-old reference study, which needed three times more isolates with fragment plating to attain the same species richness. Intensified testing and optimisation of extinction culturing in endophyte research is advocated.
A switched systems approach to image-based estimation
NASA Astrophysics Data System (ADS)
Parikh, Anup
With the advent of technological improvements in imaging systems and computational resources, as well as the development of image-based reconstruction techniques, it is necessary to understand algorithm performance when subject to real-world conditions. Specifically, this dissertation focuses on the stability and performance of a class of image-based observers in the presence of intermittent measurements, caused by, e.g., occlusions, limited FOV, feature tracking losses, communication losses, or finite frame rates. Observers or filters that are exponentially stable under persistent observability may have unbounded error growth during intermittent sensing, even while providing seemingly accurate state estimates. In Chapter 3, dwell time conditions are developed to guarantee state estimation error convergence to an ultimate bound for a class of observers while undergoing measurement loss. Bounds are developed on the unstable growth of the estimation errors during the periods when the object being tracked is not visible. A Lyapunov-based analysis for the switched system is performed to develop an inequality in terms of the duration of time the observer can view the moving object and the duration of time the object is out of the field of view. In Chapter 4, a motion model is used to predict the evolution of the states of the system while the object is not visible. This reduces the growth rate of the bounding function to an exponential and enables the use of traditional switched systems Lyapunov analysis techniques. The stability analysis results in an average dwell time condition to guarantee state error convergence with a known decay rate. In comparison with the results in Chapter 3, the estimation errors converge to zero rather than a ball, with relaxed switching conditions, at the cost of requiring additional information about the motion of the feature. In some applications, a motion model of the object may not be available. Numerous adaptive techniques have been developed to compensate for unknown parameters or functions in system dynamics; however, persistent excitation (PE) conditions are typically required to ensure parameter convergence, i.e., learning. Since the motion model is needed in the predictor, model learning is desired; however, PE is difficult to ensure a priori and infeasible to check online for nonlinear systems. Concurrent learning (CL) techniques have been developed to use recorded data and a relaxed excitation condition to ensure convergence. In CL, excitation is only required for a finite period of time, and the recorded data can be checked to determine if it is sufficiently rich. However, traditional CL requires knowledge of state derivatives, which are typically not measured and require extensive filter design and tuning to develop satisfactory estimates. In Chapter 5 of this dissertation, a novel formulation of CL is developed in terms of an integral (ICL), removing the need to estimate state derivatives while preserving parameter convergence properties. Using ICL, an estimator is developed in Chapter 6 for simultaneously estimating the pose of an object as well as learning a model of its motion for use in a predictor when the object is not visible. A switched systems analysis is provided to demonstrate the stability of the estimation and prediction with learning scheme. Dwell time conditions as well as excitation conditions are developed to ensure estimation errors converge to an arbitrarily small bound.
Experimental results are provided to illustrate the performance of each of the developed estimation schemes. The dissertation concludes with a discussion of the contributions and limitations of the developed techniques, as well as avenues for future extensions.
Remote sensing-based estimation of annual soil respiration at two contrasting forest sites
Gu, Lianhong; Huang, Ni; Black, T. Andrew; ...
2015-11-23
Soil respiration (Rs), an important component of the global carbon cycle, can be estimated using remotely sensed data, but the accuracy of this technique has not been thoroughly investigated. In this article, we propose a methodology for the remote estimation of annual Rs at two contrasting FLUXNET forest sites (a deciduous broadleaf forest and an evergreen needleleaf forest).
John R. Brooks
2007-01-01
A technique for estimating stand average dominant height based solely on field inventory data is investigated. Using only 45.0919 percent of the largest trees per acre in the diameter distribution resulted in estimates of average dominant height that were within 4.3 feet of the actual value, when averaged over stands of very different structure and history. Cubic foot...
NASA Astrophysics Data System (ADS)
Lee, T. R.; Wood, W. T.; Dale, J.
2017-12-01
Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse and large regions of the seafloor yet remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. Global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
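A minimal sketch of the KNN-with-validation workflow described above, using scikit-learn; the predictors and TOC values are synthetic stand-ins, and the paper's built-in predictor selection is simplified to choosing k by ten-fold cross-validation.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set: gridded predictors (e.g. bathymetry, distance
# from coast) at locations with TOC observations; values are synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))                   # 3 predictor fields
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.standard_normal(500)

# Ten-fold cross-validation to pick k, echoing the validation in the abstract.
best = None
for k in (3, 5, 10, 20):
    model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=k))
    score = cross_val_score(model, X, y,
                            cv=KFold(10, shuffle=True, random_state=0)).mean()
    if best is None or score > best[0]:
        best = (score, k)
print("best k:", best[1], "R^2:", round(best[0], 3))
```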
NASA Technical Reports Server (NTRS)
Williams, B. F.
1976-01-01
Manufacturing techniques are evaluated using expenses based on experience and studying basic cost factors for each step to evaluate expenses from a first-principles point of view. A formal cost accounting procedure is developed which is used throughout the study for cost comparisons. The first test of this procedure is a comparison of its predicted costs for array module manufacturing with costs from a study which is based on experience factors. A manufacturing cost estimate for array modules of $10/W is based on present-day manufacturing techniques, expenses, and materials costs.
Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion
NASA Astrophysics Data System (ADS)
Hesser, T.; Farthing, M. W.; Brodie, K.
2016-02-01
The bathymetry from the surfzone to the shoreline incurs frequent, active movement due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments to vessel-mounted single-beam or multi-beam sonar surveys, airborne bathymetric lidar, and inversion techniques from standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best-estimate of bathymetry" at a given time. Understanding how the sources of error and varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and in turn increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed using linear wave theory from the direct measurements. These gridded datasets can have temporal and spatial resolutions that do not match the desired model parameters and therefore could lead to a reduction in the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys as well as alternative direct in-situ measurements using sonic altimeters.
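A generic stochastic ensemble Kalman filter analysis step, of the type such inversions build on, can be sketched as below; this is not the authors' specific framework, and the state size, observation operator, and noise levels are hypothetical.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.
    X: (n, N) state ensemble, y: (m,) observation, H: (m, n), R: (m, m)."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    P = A @ A.T / (N - 1)                        # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
n, N = 20, 50                                    # depth nodes, ensemble members
truth = 5.0 + np.sin(np.linspace(0, np.pi, n))   # hypothetical depth profile
X = truth[:, None] + rng.standard_normal((n, N)) # prior ensemble around truth
H = np.zeros((3, n)); H[0, 2] = H[1, 10] = H[2, 17] = 1.0   # 3 point gauges
R = 0.05 * np.eye(3)
y = H @ truth + rng.multivariate_normal(np.zeros(3), R)
Xa = enkf_update(X, y, H, R, rng)
print(np.abs(Xa.mean(axis=1) - truth).mean())    # error shrinks after update
```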
Study of synthesis techniques for insensitive aircraft control systems
NASA Technical Reports Server (NTRS)
Harvey, C. A.; Pope, R. E.
1977-01-01
Insensitive flight control system design criteria were defined in terms of maximizing performance (handling qualities, RMS gust response, transient response, stability margins) over a defined parameter range. Wing load alleviation for the C-5A was chosen as a design problem. The C-5A model was a 79-state, two-control structure with uncertainties assumed to exist in dynamic pressure, structural damping and frequency, and the stability derivative, M sub w. Five new techniques (mismatch estimation, uncertainty weighting, finite dimensional inverse, maximum difficulty, dual Lyapunov) were developed. Six existing techniques (additive noise, minimax, multiplant, sensitivity vector augmentation, state dependent noise, residualization) and the mismatch estimation and uncertainty weighting techniques were synthesized and evaluated on the design example. Evaluation and comparison of these eight techniques indicated that the minimax and the uncertainty weighting techniques were superior to the other six, and of these two, uncertainty weighting has lower computational requirements. Techniques based on the three remaining new concepts appear promising and are recommended for further research.
NASA Astrophysics Data System (ADS)
Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba
2018-10-01
This paper is concerned with a backward heat conduction problem with a time-dependent thermal diffusivity factor in an infinite "strip". This problem is severely ill-posed, owing to the unbounded amplification of high-frequency components. A new regularization method based on the Meyer wavelet technique is developed to solve the considered problem. Using the Meyer wavelet technique, some new stable estimates of Hölder and logarithmic type are proposed, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.
Effect of non-Poisson samples on turbulence spectra from laser velocimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sree, D.; Kjelgaard, S.O.; Sellers, W.L. III
1994-12-01
Spectral estimations from LV data are typically based on the assumption of a Poisson sampling process. It is demonstrated here that the sampling distribution must be considered before spectral estimates are used to infer turbulence scales. A non-Poisson sampling process can occur if there is a nonhomogeneous distribution of particles in the flow. Based on the study of a simulated first-order spectrum, it has been shown that a non-Poisson sampling process causes the estimated spectrum to deviate from the true spectrum. Also, in this case the prefiltering techniques do not improve the spectral estimates at higher frequencies.
Nelson, Jr. Ralph M.
1982-01-01
Eighteen experimental fires were used to compare measured and calculated values for emission factors and fuel consumption to evaluate the carbon balance technique. The technique is based on a model for the emission factor of carbon dioxide, corrected for the production of other emissions, and which requires measurements of effluent concentrations and air volume in the...
Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.
Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih
2016-10-01
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
Microbial Growth and Metabolism in Soil - Refining the Interpretation of Carbon Use Efficiency
NASA Astrophysics Data System (ADS)
Geyer, K.; Frey, S. D.
2016-12-01
Carbon use efficiency (CUE) describes a critical step in the terrestrial carbon cycle where microorganisms partition organic carbon (C) between stabilized organic forms and CO2. Application of this concept, however, begins with accurate measurements of CUE. Both traditional and developing approaches still depend on numerous assumptions that render them difficult to interpret and potentially incompatible with one another. Here we explore the soil processes inherent to traditional (e.g., substrate-based, biomass-based) and emerging (e.g., growth rate-based, calorimetry) CUE techniques in order to better understand the information they provide. Soil from the Harvard Forest Long Term Ecological Research (LTER) site in Massachusetts, USA, was amended with both 13C-glucose and 18O-water and monitored over 72 h for changes in dissolved organic carbon (DOC), respiration (R), microbial biomass (MB), DNA synthesis, and heat flux (Q). Four different CUE estimates were calculated: 1) (ΔDOC - R)/ΔDOC (substrate-based), 2) Δ13C-MB/(Δ13C-MB + R) (biomass-based), 3) Δ18O-DNA/(Δ18O-DNA + R) (growth rate-based), 4) Q/R (energy-based). Our results indicate that microbial growth (estimated by both 13C and 18O techniques) was delayed for 40 h after amendment even though DOC had declined to pre-amendment levels within 48 h. Respiration and heat flux also peaked after 40 h. Although these soils have a relatively high organic C content (5% C), respired CO2 was greater than 88% glucose-derived throughout the experiment. All estimates of microbial growth (Spearman's ρ >0.83, p<0.01) and efficiency (Spearman's ρ >0.65, p<0.05) were positively correlated, but strong differences in the magnitude of CUE suggest incomplete C accounting. This work increases the transparency of CUE techniques for researchers looking to choose the most appropriate measure for their scale of inquiry or to use CUE estimates in modeling applications.
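The four estimates listed above are direct arithmetic on measured pool changes; a minimal transcription of those formulas with hypothetical numbers (units arbitrary but consistent):

```python
# The four CUE estimates from the abstract, computed from hypothetical pool
# changes; R is cumulative respired CO2-C over the incubation.
d_doc = 100.0        # decline in dissolved organic carbon
R = 55.0             # respired CO2-C
d_mb13 = 30.0        # gain in 13C-labelled microbial biomass C
d_dna18 = 20.0       # 18O-based new-DNA (growth) carbon equivalent
Q = 28.0             # heat flux from calorimetry

cue_substrate = (d_doc - R) / d_doc          # 1) substrate-based
cue_biomass = d_mb13 / (d_mb13 + R)          # 2) biomass-based
cue_growth = d_dna18 / (d_dna18 + R)         # 3) growth rate-based
ratio_energy = Q / R                         # 4) energy-based (Q/R ratio)
print(cue_substrate, cue_biomass, cue_growth, ratio_energy)
```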
Nagwani, Naresh Kumar; Deo, Shirish V
2014-01-01
Understanding of the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, and proportioning new mixtures, and for quality assurance. Regression techniques are most widely used for prediction tasks where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of the regression techniques for prediction can be improved if clustering is used along with regression. Clustering along with regression ensures more accurate curve fitting between the dependent and independent variables. In this work a cluster regression technique is applied for estimating the compressive strength of concrete, and a novel approach is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression yields fewer prediction errors for estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and then in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression techniques gives minimum errors for predicting the compressive strength of concrete; also, the fuzzy C-means clustering algorithm performs better than the K-means algorithm.
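A minimal sketch of the two-stage cluster-then-regress idea, using K-means for brevity even though the abstract reports fuzzy C-means performing better; the mix-design data are synthetic stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Stage 1: group mixes with similar characteristics; Stage 2: fit a separate
# regression within each cluster. Features stand in for mix variables
# (e.g. water/cement ratio, aggregate content); strengths are synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))
y = np.where(X[:, 0] > 0.5, 60 - 40 * X[:, 1], 30 + 20 * X[:, 1])
y = y + rng.standard_normal(300)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(km.n_clusters)}

def predict(x_new):
    c = km.predict(x_new)               # route each sample to its cluster model
    return np.array([models[ci].predict(xi[None, :])[0]
                     for ci, xi in zip(c, x_new)])

print(predict(np.array([[0.8, 0.2], [0.2, 0.9]])))
```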
Estimation of under-reporting in epidemics using approximations.
Gamado, Kokouvi; Streftaris, George; Zachary, Stan
2017-06-01
Under-reporting in epidemics, when it is ignored, leads to under-estimation of the infection rate and therefore of the reproduction number. In the case of stochastic models with temporal data, a usual approach for dealing with such issues is to apply data augmentation techniques through Bayesian methodology. Departing from earlier literature approaches implemented using reversible jump Markov chain Monte Carlo (RJMCMC) techniques, we make use of approximations to obtain faster estimation with simple MCMC. Comparisons among the methods developed here, and with the RJMCMC approach, are carried out and highlight that approximation-based methodology offers useful alternative inference tools for large epidemics, with a good trade-off between time cost and accuracy.
NASA Astrophysics Data System (ADS)
Lindley, S. J.; Walsh, T.
There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with their distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at un-sampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, could be recognised in any further usage of the data and also in the assessment of the extent of an exceedence of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques and the differences in the characteristics of these result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper provides an examination of uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and annual average concentrations in 2001. Patterns of concentrations demonstrate considerable differences in the estimated spatial pattern of maxima as the combined effects of chemical processes, topography and meteorology. In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced but these uncertainties vary widely from area to area. In view of the uncertainties with classical techniques research is ongoing to develop alternative methods which should in time help improve the suite of tools available to air quality managers.
Bank Regulation: Analysis of the Failure of Superior Bank, FSB, Hinsdale, Illinois
2002-02-07
statement of financial position based on the fair value . The best evidence of fair value is a quoted market price in an active market, but if there is no...market price, the value must be estimated. In estimating the fair value of retained interests, valuation techniques include estimating the present...about interest rates, default, prepayment, and volatility. In 1999, FASB explained that when estimating the fair value for 7FAS No. 140: Accounting for
NASA Technical Reports Server (NTRS)
Schkolnik, Gerard S.
1993-01-01
The application of an adaptive real-time measurement-based performance optimization technique is being explored for a future flight research program. The key technical challenge of the approach is parameter identification, which uses a perturbation-search technique to identify changes in performance caused by forced oscillations of the controls. The controls on the NASA F-15 highly integrated digital electronic control (HIDEC) aircraft were perturbed using inlet cowl rotation steps at various subsonic and supersonic flight conditions to determine the effect on aircraft performance. The feasibility of the perturbation-search technique for identifying integrated airframe-propulsion system performance effects was successfully shown through flight experiments and postflight data analysis. Aircraft response and control data were analyzed postflight to identify gradients and to determine the minimum drag point. Changes in longitudinal acceleration as small as 0.004 g were measured, and absolute resolution was estimated to be 0.002 g or approximately 50 lbf of drag. Two techniques for identifying performance gradients were compared: a least-squares estimation algorithm and a modified maximum likelihood estimator algorithm. A complementary filter algorithm was used with the least squares estimator.
NASA Technical Reports Server (NTRS)
Schkolnik, Gerald S.
1993-01-01
The application of an adaptive real-time measurement-based performance optimization technique is being explored for a future flight research program. The key technical challenge of the approach is parameter identification, which uses a perturbation-search technique to identify changes in performance caused by forced oscillations of the controls. The controls on the NASA F-15 highly integrated digital electronic control (HIDEC) aircraft were perturbed using inlet cowl rotation steps at various subsonic and supersonic flight conditions to determine the effect on aircraft performance. The feasibility of the perturbation-search technique for identifying integrated airframe-propulsion system performance effects was successfully shown through flight experiments and postflight data analysis. Aircraft response and control data were analyzed postflight to identify gradients and to determine the minimum drag point. Changes in longitudinal acceleration as small as 0.004 g were measured, and absolute resolution was estimated to be 0.002 g or approximately 50 lbf of drag. Two techniques for identifying performance gradients were compared: a least-squares estimation algorithm and a modified maximum likelihood estimator algorithm. A complementary filter algorithm was used with the least squares estimator.
Regional Distribution of Forest Height and Biomass from Multisensor Data Fusion
NASA Technical Reports Server (NTRS)
Yu, Yifan; Saatchi, Sassan; Heath, Linda S.; LaPoint, Elizabeth; Myneni, Ranga; Knyazikhin, Yuri
2010-01-01
Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM derived elevation (30 m), Landsat Enhanced Thematic Mapper (ETM) bands (30 m), derived vegetation index (VI) and NLCD2001 land cover map. The first fusion algorithm corrects for missing or erroneous NED data using an iterative interpolation approach and produces distribution of scattering phase centers from SRTM-NED in three dominant forest types of evergreen conifers, deciduous, and mixed stands. The second fusion technique integrates the USDA Forest Service, Forest Inventory and Analysis (FIA) ground-based plot data to develop an algorithm to transform the scattering phase centers into mean forest height and aboveground biomass. Height estimates over evergreen (R2 = 0.86, P < 0.001; RMSE = 1.1 m) and mixed forests (R2 = 0.93, P < 0.001, RMSE = 0.8 m) produced the best results. Estimates over deciduous forests were less accurate because of the winter acquisition of SRTM data and loss of scattering phase center from tree ]surface interaction. We used two methods to estimate AGLB; algorithms based on direct estimation from the scattering phase center produced higher precision (R2 = 0.79, RMSE = 25 Mg/ha) than those estimated from forest height (R2 = 0.25, RMSE = 66 Mg/ha). We discuss sources of uncertainty and implications of the results in the context of mapping regional and continental scale forest biomass distribution.
Ionospheric Plasma Drift Analysis Technique Based On Ray Tracing
NASA Astrophysics Data System (ADS)
Ari, Gizem; Toker, Cenk
2016-07-01
Ionospheric drift measurements provide important information about the variability in the ionosphere, which can be used to quantify ionospheric disturbances caused by natural phenomena such as solar, geomagnetic, gravitational and seismic activities. One of the prominent ways for drift measurement depends on instrumentation based measurements, e.g. using an ionosonde. The drift estimation of an ionosonde depends on measuring the Doppler shift on the received signal, where the main cause of Doppler shift is the change in the length of the propagation path of the signal between the transmitter and the receiver. Unfortunately, ionosondes are expensive devices and their installation and maintenance require special care. Furthermore, the ionosonde network over the world or even Europe is not dense enough to obtain a global or continental drift map. In order to overcome the difficulties related to an ionosonde, we propose a technique to perform ionospheric drift estimation based on ray tracing. First, a two dimensional TEC map is constructed by using the IONOLAB-MAP tool which spatially interpolates the VTEC estimates obtained from the EUREF CORS network. Next, a three dimensional electron density profile is generated by inputting the TEC estimates to the IRI-2015 model. Eventually, a close-to-real situation electron density profile is obtained in which ray tracing can be performed. These profiles can be constructed periodically with a period of as low as 30 seconds. By processing two consequent snapshots together and calculating the propagation paths, we estimate the drift measurements over any coordinate of concern. We test our technique by comparing the results to the drift measurements taken at the DPS ionosonde at Pruhonice, Czech Republic. This study is supported by TUBITAK 115E915 and Joint TUBITAK 114E092 and AS CR14/001 projects.
Validating precision estimates in horizontal wind measurements from a Doppler lidar
Newsom, Rob K.; Brewer, W. Alan; Wilczak, James M.; ...
2017-03-30
Results from a recent field campaign are used to assess the accuracy of wind speed and direction precision estimates produced by a Doppler lidar wind retrieval algorithm. The algorithm, which is based on the traditional velocity-azimuth-display (VAD) technique, estimates the wind speed and direction measurement precision using standard error propagation techniques, assuming the input data (i.e., radial velocities) to be contaminated by random, zero-mean, errors. For this study, the lidar was configured to execute an 8-beam plan-position-indicator (PPI) scan once every 12 min during the 6-week deployment period. Several wind retrieval trials were conducted using different schemes for estimating themore » precision in the radial velocity measurements. Here, the resulting wind speed and direction precision estimates were compared to differences in wind speed and direction between the VAD algorithm and sonic anemometer measurements taken on a nearby 300 m tower.« less
A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation
NASA Technical Reports Server (NTRS)
Negri, Andrew J.; Adler, Robert F.; Xu, Li-Ming
2003-01-01
This paper presents the development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during summer 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR- based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and t i f m rainfall will be presented. The technique is validated using available data sets and compared to other global rainfall products such as Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.
Estimation of accuracy of earth-rotation parameters in different frequency bands
NASA Astrophysics Data System (ADS)
Vondrak, J.
1986-11-01
The accuracies of earth-rotation parameters as determined by five different observational techniques now available (i.e., optical astrometry /OA/, Doppler tracking of satellites /DTS/, satellite laser ranging /SLR/, very long-base interferometry /VLBI/ and lunar laser ranging /LLR/) are estimated. The differences between the individual techniques in all possible combinations, separated by appropriate filters into three frequency bands, were used to estimate the accuracies of the techniques for periods from 0 to 200 days, from 200 to 1000 days and longer than 1000 days. It is shown that for polar motion the most accurate results are obtained with VLBI anad SLR, especially in the short-period region; OA and DTS are less accurate, but with longer periods the differences in accuracy are less pronounced. The accuracies of UTI-UTC as determined by OA, VLBI and LLR are practically equivalent, the differences being less than 40 percent.
NASA Technical Reports Server (NTRS)
Hunter, H. E.; Amato, R. A.
1972-01-01
The results are presented of the application of Avco Data Analysis and Prediction Techniques (ADAPT) to derivation of new algorithms for the prediction of future sunspot activity. The ADAPT derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction in the errors of estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithm for prediction of the entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle based on any available portion of the cycle.
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work is described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumption regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
Bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively smaller sample. In this paper, sample size estimation to compare two parallel-design arms for continuous data by bootstrap procedure are presented for various test types (inequality, non-inferiority, superiority, and equivalence), respectively. Meanwhile, sample size calculation by mathematical formulas (normal distribution assumption) for the identical data are also carried out. Consequently, power difference between the two calculation methods is acceptably small for all the test types. It shows that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric statistical method of Wilcoxon test was applied to compare the two groups in the data during the process of bootstrap power estimation. As a result, the power estimated by normal distribution-based formula is far larger than that by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the beginning, and that the same statistical method as used in the subsequent statistical analysis is employed for each bootstrap sample during the course of bootstrap sample size estimation, provided there is historical true data available that can be well representative of the population to which the proposed trial is planning to extrapolate.
Evaluating uses of data mining techniques in propensity score estimation: a simulation study.
Setoguchi, Soko; Schneeweiss, Sebastian; Brookhart, M Alan; Glynn, Robert J; Cook, E Francis
2008-06-01
In propensity score modeling, it is a standard practice to optimize the prediction of exposure status based on the covariate information. In a simulation study, we examined in what situations analyses based on various types of exposure propensity score (EPS) models using data mining techniques such as recursive partitioning (RP) and neural networks (NN) produce unbiased and/or efficient results. We simulated data for a hypothetical cohort study (n = 2000) with a binary exposure/outcome and 10 binary/continuous covariates with seven scenarios differing by non-linear and/or non-additive associations between exposure and covariates. EPS models used logistic regression (LR) (all possible main effects), RP1 (without pruning), RP2 (with pruning), and NN. We calculated c-statistics (C), standard errors (SE), and bias of exposure-effect estimates from outcome models for the PS-matched dataset. Data mining techniques yielded higher C than LR (mean: NN, 0.86; RPI, 0.79; RP2, 0.72; and LR, 0.76). SE tended to be greater in models with higher C. Overall bias was small for each strategy, although NN estimates tended to be the least biased. C was not correlated with the magnitude of bias (correlation coefficient [COR] = -0.3, p = 0.1) but increased SE (COR = 0.7, p < 0.001). Effect estimates from EPS models by simple LR were generally robust. NN models generally provided the least numerically biased estimates. C was not associated with the magnitude of bias but was with the increased SE.
Gaussian Process Kalman Filter for Focal Plane Wavefront Correction and Exoplanet Signal Extraction
NASA Astrophysics Data System (ADS)
Sun, He; Kasdin, N. Jeremy
2018-01-01
Currently, the ultimate limitation of space-based coronagraphy is the ability to subtract the residual PSF after wavefront correction to reveal the planet. Called reference difference imaging (RDI), the technique consists of conducting wavefront control to collect the reference point spread function (PSF) by observing a bright star, and then extracting target planet signals by subtracting a weighted sum of reference PSFs. Unfortunately, this technique is inherently inefficient because it spends a significant fraction of the observing time on the reference star rather than the target star with the planet. Recent progress in model based wavefront estimation suggests an alternative approach. A Kalman filter can be used to estimate the stellar PSF for correction by the wavefront control system while simultaneously estimating the planet signal. Without observing the reference star, the (extended) Kalman filter directly utilizes the wavefront correction data and combines the time series observations and model predictions to estimate the stellar PSF and planet signals. Because wavefront correction is used during the entire observation with no slewing, the system has inherently better stability. In this poster we show our results aimed at further improving our Kalman filter estimation accuracy by including not only temporal correlations but also spatial correlations among neighboring pixels in the images. This technique is known as a Gaussian process Kalman filter (GPKF). We also demonstrate the advantages of using a Kalman filter rather than RDI by simulating a real space exoplanet detection mission.
NASA Astrophysics Data System (ADS)
Stiefenhofer, Johann; Thurston, Malcolm L.; Bush, David E.
2018-04-01
Microdiamonds offer several advantages as a resource estimation tool, such as access to deeper parts of a deposit which may be beyond the reach of large diameter drilling (LDD) techniques, the recovery of the total diamond content in the kimberlite, and a cost benefit due to the cheaper treatment cost compared to large diameter samples. In this paper we take the first step towards local estimation by showing that micro-diamond samples can be treated as a regionalised variable suitable for use in geostatistical applications and we show examples of such output. Examples of microdiamond variograms are presented, the variance-support relationship for microdiamonds is demonstrated and consistency of the diamond size frequency distribution (SFD) is shown with the aid of real datasets. The focus therefore is on why local microdiamond estimation should be possible, not how to generate such estimates. Data from our case studies and examples demonstrate a positive correlation between micro- and macrodiamond sample grades as well as block estimates. This relationship can be demonstrated repeatedly across multiple mining operations. The smaller sample support size for microdiamond samples is a key difference between micro- and macrodiamond estimates and this aspect must be taken into account during the estimation process. We discuss three methods which can be used to validate or reconcile the estimates against macrodiamond data, either as estimates or in the form of production grades: (i) reconcilliation using production data, (ii) by comparing LDD-based grade estimates against microdiamond-based estimates and (iii) using simulation techniques.
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
A simplified technique for delivering total body irradiation (TBI) with improved dose homogeneity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao Rui; Bernard, Damian; Turian, Julius
2012-04-15
Purpose: Total body irradiation (TBI) with megavoltage photon beams has been accepted as an important component of management for a number of hematologic malignancies, generally as part of bone marrow conditioning regimens. The purpose of this paper is to present and discuss the authors' TBI technique, which both simplifies the treatment process and improves the treatment quality. Methods: An AP/PA TBI treatment technique to produce uniform dose distributions using sequential collimator reductions during each fraction was implemented, and a sample calculation worksheet is presented. Using this methodology, the dosimetric characteristics of both 6 and 18 MV photon beams, including lungmore » dose under cerrobend blocks was investigated. A method of estimating midplane lung doses based on measured entrance and exit doses was proposed, and the estimated results were compared with measurements. Results: Whole body midplane dose uniformity of {+-}10% was achieved with no more than two collimator-based beam modulations. The proposed model predicted midplane lung doses 5% to 10% higher than the measured doses for 6 and 18 MV beams. The estimated total midplane doses were within {+-}5% of the prescribed midplane dose on average except for the lungs where the doses were 6% to 10% lower than the prescribed dose on average. Conclusions: The proposed TBI technique can achieve dose uniformity within {+-}10%. This technique is easy to implement and does not require complicated dosimetry and/or compensators.« less
NASA Astrophysics Data System (ADS)
Singh, Arun K.; Auton, Gregory; Hill, Ernie; Song, Aimin
2018-07-01
Due to a very high carrier concentration and low band gap, graphene based self-switching diodes do not demonstrate a very high rectification ratio. Despite that, it takes the advantage of graphene’s high carrier mobility and has been shown to work at very high microwave frequencies. However, the AC component of these devices is hidden in the very linear current–voltage characteristics. Here, we extract and quantitatively study the device capacitance that determines the device nonlinearity by implementing a conformal mapping technique. The estimated value of the nonlinear component or curvature coefficient from DC results based on Shichman–Hodges model predicts the rectified output voltage, which is in good agreement with the experimental RF results.
NASA Technical Reports Server (NTRS)
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
Statistical segmentation of multidimensional brain datasets
NASA Astrophysics Data System (ADS)
Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro
2001-07-01
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques like partial volume effects (PVE), processing speed and difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold-standard. Our results were more robust and closer to the gold-standard.
Hybrid estimation of complex systems.
Hofbaur, Michael W; Williams, Brian C
2004-10-01
Modern automated systems evolve both continuously and discretely, and hence require estimation techniques that go well beyond the capability of a typical Kalman Filter. Multiple model (MM) estimation schemes track these system evolutions by applying a bank of filters, one for each discrete system mode. Modern systems, however, are often composed of many interconnected components that exhibit rich behaviors, due to complex, system-wide interactions. Modeling these systems leads to complex stochastic hybrid models that capture the large number of operational and failure modes. This large number of modes makes a typical MM estimation approach infeasible for online estimation. This paper analyzes the shortcomings of MM estimation, and then introduces an alternative hybrid estimation scheme that can efficiently estimate complex systems with large number of modes. It utilizes search techniques from the toolkit of model-based reasoning in order to focus the estimation on the set of most likely modes, without missing symptoms that might be hidden amongst the system noise. In addition, we present a novel approach to hybrid estimation in the presence of unknown behavioral modes. This leads to an overall hybrid estimation scheme for complex systems that robustly copes with unforeseen situations in a degraded, but fail-safe manner.
Briggs, Brandi N; Stender, Michael E; Muljadi, Patrick M; Donnelly, Meghan A; Winn, Virginia D; Ferguson, Virginia L
2015-06-25
Clinical practice requires improved techniques to assess human cervical tissue properties, especially at the internal os, or orifice, of the uterine cervix. Ultrasound elastography (UE) holds promise for non-invasively monitoring cervical stiffness throughout pregnancy. However, this technique provides qualitative strain images that cannot be linked to a material property (e.g., Young's modulus) without knowledge of the contact pressure under a rounded transvaginal transducer probe and correction for the resulting non-uniform strain dissipation. One technique to standardize elastogram images incorporates a material of known properties and uses one-dimensional, uniaxial Hooke's law to calculate Young's modulus within the compressed material half-space. However, this method does not account for strain dissipation and the strains that evolve in three-dimensional space. We demonstrate that an analytical approach based on 3D Hertzian contact mechanics provides a reasonable first approximation to correct for UE strain dissipation underneath a round transvaginal transducer probe and thus improves UE-derived estimates of tissue modulus. We validate the proposed analytical solution and evaluate sources of error using a finite element model. As compared to 1D uniaxial Hooke's law, the Hertzian contact-based solution yields significantly improved Young's modulus predictions in three homogeneous gelatin tissue phantoms possessing different moduli. We also demonstrate the feasibility of using this technique to image human cervical tissue, where UE-derived moduli estimations for the uterine cervix anterior lip agreed well with published, experimentally obtained values. Overall, UE with an attached reference standard and a Hertzian contact-based correction holds promise for improving quantitative estimates of cervical tissue modulus. Copyright © 2015 Elsevier Ltd. All rights reserved.
Weight estimation techniques for composite airplanes in general aviation industry
NASA Technical Reports Server (NTRS)
Paramasivam, T.; Horn, W. J.; Ritter, J.
1986-01-01
Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for the weight estimation of aircraft components such as wing, fuselage and empennage. Regression analysis was applied to the basic equations for a data base of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.
Robust regression on noisy data for fusion scaling laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be; Laboratoire de Physique des Plasmas de l'ERM - Laboratorium voor Plasmafysica van de KMS
2014-11-15
We introduce the method of geodesic least squares (GLS) regression for estimating fusion scaling laws. Based on straightforward principles, the method is easily implemented, yet it clearly outperforms established regression techniques, particularly in cases of significant uncertainty on both the response and predictor variables. We apply GLS for estimating the scaling of the L-H power threshold, resulting in estimates for ITER that are somewhat higher than predicted earlier.
Precise Image-Based Motion Estimation for Autonomous Small Body Exploration
NASA Technical Reports Server (NTRS)
Johnson, Andrew E.; Matthies, Larry H.
1998-01-01
Space science and solar system exploration are driving NASA to develop an array of small body missions ranging in scope from near body flybys to complete sample return. This paper presents an algorithm for onboard motion estimation that will enable the precision guidance necessary for autonomous small body landing. Our techniques are based on automatic feature tracking between a pair of descent camera images followed by two frame motion estimation and scale recovery using laser altimetry data. The output of our algorithm is an estimate of rigid motion (attitude and position) and motion covariance between frames. This motion estimate can be passed directly to the spacecraft guidance and control system to enable rapid execution of safe and precise trajectories.
Estimating the number of double-strand breaks formed during meiosis from partial observation.
Toyoizumi, Hiroshi; Tsubouchi, Hideo
2012-12-01
Analyzing the basic mechanism of DNA double-strand breaks (DSB) formation during meiosis is important for understanding sexual reproduction and genetic diversity. The location and amount of meiotic DSBs can be examined by using a common molecular biological technique called Southern blotting, but only a subset of the total DSBs can be observed; only DSB fragments still carrying the region recognized by a Southern blot probe are detected. With the assumption that DSB formation follows a nonhomogeneous Poisson process, we propose two estimators of the total number of DSBs on a chromosome: (1) an estimator based on the Nelson-Aalen estimator, and (2) an estimator based on a record value process. Further, we compared their asymptotic accuracy.
Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging
NASA Astrophysics Data System (ADS)
Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.
2014-04-01
The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), measurements angular positions, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. The following angular measurements only spread the total dose to the measurements without improving or worsening CRLB, but the added measurements may improve parametric images by reducing estimation bias. Next, using CRLB we evaluate the multiple-image radiography, diffraction enhanced imaging and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.
Noise and Analyzer-Crystal Angular Position Analysis for Analyzer-Based Phase-Contrast Imaging
Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.
2014-01-01
The analyzer-based phase-contrast X-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile (AIP) of the X-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), measurements angular positions, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this manuscript is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. The following angular measurements only spread the total dose to the measurements without improving or worsening CRLB, but the added measurements may improve parametric images by reducing estimation bias. Next, using CRLB we evaluate the Multiple-Image Radiography (MIR), Diffraction Enhanced Imaging (DEI) and Scatter Diffraction Enhanced Imaging (S-DEI) estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique. PMID:24651402
Duncan C. Lutes; Robert E. Keane
2006-01-01
The Fuel Load method (FL) is used to sample dead and down woody debris, determine depth of the duff/ litter profile, estimate the proportion of litter in the profile, and estimate total vegetative cover and dead vegetative cover. Down woody debris (DWD) is sampled using the planar intercept technique based on the methodology developed by Brown (1974). Pieces of dead...
Due to the lack of an analytical technique for directly quantifying the atmospheric concentrations of primary (OCpri) and secondary (OCsec) organic carbon aerosols, different indirect methods have been developed to estimate their concentrations. In this stu...
Salary Structure Effects and the Gender Pay Gap in Academia
ERIC Educational Resources Information Center
Barbezat, Debra A.; Hughes, James W.
2005-01-01
This paper presents estimates of the gender salary gap and discrimination based on the most recent national faculty survey data. New estimates for 1999 indicate that male faculty members still earn 20.7% more than comparable female colleagues. Depending upon which decomposition technique is employed, the portion of this gap attributable to…
ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.
ERIC Educational Resources Information Center
Vale, C. David; Gialluca, Kathleen A.
ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated this procedure using Monte Carlo Simulation Techniques. The current version of ASCAL was then compared to…
FPGA-based architecture for motion recovering in real-time
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Maya-Rueda, Selene E.; Torres-Huitzil, Cesar
2002-03-01
A key problem in the computer vision field is the measurement of object motion in a scene. The main goal is to compute an approximation of the 3D motion from the analysis of an image sequence. Once computed, this information can be used as a basis to reach higher level goals in different applications. Motion estimation algorithms pose a significant computational load for the sequential processors limiting its use in practical applications. In this work we propose a hardware architecture for motion estimation in real time based on FPGA technology. The technique used for motion estimation is Optical Flow due to its accuracy, and the density of velocity estimation, however other techniques are being explored. The architecture is composed of parallel modules working in a pipeline scheme to reach high throughput rates near gigaflops. The modules are organized in a regular structure to provide a high degree of flexibility to cover different applications. Some results will be presented and the real-time performance will be discussed and analyzed. The architecture is prototyped in an FPGA board with a Virtex device interfaced to a digital imager.
Dombrowski, Kirk; Khan, Bilal; Wendel, Travis; McLean, Katherine; Misshula, Evan; Curtis, Ric
2012-12-01
As part of a recent study of the dynamics of the retail market for methamphetamine use in New York City, we used network sampling methods to estimate the size of the total networked population. This process involved sampling from respondents' list of co-use contacts, which in turn became the basis for capture-recapture estimation. Recapture sampling was based on links to other respondents derived from demographic and "telefunken" matching procedures-the latter being an anonymized version of telephone number matching. This paper describes the matching process used to discover the links between the solicited contacts and project respondents, the capture-recapture calculation, the estimation of "false matches", and the development of confidence intervals for the final population estimates. A final population of 12,229 was estimated, with a range of 8235 - 23,750. The techniques described here have the special virtue of deriving an estimate for a hidden population while retaining respondent anonymity and the anonymity of network alters, but likely require larger sample size than the 132 persons interviewed to attain acceptable confidence levels for the estimate.
Pimkumwong, Narongrit; Wang, Ming-Shyan
2018-02-01
This paper presents another control method for the three-phase induction motor that is direct torque control based on constant voltage per frequency control technique. This method uses the magnitude of stator flux and torque errors to generate the stator voltage and phase angle references for controlling the induction motor by using constant voltage per frequency control method. Instead of hysteresis comparators and optimum switching table, the PI controllers and space vector modulation technique are used to reduce torque and stator-flux ripples and achieve constant switching frequency. Moreover, the coordinate transformations are not required. To implement this control method, a full-order observer is used to estimate stator flux and overcome the problems from drift and saturation in using pure integrator. The feedback gains are designed by simple manner to improve the convergence of stator flux estimation, especially in low speed range. Furthermore, the necessary conditions to maintain the stability for feedback gain design are introduced. The simulation and experimental results show accurate and stable operation of the introduced estimator and good dynamic response of the proposed control method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Electroencephalography signatures of attention-deficit/hyperactivity disorder: clinical utility.
Alba, Guzmán; Pereda, Ernesto; Mañas, Soledad; Méndez, Leopoldo D; González, Almudena; González, Julián J
2015-01-01
The techniques and the most important results on the use of electroencephalography (EEG) to extract different measures are reviewed in this work, which can be clinically useful to study subjects with attention-deficit/hyperactivity disorder (ADHD). First, we discuss briefly and in simple terms the EEG analysis and processing techniques most used in the context of ADHD. We review techniques that both analyze individual EEG channels (univariate measures) and study the statistical interdependence between different EEG channels (multivariate measures), the so-called functional brain connectivity. Among the former ones, we review the classical indices of absolute and relative spectral power and estimations of the complexity of the channels, such as the approximate entropy and the Lempel-Ziv complexity. Among the latter ones, we focus on the magnitude square coherence and on different measures based on the concept of generalized synchronization and its estimation in the state space. Second, from a historical point of view, we present the most important results achieved with these techniques and their clinical utility (sensitivity, specificity, and accuracy) to diagnose ADHD. Finally, we propose future research lines based on these results.
NASA Technical Reports Server (NTRS)
Reddy C. J.
1998-01-01
Model Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FENI/MoM technique is used to form an integro-partial- differential equation to compute the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded in a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/ MoM technique. Using the rational function approximation, the electric field is obtained over a frequency range. Using the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, probe-fed coaxial cavity and cavity-backed microstrip patch antennas are presented. Good agreement between MBPE and the solutions over individual frequencies is observed.
Weighted least squares techniques for improved received signal strength based localization.
Tarrío, Paula; Bernardos, Ana M; Casar, José R
2011-01-01
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization
Tarrío, Paula; Bernardos, Ana M.; Casar, José R.
2011-01-01
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling. PMID:22164092
Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks
NASA Astrophysics Data System (ADS)
Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min
2015-10-01
Vehicle positioning has been subjected to extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique by using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information such as speed, lane change, driver's condition, etc., through optical wireless links of neighboring vehicles. Thus, the target vehicle position that is too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate target vehicle position from only two image points of target vehicles using stereo vision. For this, we use rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than that of the computer-vision method.
Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation
NASA Astrophysics Data System (ADS)
Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep
2011-05-01
This paper presents an efficient hardware realization of multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on Xilinx Virtex™-4 XC4VLX60 field programmable gate arrays (FPGA) device in a modular approach which simplifies and eases hardware update, and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations which indicate 2 dB improvement in terms of SNR compared to LS estimation. Moreover complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good solution for channel estimation technique.
Carmena, Jose M.
2016-01-01
Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820
NASA Astrophysics Data System (ADS)
Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato
2017-12-01
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses for 49 explosions at Sakurajima Volcano, Japan. Six infrasound stations deployed from 12 to 20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D finite-difference time-domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and an excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate flow parameters are known, this technique could be applied broadly to enable near real-time calculation of eruption mass flow rates and total masses, critical input parameters for volcanic eruption modeling and monitoring that are not currently available.
Limitation of Ground-based Estimates of Solar Irradiance Due to Atmospheric Variations
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Cahalan, Robert F.; Holben, Brent N.
2003-01-01
The uncertainty in ground-based estimates of solar irradiance is quantitatively related to the temporal variability of the atmosphere's optical thickness. The upper and lower bounds of the accuracy of estimates using the Langley plot technique are proportional to the standard deviation of aerosol optical thickness (approximately ±13σ(Δτ)). The estimates of spectral solar irradiance (SSI) in two Cimel sun photometer channels from the Mauna Loa site of AERONET are compared with satellite observations from SOLSTICE (Solar Stellar Irradiance Comparison Experiment) on UARS (Upper Atmosphere Research Satellite) for almost two years of data. The true solar variations related to the 27-day solar rotation cycle observed from SOLSTICE are about 0.15% in the two sun photometer channels. The variability in the ground-based estimates is statistically one order of magnitude larger. Even though about 30% of these estimates from all Level 2.0 Cimel data fall within the 0.4 to ~0.5% variation level, ground-based estimates are not able to capture the 27-day solar variation observed from SOLSTICE.
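For readers unfamiliar with the Langley plot, the sketch below (all numbers synthetic) shows the basic regression: the log of the ground signal is linear in airmass, the slope gives the optical depth, and the intercept extrapolates to the top-of-atmosphere signal, which is why day-to-day scatter in optical depth maps directly into the irradiance uncertainty quoted above.

```python
import numpy as np

# Synthetic Langley data (all values assumed for illustration).
m = np.linspace(2.0, 6.0, 25)          # relative airmass during the morning
tau_true, lnV0_true = 0.12, 1.8        # optical depth and TOA log-signal
rng = np.random.default_rng(1)
lnV = lnV0_true - tau_true * m + rng.normal(0.0, 0.005, m.size)

# Beer-Lambert: ln V = ln V0 - tau * m, so a linear fit gives both terms.
slope, intercept = np.polyfit(m, lnV, 1)
V0 = np.exp(intercept)                  # extrapolated extraterrestrial signal
print(f"tau ~ {-slope:.3f}, V0 ~ {V0:.3f}")
# Variability of tau between calibration days propagates into sigma(ln V0),
# which is why the error bound above scales with sigma(delta tau).
```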
Taking Stock of Circumboreal Forest Carbon With Ground Measurements, Airborne and Spaceborne LiDAR
NASA Technical Reports Server (NTRS)
Neigh, Christopher S. R.; Nelson, Ross F.; Ranson, K. Jon; Margolis, Hank A.; Montesano, Paul M.; Sun, Guoqing; Kharuk, Viacheslav; Naesset, Erik; Wulder, Michael A.; Andersen, Hans-Erik
2013-01-01
The boreal forest accounts for one-third of global forests, but remains largely inaccessible to ground-based measurements and monitoring. It contains large quantities of carbon in its vegetation and soils, and research suggests that it will be subject to increasingly severe climate-driven disturbance. We employ a suite of ground-, airborne- and space-based measurement techniques to derive the first satellite LiDAR-based estimates of aboveground carbon for the entire circumboreal forest biome. Incorporating these inventory techniques with uncertainty analysis, we estimate total aboveground carbon of 38 ± 3.1 Pg. This boreal forest carbon is mostly concentrated from 50° to 55°N in eastern Canada and from 55° to 60°N in eastern Eurasia. Both of these regions are expected to warm by more than 3 °C by 2100, and monitoring the effects of warming on these stocks is important to understanding the biome's future carbon balance. Our maps establish a baseline for future quantification of circumboreal carbon, and the described technique should provide a robust method for future monitoring of the spatial and temporal changes of the aboveground carbon content.
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9 km² Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large-scale variations in snow depth, while the small-scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54-65% of the observed variance in the depth measurements. The tree-based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree-based modeled depths to produce a combined depth model. The combined depth estimates explained 60-85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow-covered area was determined from high-resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
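A minimal sketch of the two-stage idea, using synthetic survey data: a regression tree captures the large-scale terrain dependence, and a Gaussian process regression (used here as a stand-in for kriging) interpolates the spatial residuals. Variable names and magnitudes are illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
n = 300
xy = rng.uniform(0.0, 1000.0, (n, 2))                  # survey coordinates (m)
X = np.c_[rng.uniform(2500.0, 4000.0, n),              # elevation (m)
          rng.uniform(0.0, 40.0, n),                   # slope (deg)
          rng.uniform(0.0, 300.0, n)]                  # net solar radiation
depth = (0.002 * X[:, 0] - 0.03 * X[:, 1]              # terrain signal
         + 0.0005 * xy[:, 0]                           # a spatial trend
         + rng.normal(0.0, 0.3, n))                    # noise (all synthetic)

tree = DecisionTreeRegressor(max_depth=4).fit(X, depth)   # large-scale model
resid = depth - tree.predict(X)

# Kriging stand-in: Gaussian process regression on the spatial residuals.
gp = GaussianProcessRegressor(kernel=RBF(200.0) + WhiteKernel(0.05)).fit(xy, resid)

combined = tree.predict(X) + gp.predict(xy)            # combined depth model
```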
Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira
2015-09-17
In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of contractile properties of MUs and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image processing-based approaches may be useful in S-EMG analysis for extracting different physiological parameters from multichannel S-EMG signals. Other image processing-based methods could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV of individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies.
Breast surface estimation for radar-based breast imaging systems.
Williams, Trevor C; Sill, Jeff M; Fear, Elise C
2008-06-01
Radar-based microwave breast-imaging techniques typically require the antennas to be placed at a certain distance from or on the breast surface. This requires prior knowledge of the breast location, shape, and size. The method proposed in this paper for obtaining this information is based on a modified tissue sensing adaptive radar algorithm. First, a breast surface detection scan is performed. Data from this scan are used to localize the breast by creating an estimate of the breast surface. If required, the antennas may then be placed at specified distances from the breast surface for a second tumor-sensing scan. This paper introduces the breast surface estimation and antenna placement algorithms. Surface estimation and antenna placement results are demonstrated on three-dimensional breast models derived from magnetic resonance images.
Multistage variable probability forest volume inventory [the Defiance Unit of the Navajo Nation]
NASA Technical Reports Server (NTRS)
Anderson, J. E. (Principal Investigator)
1979-01-01
An inventory scheme based on the use of computer-processed LANDSAT MSS data was developed. Output from the inventory scheme provides an estimate of the standing net saw timber volume of a major timber species on a selected forested area of the Navajo Nation. Such estimates are based on the values of parameters currently used for scaled sawlog conversion to mill output. The multistage variable probability sampling appears capable of producing estimates which compare favorably with those produced using conventional techniques. In addition, the reduction in time, manpower, and overall costs makes the approach suitable for numerous applications.
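The abstract does not give the estimator itself; a common choice for variable probability (PPS-with-replacement) designs is the Hansen-Hurwitz estimator, sketched below with invented size measures standing in for LANDSAT-predicted volumes.

```python
import numpy as np

rng = np.random.default_rng(3)
# Stage-one units with LANDSAT-style size measures (invented numbers):
size = rng.uniform(50.0, 500.0, 200)       # predicted volume per unit
p = size / size.sum()                      # PPS selection probabilities
true_vol = size * rng.normal(1.0, 0.15, size.size)  # ground-truth volumes

n = 12
idx = rng.choice(size.size, size=n, p=p)   # PPS sample with replacement

# Hansen-Hurwitz estimator of the total for PPS-with-replacement designs:
v_hat = np.mean(true_vol[idx] / p[idx])
print(f"estimated total volume: {v_hat:,.0f} (true {true_vol.sum():,.0f})")
```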
Handwritten document age classification based on handwriting styles
NASA Astrophysics Data System (ADS)
Ramaiah, Chetan; Kumar, Gaurav; Govindaraju, Venu
2012-01-01
Handwriting styles change constantly over time. We approach the novel problem of estimating the approximate age of historical handwritten documents from handwriting style. Such a system has many applications in handwritten document processing engines, where specialized processing techniques can be applied based on the estimated age of the document. We propose to learn a distribution over styles across centuries using topic models and to apply a classifier over the learned weights to estimate the approximate age of the documents. We present a comparison of different distance metrics, such as Euclidean distance and Hellinger distance, within this application.
Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party
NASA Astrophysics Data System (ADS)
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed an intermediary to help in the computation.
Williams, Larry J; O'Boyle, Ernest H
2015-09-01
A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found that the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and that the criticisms made by Richardson et al. are overstated.
Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R.
2012-01-01
This study presents and validates a time-frequency technique for measuring two-dimensional multijoint arm stiffness throughout a single planar movement as well as during static posture. It is proposed as an alternative to current regression-based methods, which require numerous repetitions to obtain average stiffness over a small segment of the hand trajectory. The method is based on analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable to two well-known regression techniques. We also test how the technique can identify the viscoelastic component of nonlinear and higher than second-order systems with a nonparametric approach. The technique proposed here is highly robust to noise and can be used easily for both postural and movement tasks. Estimates of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases. PMID:22448233
Prediction of field emitter cathode lifetime based on measurement of I–V curves
NASA Astrophysics Data System (ADS)
Bormashov, V. S.; Nikolski, K. N.; Baturin, A. S.; Sheshin, E. P.
2003-06-01
A technique is presented that allows prediction of field emitter cathode lifetime without long-term direct measurements of cathode parameter stability. The technique is based on periodic measurements of the cathode's I–V characteristics. Moreover, it allows a post-experiment optimization of the choice of feedback system to provide stable operation over long periods. The proposed technique was applied to study the emission properties of reticulated vitreous carbon (RVC) and thermo-enlarged graphite (TEG). For the given cathodes, the characteristic time of cathode destruction was estimated.
Accurate low-cost methods for performance evaluation of cache memory systems
NASA Technical Reports Server (NTRS)
Laha, Subhasis; Patel, Janak H.; Iyer, Ravishankar K.
1988-01-01
Methods of simulation based on statistical techniques are proposed to decrease the need for large trace measurements and to predict true program behavior. Sampling techniques are applied while the address trace is collected from a workload, drastically reducing the space and time needed to collect the trace. Simulation techniques are developed to use the sampled data not only to predict the mean miss rate of the cache, but also to provide an empirical estimate of its actual distribution. Finally, a concept of a primed cache is introduced to simulate large caches by the sampling-based method.
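A toy version of the sampled-trace idea, with a synthetic address stream and a direct-mapped cache: each sampled chunk yields its own miss-rate observation, so the set of chunks gives an empirical distribution rather than just a mean. Note that each chunk starts with a cold cache, which is the bias the primed-cache concept is meant to remove.

```python
import numpy as np

def miss_rate(addresses, n_sets, block=16):
    """Direct-mapped cache simulated over an address stream."""
    tags = [-1] * n_sets
    misses = 0
    for a in addresses:
        blk = a // block
        s = blk % n_sets
        if tags[s] != blk:
            tags[s] = blk
            misses += 1
    return misses / len(addresses)

rng = np.random.default_rng(4)
trace = np.abs(rng.normal(0, 4096, 200_000)).astype(int)  # synthetic trace

full = miss_rate(trace, 256)                   # full-trace reference

# Sampled trace: contiguous chunks; each chunk is one miss-rate sample,
# so the chunks give a distribution estimate, not just a mean.
chunks = [miss_rate(trace[i:i + 2000], 256)
          for i in range(0, trace.size, 20_000)]
print(full, np.mean(chunks), np.std(chunks))
```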
Head movement compensation in real-time magnetoencephalographic recordings.
Little, Graham; Boe, Shaun; Bardouille, Timothy
2014-01-01
Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real-time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source level in real-time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps:
• Data acquisition
• Head position estimation
• Source localization
• Real-time source estimation
This work explains the technical details and validates each of these steps.
NASA Astrophysics Data System (ADS)
Eldardiry, H. A.; Habib, E. H.
2014-12-01
Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in an operational environment, in contrast to rain gauges. The floods identified through flash flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) rainfall estimation errors, (b) streamflow prediction errors due to model structural issues, and (c) errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain average recurrence intervals (ARI) or annual exceedance probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) are maturely developed and have been widely used over the last several decades. More recently, there has been growing interest in the research community in exploring the use of radar-based rainfall products for developing PFEs and understanding the associated uncertainties. This study uses radar-based multi-sensor precipitation estimates (MPE) for 11 years to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to hydrologic extreme rainfall data based on annual maximum series (AMS). Among the estimation problems that may arise from fitting GEV distributions at each radar pixel are large variance and seriously biased quantile estimators. Hence, a regional frequency analysis (RFA) approach is applied. The RFA involves the use of data from different pixels surrounding each pixel within a defined homogeneous region. In this study, the region-of-influence approach along with the index flood technique is used in the RFA. A bootstrap procedure is carried out to account for the uncertainty in the distribution parameters and to construct 90% confidence intervals (i.e., 5% and 95% confidence limits) on AMS-based precipitation frequency curves.
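As a minimal illustration of the AMS/GEV/bootstrap chain for one pixel (all values synthetic), the sketch below fits a GEV to 11 annual maxima and bootstraps a 90% confidence interval on the 100-year quantile; the instability of a fit to so few points is exactly what motivates the regional frequency analysis.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)
# 11 years of synthetic annual maximum rainfall (mm) for one pixel:
ams = genextreme.rvs(-0.1, loc=60.0, scale=15.0, size=11, random_state=rng)

def pfe(sample, ari=100):
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1.0 - 1.0 / ari, c, loc=loc, scale=scale)

point = pfe(ams)

# Bootstrap the 90% confidence interval (5% and 95% limits):
boot = [pfe(rng.choice(ams, ams.size, replace=True)) for _ in range(1000)]
lo, hi = np.percentile(boot, [5, 95])
print(f"100-yr rainfall: {point:.1f} mm (90% CI {lo:.1f}-{hi:.1f})")
```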
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space, based on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is then presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin hypercube sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful as the parameter dimension grows: adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated for linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE either to narrow the focus to converged values within a parameter range or to expand the range in the appropriate direction to track parameters outside the current range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates, using a simple moving-average window to filter out noise. The system can be tuned to match desired performance goals by adjusting parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
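A small sketch of the LHS sampling step using scipy's quasi-Monte Carlo module, with an invented three-parameter space; it also shows why the model count need not grow with dimension.

```python
import numpy as np
from scipy.stats import qmc

# Invented 3-D parameter space (e.g., mass, damping, stiffness bounds):
lower = np.array([0.5, 0.01, 100.0])
upper = np.array([2.0, 0.20, 500.0])

sampler = qmc.LatinHypercube(d=3, seed=7)
unit = sampler.random(n=10)              # 10 stratified samples in [0,1)^3
params = qmc.scale(unit, lower, upper)   # one filter model per row

# Unlike a grid, dimension growth costs nothing extra:
# qmc.LatinHypercube(d=4, seed=7).random(n=10) still yields 10 models.
```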
NASA Astrophysics Data System (ADS)
Tran, H.; Mansfield, M. L.; Lyman, S. N.; O'Neil, T.; Jones, C. P.
2015-12-01
Emissions from produced-water treatment ponds are poorly characterized sources in oil and gas emission inventories that play a critical role in studying elevated winter ozone events in the Uintah Basin, Utah, U.S. Information gaps include unquantified amounts and compositions of the gases emitted from these facilities. The emitted gases are mostly volatile organic compounds (VOCs), which, besides nitrogen oxides (NOx), are the major precursors for ozone formation in the near-surface layer. Field measurement campaigns using the flux-chamber technique have been performed to measure VOC emissions from a limited number of produced-water ponds in the Uintah Basin of eastern Utah. Although the flux chamber provides accurate measurements at the point of sampling, it covers just a limited area of a pond and is prone to altering environmental conditions (e.g., temperature, pressure). This raises the need to validate flux-chamber measurements. In this study, we apply an inverse-dispersion modeling technique with evacuated canister sampling to validate the flux-chamber measurements. This modeling technique applies an initial, arbitrary emission rate to estimate pollutant concentrations at pre-defined receptors, and adjusts the emission rate until the estimated concentrations approximate the measured concentrations at the receptors. The derived emission rates are then compared with flux-chamber measurements and the differences are analyzed. Additionally, we investigate the applicability of the WATER9 wastewater emission model for estimating VOC emissions from produced-water ponds in the Uintah Basin. WATER9 estimates the emission of each gas based on the properties of the gas, its concentration in the wastewater, and the characteristics of the influent and treatment units. Results of VOC emission estimates using the inverse-dispersion and WATER9 modeling techniques will be reported.
NASA Astrophysics Data System (ADS)
Yamazaki, Takaharu; Futai, Kazuma; Tomita, Tetsuya; Sato, Yoshinobu; Yoshikawa, Hideki; Tamura, Shinichi; Sugamoto, Kazuomi
2011-03-01
To achieve 3D kinematic analysis of total knee arthroplasty (TKA), 2D/3D registration techniques, which use X-ray fluoroscopic images and a computer-aided design (CAD) model of the knee implant, have attracted attention in recent years. These techniques can provide information regarding the movement of the radiopaque femoral and tibial components but not of the radiolucent polyethylene insert, because the insert silhouette does not appear clearly on the X-ray image. It has therefore been difficult to obtain 3D kinematics of the polyethylene insert, particularly a mobile-bearing insert that moves on the tibial component. This study presents a technique for 3D kinematic analysis of the mobile-bearing insert in TKA using X-ray fluoroscopy, assesses its accuracy, and finally reports clinical applications. For 3D pose estimation of the mobile-bearing insert, tantalum beads and a CAD model that includes the beads are utilized, and the 3D pose of the insert model is estimated using a feature-based 2D/3D registration technique. Experiments including a computer simulation test were performed to validate the accuracy of the present technique. The results showed that the pose estimation accuracy was sufficient for analyzing mobile-bearing TKA kinematics (RMS error: about 1.0 mm and 1.0 degree). In the clinical application, seven patients with mobile-bearing TKA were studied and analyzed during deep knee bending motion. The present technique enables us to better understand mobile-bearing TKA kinematics, and this type of evaluation should be helpful for improving implant design and optimizing TKA surgical techniques.
Jung, R.E.; Royle, J. Andrew; Sauer, J.R.; Addison, C.; Rau, R.D.; Shirk, J.L.; Whissel, J.C.
2005-01-01
Stream salamanders in the family Plethodontidae constitute a large biomass in and near headwater streams in the eastern United States and are promising indicators of stream ecosystem health. Many studies of stream salamanders have relied on population indices based on counts rather than population estimates based on techniques such as capture-recapture and removal. Application of estimation procedures allows the calculation of detection probabilities (the proportion of total animals present that are detected during a survey) and their associated sampling error, and may be essential for determining salamander population sizes and trends. In 1999, we conducted capture-recapture and removal population estimation at six streams in Shenandoah National Park, Virginia, USA. Removal sampling appeared more efficient, and detection probabilities from removal data were higher than those from capture-recapture. During 2001-2004, we used removal estimation at eight streams in the park to assess the usefulness of this technique for long-term monitoring of stream salamanders. Removal detection probabilities ranged from 0.39 to 0.96 for Desmognathus, 0.27 to 0.89 for Eurycea, and 0.27 to 0.75 for northern spring (Gyrinophilus porphyriticus) and northern red (Pseudotriton ruber) salamanders across stream transects. Detection probabilities did not differ across years for Desmognathus and Eurycea, but did differ among streams for Desmognathus. Population estimates of Desmognathus decreased between 2001-2002 and 2003-2004, which may be related to changes in stream flow conditions. Removal-based procedures may be a feasible approach for population estimation of salamanders, but field methods should be designed to meet the assumptions of the sampling procedures. New approaches to estimating stream salamander populations are discussed.
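For reference, the classical two-pass removal estimator (Zippin) that underlies this kind of analysis is sketched below with hypothetical catch counts; it assumes a closed population and equal capture probability on both passes.

```python
def removal_estimate(n1, n2):
    """Two-pass removal estimator (Zippin): assumes a closed population
    and equal capture probability on both passes."""
    if n1 <= n2:
        raise ValueError("estimator undefined when second-pass catch >= first")
    p = (n1 - n2) / n1                 # per-pass detection probability
    N = n1 ** 2 / (n1 - n2)            # estimated population size
    return N, p

# e.g., 40 Desmognathus on pass 1, 15 on pass 2 (hypothetical counts):
N_hat, p_hat = removal_estimate(40, 15)
print(f"N ~ {N_hat:.0f}, detection probability ~ {p_hat:.2f}")
```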
NASA Technical Reports Server (NTRS)
Klich, P. J.; Macconochie, I. O.
1979-01-01
A study of an array of advanced earth-to-orbit space transportation systems, with a focus on mass properties and technology requirements, is presented. Methods of estimating the weights of these vehicles differ from those used for commercial and military aircraft; the new techniques, which emphasize winged horizontal- and vertical-takeoff advanced systems, are described and utilize the Space Shuttle subsystem database for the weight-estimating equations. The weight equations require information on the mission profile, the structural materials, the thermal protection system, and the ascent propulsion system, allowing for the type of construction and various propellant tank shapes. The overall system weights are calculated using this information and incorporated into the Systems Engineering Mass Properties Computer Program.
Probabilistic location estimation of acoustic emission sources in isotropic plates with one sensor
NASA Astrophysics Data System (ADS)
Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
This paper presents a probabilistic acoustic emission (AE) source localization algorithm for isotropic plate structures. The proposed algorithm requires only one sensor and uniformly monitors the entire area of such plates without any blind zones. In addition, it takes a probabilistic approach and quantifies localization uncertainties. The algorithm combines a modal acoustic emission (MAE) and a reflection-based technique to obtain information pertaining to the location of AE sources. To estimate confidence contours for the location of sources, uncertainties are quantified and propagated through the two techniques. The approach was validated using standard pencil lead break (PLB) tests on an Aluminum plate. The results demonstrate that the proposed source localization algorithm successfully estimates confidence contours for the location of AE sources.
A new technique for fire risk estimation in the wildland urban interface
NASA Astrophysics Data System (ADS)
Dasgupta, S.; Qu, J. J.; Hao, X.
A novel technique based on the physical variable of pre-ignition energy is proposed for assessing fire risk in the Grassland-Urban-Interface. The physical basis lends the index meaning, a site- and season-independent applicability, and possibilities for computing spread rates and ignition probabilities, features that contemporary fire risk indices usually lack. The method requires estimates of grass moisture content and temperature. A constrained radiative-transfer inversion scheme on MODIS NIR-SWIR reflectances, which reduces solution ambiguity, is used for grass moisture retrieval, while MODIS land surface temperature and emissivity products are used for retrieving grass temperature. Subpixel urban contamination of the MODIS reflective and thermal signals over a Grassland-Urban-Interface pixel is corrected using periodic estimates of urban influence from high spatial resolution ASTER data.
NASA Astrophysics Data System (ADS)
Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen
2015-01-01
Over the past decade, location-based services (LBS) have found wide application in indoor environments, such as large shopping malls, hospitals, warehouses, and airports. Current technologies provide a wide choice of available solutions, including radio-frequency identification (RFID), ultra-wideband (UWB), wireless local area network (WLAN) and Bluetooth. With the rapid development of light-emitting-diode (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has better immunity against multipath effects than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and for positioning, realizing relatively lower infrastructure cost. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photodiodes as receivers. The estimation algorithm is based on received-signal-strength (RSS) information collected from the photodiodes and a trilateration technique. By appropriately making use of the characteristics of receiver movements and the properties of trilateration, estimation of three-dimensional (3-D) coordinates is attained. A filtering technique is applied to give the algorithm tracking capability, and higher accuracy is reached compared to the raw estimates. A Gaussian mixture sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of a Gaussian mixture model (GMM). The number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
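A bare-bones sketch of the RSS-plus-trilateration core (before any filtering), with invented LED positions and path-loss constants: RSS readings are converted to distances through a log-distance model, and the position follows from a linearized least-squares trilateration. The anchor heights are deliberately unequal; with perfectly coplanar LEDs the linearized system loses the vertical coordinate.

```python
import numpy as np

# Hypothetical ceiling LEDs (m); heights differ slightly so the
# linearized system keeps the vertical coordinate observable.
anchors = np.array([[0.0, 0.0, 3.0],
                    [4.0, 0.0, 3.2],
                    [0.0, 4.0, 2.8],
                    [4.0, 4.0, 2.9]])
P0, n_pl = -30.0, 2.0            # assumed RSS at 1 m and path-loss exponent

def rss_to_dist(rss):
    return 10.0 ** ((P0 - rss) / (10.0 * n_pl))

def trilaterate(anchors, d):
    # Subtract the first sphere equation from the others to linearize,
    # then solve the resulting system in least squares.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

truth = np.array([1.2, 2.5, 1.0])
d_true = np.linalg.norm(anchors - truth, axis=1)
rss = P0 - 10.0 * n_pl * np.log10(d_true)      # noiseless RSS for the sketch
print(trilaterate(anchors, rss_to_dist(rss)))  # ~ [1.2, 2.5, 1.0]
```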
Estimating Driving Performance Based on EEG Spectrum Analysis
NASA Astrophysics Data System (ADS)
Lin, Chin-Teng; Wu, Ruei-Cheng; Jung, Tzyy-Ping; Liang, Sheng-Fu; Huang, Teng-Yi
2005-12-01
The growing number of traffic accidents in recent years has become a serious concern to society. Accidents caused by drivers' drowsiness behind the steering wheel have a high fatality rate because of the marked decline in perception, recognition, and vehicle control abilities while sleepy. Preventing such accidents is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain their maximum performance. This paper proposes an EEG-based drowsiness estimation system that combines electroencephalogram (EEG) log sub-band power spectra, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator. Our results demonstrate that it is feasible to accurately and quantitatively estimate driving performance, expressed as the deviation between the center of the vehicle and the center of the cruising lane, in a realistic driving simulator.
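The estimation chain reduces to a small pipeline; the sketch below wires log band-power features through PCA into a linear regression on lane deviation, with purely synthetic data standing in for EEG epochs.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
log_power = rng.normal(0.0, 1.0, (30, 40))   # 30 epochs x 40 log sub-band powers
# Lane deviation driven by two (arbitrary) bands plus noise:
deviation = log_power[:, [3, 7]].sum(axis=1) + rng.normal(0.0, 0.3, 30)

model = make_pipeline(PCA(n_components=5), LinearRegression())
model.fit(log_power, deviation)
print(model.score(log_power, deviation))     # in-sample fit of the chain
```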
NASA Astrophysics Data System (ADS)
Poulain, Pierre-Marie; Luther, Douglas S.; Patzert, William C.
1992-11-01
Two techniques have been developed for estimating statistics of inertial oscillations from satellite-tracked drifters. These techniques overcome the difficulties inherent in estimating such statistics from data dependent upon space coordinates that are a function of time. Application of these techniques to tropical surface drifter data collected during the NORPAX, EPOCS, and TOGA programs reveals a latitude-dependent, statistically significant "blue shift" of inertial wave frequency. The latitudinal dependence of the blue shift is similar to predictions based on "global" internal wave spectral models, with a superposition of frequency shifting due to modification of the effective local inertial frequency by the presence of strongly sheared zonal mean currents within 12° of the equator.
Two biased estimation techniques in linear regression: Application to aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav
1988-01-01
Several ways of detecting and assessing collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit the damaging effect of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
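A compact numerical sketch of both steps described above, on synthetic collinear data: the eigensystem of X^T X flags the collinearity (huge condition number), and principal components regression keeps only the dominant direction to stabilize the estimates.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200
x1 = rng.normal(0.0, 1.0, n)
x2 = x1 + rng.normal(0.0, 0.01, n)        # nearly collinear regressor
X = np.c_[x1, x2]
y = x1 + x2 + rng.normal(0.0, 0.1, n)

# Eigensystem analysis: a huge condition number flags collinearity.
w, V = np.linalg.eigh(X.T @ X)            # eigenvalues in ascending order
print("condition number:", w.max() / w.min())

# Principal components regression: keep only the dominant direction(s).
k = 1
Z = X @ V[:, -k:]                          # scores on retained components
theta = np.linalg.lstsq(Z, y, rcond=None)[0]
beta_pcr = V[:, -k:] @ theta               # biased but stable estimates
print("PCR estimates:", beta_pcr)          # ~ [1, 1]
```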
Cache-based error recovery for shared memory multiprocessor systems
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1989-01-01
A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
NASA Astrophysics Data System (ADS)
Morton, F. I.
1983-10-01
Reliable estimates of areal evapotranspiration are essential to significant improvements in the science and practice of hydrology. Direct measurements, such as those provided by lysimeters, eddy flux instrumentation or Bowen-ratio instrumentation, give point values, require constant attendance by skilled personnel and are based on unverified assumptions. A critical review of the methods used for estimating areal evapotranspiration indicates that the conventional conceptual techniques, such as those used in current watershed models, are based on assumptions that are completely divorced from reality, and that causal techniques based on processes and interactions in the soil-plant-atmosphere system are not likely to prove useful for another generation. However, the complementary relationship can do much to fill the gap until causal techniques become practicable, because it provides the basis for models that permit areal evapotranspiration to be estimated from its effects on the routine climatological observations needed to estimate potential evapotranspiration. Such models have a realistic conceptual and empirical basis, bypass the complexity of the soil-plant system and require no local calibration of coefficients. They are therefore falsifiable (i.e. can be tested rigorously), so that errors in the associated assumptions and relationships can be detected and corrected by progressive testing over an ever-widening range of environments. Such a methodology uses the entire world as a laboratory and requires that a correction made to obtain agreement between model and river-basin water budget estimates in one environment be applicable without modification in all other environments. The most recent version of the complementary relationship areal evapotranspiration (CRAE) models is formulated and documented. The reliability of the independent operational estimates of areal evapotranspiration is tested with comparable long-term water-budget estimates for 143 river basins in North America, Africa, Ireland, Australia and New Zealand. The practicality and potential impact of such estimates are demonstrated with examples which show how their availability can revitalize the science and practice of hydrology by providing a reliable basis for detailed water-balance studies; for further research on the development of causal models; for hydrological, agricultural and fire hazard forecasts; for detecting the development of errors in hydrometeorological records; for detecting and monitoring the effects of land-use changes; for explaining hydrologic anomalies; and for other better known applications. It is suggested that the collection of the required climatological data by hydrometric agencies could be justified on the grounds that the agencies would gain a technique for quality control and the users would gain a significant expansion in the information content of the hydrometric data, all at minimal additional expense.
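The complementary relationship at the heart of CRAE-type models is the Bouchet-type identity, written below in its simplest symmetric form (Morton's operational formulation adds specific radiation- and temperature-based estimates of each term):

```latex
E_{\mathrm{pot}} + E_{\mathrm{areal}} = 2\,E_{\mathrm{wet}}
\qquad\Longrightarrow\qquad
E_{\mathrm{areal}} = 2\,E_{\mathrm{wet}} - E_{\mathrm{pot}}
```

In words: as a landscape dries, the rise in potential evapotranspiration itself measures the fall in areal evapotranspiration, which is why routine climatological observations suffice.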
NASA Astrophysics Data System (ADS)
Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.
2014-04-01
The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided in two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.
Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data
NASA Technical Reports Server (NTRS)
Sree, David
1992-01-01
Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra, from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of the correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that the spectral estimates can be extended to frequencies up to about five times the mean sampling rate.
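A minimal sketch of the correlation-based slotting technique (in Python rather than the FORTRAN of the study): pair products of the randomly timed samples are binned by their time lag, giving an autocorrelation estimate whose FFT yields the power spectral density. Sampling times, signal, and slot width are all synthetic.

```python
import numpy as np

def slotted_autocorr(t, u, max_lag, dt):
    """Correlation-based slotting: bin pair products by time lag."""
    nbins = int(max_lag / dt)
    num = np.zeros(nbins)
    cnt = np.zeros(nbins)
    u = u - u.mean()
    for i in range(t.size):
        lag = t[i:] - t[i]
        k = (lag / dt).astype(int)        # slot index for each pair
        ok = k < nbins
        np.add.at(num, k[ok], u[i] * u[i:][ok])
        np.add.at(cnt, k[ok], 1)
    return num / np.maximum(cnt, 1) / u.var()

# Poisson-sampled synthetic velocity signal (mean rate ~1 kHz):
rng = np.random.default_rng(10)
t = np.cumsum(rng.exponential(0.001, 5000))
u = np.sin(2 * np.pi * 50.0 * t) + rng.normal(0.0, 0.5, t.size)
rho = slotted_autocorr(t, u, max_lag=0.05, dt=0.001)
# rho oscillates at 50 Hz; a PSD estimate follows from an FFT of rho.
```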
Kirtadze, Irma; Otiashvili, David; Tabatadze, Mzia; Vardanashvili, Irina; Sturua, Lela; Zabransky, Tomas; Anthony, James C
2018-06-01
Validity of responses in surveys is an important research concern, especially in emerging market economies where surveys in the general population are a novelty and the level of social control is traditionally higher. The Randomized Response Technique (RRT) can be used as a check on response validity when the study aim is to estimate the population prevalence of drug experiences and other socially sensitive and/or illegal behaviors. The aim here was to apply the RRT and to study potential under-reporting of drug use in a nation-scale, population-based general population survey of alcohol and other drug use. For this first-ever household survey on addictive substances for the country of Georgia, we used multi-stage probability sampling of 18-to-64-year-old household residents of 111 urban and 49 rural areas. During the interviewer-administered assessments, the RRT involved pairing of sensitive and non-sensitive questions about drug experiences. Based upon the standard household self-report survey estimate, an estimated 17.3% [95% confidence interval, CI: 15.5%, 19.1%] of Georgian household residents have tried cannabis. The corresponding RRT estimate was 29.9% [95% CI: 24.9%, 34.9%]. The RRT estimates for other drugs such as heroin also were larger than the standard self-report estimates. We remain unsure about the "true" prevalence of illegal psychotropic drug use in the Republic of Georgia study population, but our RRT results suggest that standard non-RRT approaches might produce under-estimates or, at best, highly conservative lower-end estimates.
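For concreteness, the unrelated-question form of the RRT unmasks prevalence with a simple linear correction; the sketch below is a generic textbook version with hypothetical numbers, not the exact design parameters of the Georgia survey.

```python
def rrt_prevalence(p_yes, p_sensitive, pi_innocuous):
    """Unrelated-question randomized response estimator (generic sketch).

    p_yes        observed proportion of 'yes' answers
    p_sensitive  probability the device routes to the sensitive question
    pi_innocuous known 'yes' prevalence of the non-sensitive question
    """
    return (p_yes - (1 - p_sensitive) * pi_innocuous) / p_sensitive

# e.g., 40% yes overall, 70% routed to the drug question, innocuous item
# with known 25% prevalence (all values hypothetical):
print(rrt_prevalence(0.40, 0.70, 0.25))   # ~0.46
```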
Efficient dense blur map estimation for automatic 2D-to-3D conversion
NASA Astrophysics Data System (ADS)
Vosters, L. P. J.; de Haan, G.
2012-03-01
Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can be reliably estimated only on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single-image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low-latency, line-scanning-based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may produce unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution addresses the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.
Histogram equalization with Bayesian estimation for noise robust speech recognition.
Suh, Youngjoo; Kim, Hoirin
2018-02-01
The histogram equalization approach is an efficient feature normalization technique for noise-robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to overfitting when test data are insufficient. To address this issue, the proposed histogram equalization technique employs Bayesian estimation in the test cumulative distribution function estimation. It was reported in a previous study conducted on the Aurora-4 task that the proposed approach provided substantial performance gains in speech recognition systems based on Gaussian mixture model-hidden Markov model acoustic modeling. In this work, the proposed approach was examined in speech recognition systems with the deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
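Stripped of the Bayesian smoothing that is the paper's contribution, histogram equalization of features is just CDF matching; the sketch below maps noisy test features through their empirical CDF and then through the inverse reference CDF (synthetic distributions throughout).

```python
import numpy as np

def histogram_equalize(test_feat, ref_sorted):
    """Map test values through their empirical CDF, then through the
    inverse reference CDF (plain order-statistics HEQ, no Bayesian prior)."""
    ranks = np.argsort(np.argsort(test_feat))
    u = (ranks + 0.5) / test_feat.size        # empirical test CDF values
    return np.quantile(ref_sorted, u)         # inverse reference CDF

rng = np.random.default_rng(11)
ref = np.sort(rng.normal(0.0, 1.0, 10_000))   # clean training distribution
noisy = rng.normal(2.0, 0.5, 300)             # shifted/compressed test features
equalized = histogram_equalize(noisy, ref)
# equalized now roughly matches the reference mean and variance.
```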
The association of placenta previa and assisted reproductive techniques: a meta-analysis.
Karami, Manoochehr; Jenabi, Ensiyeh; Fereidooni, Bita
2018-07-01
Several epidemiological studies have determined that assisted reproductive techniques (ART) can increase the risk of placenta previa. To date, only one meta-analysis has been performed assessing the relationship between placenta previa and ART. This meta-analysis was conducted to estimate the association between placenta previa and ART in singleton and twin pregnancies. A literature search was performed in the major databases PubMed, Web of Science, and Scopus from the earliest possible year to April 2017. Heterogeneity across studies was explored by the Q-test and I² statistic. Publication bias was assessed using Begg's and Egger's tests. The results were reported as odds ratio (OR) and relative risk (RR) estimates with their 95% confidence intervals (CI) using a random-effects model. The literature search yielded 1529 publications until September 2016, with 1,388,592 participants. The overall estimate of OR was 2.67 (95% CI: 2.01, 3.34) and RR was 3.62 (95% CI: 0.21, 7.03) based on singleton pregnancies. The overall estimate of OR was 1.50 (95% CI: 1.26, 1.74) based on twin pregnancies. Based on the odds ratio estimates from observational studies, ART procedures are a risk factor for placenta previa.
McCrea, C; Neil, W J; Flanigan, J W; Summerfield, A B
1988-08-01
In this study a new modified videosystem, designed for measuring body-image, was evaluated alongside the major size-estimation measure, namely, the visual size-estimation apparatus. The advantages afforded by a videosystem which allows independent adjustment of size and height/width proportions were highlighted, and its validity and reliability were examined, based on estimates made by obese, normal weight, and pregnant groups.
An activity-based methodology for operations cost analysis
NASA Technical Reports Server (NTRS)
Korsmeyer, David; Bilby, Curt; Frizzell, R. A.
1991-01-01
This report describes an activity-based cost estimation method, proposed for the Space Exploration Initiative (SEI), as an alternative to NASA's traditional mass-based cost estimation method. A case study demonstrates how the activity-based cost estimation technique can be used to identify the operations that have a significant impact on costs over the life cycle of the SEI. The case study yielded an operations cost of $101 billion for the 20-year span of the lunar surface operations for the Option 5a program architecture. In addition, the results indicated that the support and training costs for the missions were the greatest contributors to the annual cost estimates. A cost-sensitivity analysis of the cultural and architectural drivers determined that the length of training and the amount of support associated with the ground support personnel for mission activities are the most significant cost contributors.
Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei
2018-05-09
Advances in structural finite element analysis (FEA) and medical imaging have made it possible to investigate the in vivo biomechanics of human organs such as blood vessels, for which organ geometries at the zero-pressure level need to be recovered. Although FEA-based inverse methods are available for zero-pressure geometry estimation, these methods typically require iterative computation, which is time-consuming and may not be suitable for time-sensitive clinical applications. In this study, using machine learning (ML) techniques, we developed an ML model to estimate the zero-pressure geometry of the human thoracic aorta given two pressurized geometries of the same patient at two different blood pressure levels. For the ML model development, an FEA-based method was used to generate a dataset of aorta geometries of 3125 virtual patients. The ML model, which was trained and tested on the dataset, is capable of recovering zero-pressure geometries consistent with those generated by the FEA-based method. Thus, this study demonstrates the feasibility and great potential of using ML techniques as a fast surrogate for FEA-based inverse methods to recover the zero-pressure geometries of human organs.
Sim, K S; Kiani, M A; Nia, M E; Tso, C P
2014-01-01
A new technique based on cubic spline interpolation with Savitzky-Golay noise reduction filtering is designed to estimate the signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to give better results when compared with two existing techniques: nearest-neighbourhood and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with an optimal choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time.
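A loose sketch of the two ingredients (Savitzky-Golay smoothing plus cubic-spline reconstruction) on a synthetic scan line; the published technique may operate differently, e.g. on image autocorrelation rather than this simple residual split, so treat the numbers as illustrative only.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(12)
x = np.arange(512)
clean = np.sin(2 * np.pi * x / 128) + 0.3 * np.sin(2 * np.pi * x / 37)
line = clean + rng.normal(0, 0.2, x.size)      # one noisy synthetic scan line

# Savitzky-Golay noise reduction, then cubic-spline reconstruction
# from a decimated set of the smoothed samples.
smooth = savgol_filter(line, window_length=15, polyorder=3)
spline = CubicSpline(x[::4], smooth[::4])
signal_est = spline(x)

noise = line - signal_est                      # residual treated as noise
snr_db = 10 * np.log10(signal_est.var() / noise.var())
print(f"estimated SNR ~ {snr_db:.1f} dB")
```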
NASA Technical Reports Server (NTRS)
Pifer, Alburt E.; Hiscox, William L.; Cummins, Kenneth L.; Neumann, William T.
1991-01-01
Gated, wideband magnetic direction finders (DFs) were originally designed to measure the bearing of cloud-to-ground lightning relative to the sensor. A recent addition to this device uses proprietary waveform discrimination logic to select return stroke signatures and certain range-dependent features in the waveform to provide an estimate of the range of flashes within 50 km. The enhanced ranging techniques, designed and developed for use in a single-station thunderstorm warning sensor, are discussed. Included are the results of ongoing evaluations being conducted under a variety of meteorological and geographic conditions.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
Comparative assessment of techniques for initial pose estimation using monocular vision
NASA Astrophysics Data System (ADS)
Sharma, Sumant; D`Amico, Simone
2016-06-01
This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three dimensional computer model and a single two dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements or any apriori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier to noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
Power system frequency estimation based on an orthogonal decomposition method
NASA Astrophysics Data System (ADS)
Lee, Chih-Hung; Tsai, Men-Shen
2018-06-01
In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters
NASA Astrophysics Data System (ADS)
Vasumathi, B.; Moorthi, S.
2011-11-01
In digital signal processing, algorithms are very well developed for the estimation of harmonic components. In power electronic applications, an objective like fast response of a system is of primary importance. An effective method for the estimation of instantaneous harmonic components, along with conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage) and it requires only the knowledge of the frequency of the component to be eliminated. A signal processing technique using modified ADALINE algorithm has been proposed for harmonic estimation. The proposed method stays effective as it converges to a minimum error and brings out a finer estimation. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after its estimation. This method can be applied to a wide range of equipment. The validity of the proposed method to estimate and eliminate voltage harmonics is proved with a dc/ac inverter as a simulation example. Then, the results are compared with existing ADALINE algorithm for illustrating its effectiveness.
Radar data smoothing filter study
NASA Technical Reports Server (NTRS)
White, J. V.
1984-01-01
The accuracy of the current Wallops Flight Facility (WFF) data smoothing techniques for a variety of radars and payloads is examined. Alternative data reduction techniques are given and recommendations are made for improving radar data processing at WFF. A data adaptive algorithm, based on Kalman filtering and smoothing techniques, is also developed for estimating payload trajectories above the atmosphere from noisy time varying radar data. This algorithm is tested and verified using radar tracking data from WFF.
Fee, David; Izbekov, Pavel; Kim, Keehoon; ...
2017-10-09
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been appliedmore » to the inversion technique. Furthermore we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan.« less
Waveform inversion of acoustic waves for explosion yield estimation
Kim, K.; Rodgers, A. J.
2016-07-08
We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosionmore » yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.« less
[Considerations when using creatinine as a measure of kidney function].
Drion, I Iefke; Fokkert, M J Marion; Bilo, H J G Henk
2013-01-01
Reported serum creatinine concentrations can sometimes vary considerably, even when the renal function does less so or even not. This variation is partly due to true changes in actual serum concentration, and partly due to interferences in the measurement technique, thus not reflecting a true change in concentration. Increased or decreased endogenous creatinine production, ingested creatinine sources through meat eating or certain creatine formulations, and interference by either browning of chromogenic substances in Jaffe measurement techniques or promotors and inhibitors of enzymatic reaction methods do play a role. Reliable serum creatinine measurements are needed for renal function estimating equations. In screening circumstances and daily practice, chronic kidney disease staging is based on these estimated glomerular filtration rate values. Given the possible influences on reported serum creatinine concentrations, it is important for health care workers to remain critical when interpreting outcomes of renal function estimating equations and to not see every reported result based on an equation as a true reflection of renal function.
Scene-based nonuniformity correction technique for infrared focal-plane arrays.
Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong
2009-04-20
A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors, which can be separated into three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem and both scene values and detector parameters are unavailable. Second, the estimated scene, along with its corresponding observed data obtained by detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect.
Waveform inversion of acoustic waves for explosion yield estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K.; Rodgers, A. J.
We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosionmore » yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fee, David; Izbekov, Pavel; Kim, Keehoon
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been appliedmore » to the inversion technique. Furthermore we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan.« less
Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil
2014-08-01
We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922
Techniques for estimating magnitude and frequency of floods on streams in Indiana
Glatfelter, D.R.
1984-01-01
A rainfall-runoff model was tlsed to synthesize long-term peak data at 11 gaged locations on small streams. Flood-frequency curves developed from the long-term synthetic data were combined with curves based on short-term observed data to provide weighted estimates of flood magnitude and frequency at the rainfall-runoff stations.
Estimating Power System Dynamic States Using Extended Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jaroslaw
2014-10-31
Abstract—The state estimation tools which are currently deployed in power system control rooms are based on a steady state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available and their accuracy is compromised. This paper investigates the application of Extended Kalman Filtering techniques for estimating dynamic states in the state estimation process. The new formulated “dynamic state estimation” includes true system dynamics reflected in differential equations, not like previously proposed “dynamic state estimation” which only considers the time-variant snapshots based on steady state modeling. This newmore » dynamic state estimation using Extended Kalman Filter has been successfully tested on a multi-machine system. Sensitivity studies with respect to noise levels, sampling rates, model errors, and parameter errors are presented as well to illustrate the robust performance of the developed dynamic state estimation process.« less
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
A lot of models have been made for predicting software reliability. The reliability models are restricted to using particular types of methodologies and restricted number of parameters. There are a number of techniques and methodologies that may be used for reliability prediction. There is need to focus on parameters consideration while estimating reliability. The reliability of a system may increase or decreases depending on the selection of different parameters used. Thus there is need to identify factors that heavily affecting the reliability of the system. In present days, reusability is mostly used in the various area of research. Reusability is the basis of Component-Based System (CBS). The cost, time and human skill can be saved using Component-Based Software Engineering (CBSE) concepts. CBSE metrics may be used to assess those techniques which are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities are available to apply soft computing techniques in medicine related problems. Clinical science of medicine using fuzzy-logic, neural network methodology significantly while basic science of medicine using neural-networks-genetic algorithm most frequently and preferably. There is unavoidable interest shown by medical scientists to use the various soft computing methodologies in genetics, physiology, radiology, cardiology and neurology discipline. CBSE boost users to reuse the past and existing software for making new products to provide quality with a saving of time, memory space, and money. This paper focused on assessment of commonly used soft computing technique like Genetic Algorithm (GA), Neural-Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). This paper presents working of soft computing techniques and assessment of soft computing techniques to predict reliability. The parameter considered while estimating and prediction of reliability are also discussed. This study can be used in estimation and prediction of the reliability of various instruments used in the medical system, software engineering, computer engineering and mechanical engineering also. These concepts can be applied to both software and hardware, to predict the reliability using CBSE.
NASA Astrophysics Data System (ADS)
Angel, Erin
Advances in Computed Tomography (CT) technology have led to an increase in the modality's diagnostic capabilities and therefore its utilization, which has in turn led to an increase in radiation exposure to the patient population. As a result, CT imaging currently constitutes approximately half of the collective exposure to ionizing radiation from medical procedures. In order to understand the radiation risk, it is necessary to estimate the radiation doses absorbed by patients undergoing CT imaging. The most widely accepted risk models are based on radiosensitive organ dose as opposed to whole body dose. In this research, radiosensitive organ dose was estimated using Monte Carlo based simulations incorporating detailed multidetector CT (MDCT) scanner models, specific scan protocols, and using patient models based on accurate patient anatomy and representing a range of patient sizes. Organ dose estimates were estimated for clinical MDCT exam protocols which pose a specific concern for radiosensitive organs or regions. These dose estimates include estimation of fetal dose for pregnant patients undergoing abdomen pelvis CT exams or undergoing exams to diagnose pulmonary embolism and venous thromboembolism. Breast and lung dose were estimated for patients undergoing coronary CTA imaging, conventional fixed tube current chest CT, and conventional tube current modulated (TCM) chest CT exams. The correlation of organ dose with patient size was quantified for pregnant patients undergoing abdomen/pelvis exams and for all breast and lung dose estimates presented. Novel dose reduction techniques were developed that incorporate organ location and are specifically designed to reduce close to radiosensitive organs during CT acquisition. A generalizable model was created for simulating conventional and novel attenuation-based TCM algorithms which can be used in simulations estimating organ dose for any patient model. The generalizable model is a significant contribution of this work as it lays the foundation for the future of simulating TCM using Monte Carlo methods. As a result of this research organ dose can be estimated for individual patients undergoing specific conventional MDCT exams. This research also brings understanding to conventional and novel close reduction techniques in CT and their effect on organ dose.
A phase-based stereo vision system-on-a-chip.
Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia
2007-02-01
A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase warping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65megapixels/s, which can be arranged with a customized frame-grabber module to process 211frames/s at a size of 640x480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
Reduced-rank technique for joint channel estimation in TD-SCDMA systems
NASA Astrophysics Data System (ADS)
Kamil Marzook, Ali; Ismail, Alyani; Mohd Ali, Borhanuddin; Sali, Adawati; Khatun, Sabira
2013-02-01
In time division-synchronous code division multiple access systems, increasing the system capacity by exploiting the inserting of the largest number of users in one time slot (TS) requires adding more estimation processes to estimate the joint channel matrix for the whole system. The increase in the number of channel parameters due the increase in the number of users in one TS directly affects the precision of the estimator's performance. This article presents a novel channel estimation with low complexity, which relies on reducing the rank order of the total channel matrix H. The proposed method exploits the rank deficiency of H to reduce the number of parameters that characterise this matrix. The adopted reduced-rank technique is based on truncated singular value decomposition algorithm. The algorithms for reduced-rank joint channel estimation (JCE) are derived and compared against traditional full-rank JCEs: least squares (LS) or Steiner and enhanced (LS or MMSE) algorithms. Simulation results of the normalised mean square error showed the superiority of reduced-rank estimators. In addition, the channel impulse responses founded by reduced-rank estimator for all active users offers considerable performance improvement over the conventional estimator along the channel window length.
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.
2006-01-01
The signal processing aspect of a 2-m wavelength coherent Doppler lidar system under development at NASA Langley Research Center in Virginia is investigated in this paper. The lidar system is named VALIDAR (validation lidar) and its signal processing program estimates and displays various wind parameters in real-time as data acquisition occurs. The goal is to improve the quality of the current estimates such as power, Doppler shift, wind speed, and wind direction, especially in low signal-to-noise-ratio (SNR) regime. A novel Nonlinear Adaptive Doppler Shift Estimation Technique (NADSET) is developed on such behalf and its performance is analyzed using the wind data acquired over a long period of time by VALIDAR. The quality of Doppler shift and power estimations by conventional Fourier-transform-based spectrum estimation methods deteriorates rapidly as SNR decreases. NADSET compensates such deterioration in the quality of wind parameter estimates by adaptively utilizing the statistics of Doppler shift estimate in a strong SNR range and identifying sporadic range bins where good Doppler shift estimates are found. The authenticity of NADSET is established by comparing the trend of wind parameters with and without NADSET applied to the long-period lidar return data.
1986-07-01
Reserves staffing. A comparison with the Navy technique, which requires estimates of mileage and operating hours, was not possible since these data were...Directorate of Engineering and Housing (DEH) vehicle maintenance activities. To make comparisons, manpower requirements data totaling two megabytes of
Peter R. Robichaud
1997-01-01
Geostatistics provides a method to describe the spatial continuity of many natural phenomena. Spatial models are based upon the concept of scaling, kriging and conditional simulation. These techniques were used to describe the spatially-varied surface conditions on timber harvest and burned hillslopes. Geostatistical techniques provided estimates of the ground cover (...
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2011-12-01
Structures in service are often subjected to fatigue loads. Cracks would develop and lead to failure if left unnoticed after a large number of cyclic loadings. Monitoring the process of fatigue crack propagation as well as estimating the remaining useful life of a structure is thus essential to prevent catastrophe while minimizing earlier-than-required replacement. The advent of smart materials such as piezo-impedance transducers (lead zirconate titanate, PZT) has ushered in a new era of structural health monitoring (SHM) based on non-destructive evaluation (NDE). This paper presents a series of investigative studies to evaluate the feasibility of fatigue crack monitoring and estimation of remaining useful life using the electromechanical impedance (EMI) technique employing a PZT transducer. Experimental tests were conducted to study the ability of the EMI technique in monitoring fatigue crack in 1D lab-sized aluminum beams. The experimental results prove that the EMI technique is very sensitive to fatigue crack propagation. A proof-of-concept semi-analytical damage model for fatigue life estimation has been developed by incorporating the linear elastic fracture mechanics (LEFM) theory into the finite element (FE) model. The prediction of the model matches closely with the experiment, suggesting the possibility of replacing costly experiments in future.
Chan, Aaron C.; Srinivasan, Vivek J.
2013-01-01
In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramer Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
Taylor, Terence E; Lacalle Muls, Helena; Costello, Richard W; Reilly, Richard B
2018-01-01
Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations, however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process repeated until each of the 15 recordings were employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity within the presence of noise. Estimating inhaler inhalation flow profiles using audio based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence.
Lacalle Muls, Helena; Costello, Richard W.; Reilly, Richard B.
2018-01-01
Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations, however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process repeated until each of the 15 recordings were employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity within the presence of noise. Estimating inhaler inhalation flow profiles using audio based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence. PMID:29346430
NASA Technical Reports Server (NTRS)
Berg, Wesley; Avery, Susan K.
1994-01-01
Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the Special Sensor Microwave/Imager (SSM/I) for the preiod from July 1987 through December 1991. The monthly estimates were calibrated using measurements from a network of Pacific atoll rain gauges and compared to other satellite-based rainfall estimation techniques. Based on these monthly estimates, an analysis of the variability of large-scale features over intraseasonal to interannual timescales has been performed. While the major precipitation features as well as the seasonal variability distributions show good agreement with expected values, the presence of a moderately intense El Nino during 1986-87 and an intense La Nina during 1988-89 highlights this time period.
Survival of European mouflon (Artiodactyla: Bovidae) in Hawai'i based on tooth cementum lines
Hess, S.C.; Stephens, R.M.; Thompson, T.L.; Danner, R.M.; Kawakami, B.
2011-01-01
Reliable techniques for estimating age of ungulates are necessary to determine population parameters such as age structure and survival. Techniques that rely on dentition, horn, and facial patterns have limited utility for European mouflon sheep (Ovis gmelini musimon), but tooth cementum lines may offer a useful alternative. Cementum lines may not be reliable outside temperate regions, however, because lack of seasonality in diet may affect annulus formation. We evaluated the utility of tooth cementum lines for estimating age of mouflon in Hawai'i in comparison to dentition. Cementum lines were present in mouflon from Mauna Loa, island of Hawai'i, but were less distinct than in North American sheep. The two age-estimation methods provided similar estimates for individuals aged ???3 yr by dentition (the maximum age estimable by dentition), with exact matches in 51% (18/35) of individuals, and an average difference of 0.8 yr (range 04). Estimates of age from cementum lines were higher than those from dentition in 40% (14/35) and lower in 9% (3/35) of individuals. Discrepancies in age estimates between techniques and between paired tooth samples estimated by cementum lines were related to certainty categories assigned by the clarity of cementum lines, reinforcing the importance of collecting a sufficient number of samples to compensate for samples of lower quality, which in our experience, comprised approximately 22% of teeth. Cementum lines appear to provide relatively accurate age estimates for mouflon in Hawai'i, allow estimating age beyond 3 yr, and they offer more precise estimates than tooth eruption patterns. After constructing an age distribution, we estimated annual survival with a log-linear model to be 0.596 (95% CI 0.5540.642) for this heavily controlled population. ?? 2011 by University of Hawai'i Press.
Funk, Christopher C.; Michaelsen, Joel C.
2004-01-01
An extension of Sinclair's diagnostic model of orographic precipitation (“VDEL”) is developed for use in data-poor regions to enhance rainfall estimates. This extension (VDELB) combines a 2D linearized internal gravity wave calculation with the dot product of the terrain gradient and surface wind to approximate terrain-induced vertical velocity profiles. Slope, wind speed, and stability determine the velocity profile, with either sinusoidal or vertically decaying (evanescent) solutions possible. These velocity profiles replace the parameterized functions in the original VDEL, creating VDELB, a diagnostic accounting for buoyancy effects. A further extension (VDELB*) uses an on/off constraint derived from reanalysis precipitation fields. A validation study over 365 days in the Pacific Northwest suggests that VDELB* can best capture seasonal and geographic variations. A new statistical data-fusion technique is presented and is used to combine VDELB*, reanalysis, and satellite rainfall estimates in southern Africa. The technique, matched filter regression (MFR), sets the variance of the predictors equal to their squared correlation with observed gauge data and predicts rainfall based on the first principal component of the combined data. In the test presented here, mean absolute errors from the MFR technique were 35% lower than the satellite estimates alone. VDELB assumes a linear solution to the wave equations and a Boussinesq atmosphere, and it may give unrealistic responses under extreme conditions. Nonetheless, the results presented here suggest that diagnostic models, driven by reanalysis data, can be used to improve satellite rainfall estimates in data-sparse regions.
Jacquemin, Bénédicte; Lepeule, Johanna; Boudier, Anne; Arnould, Caroline; Benmerad, Meriem; Chappaz, Claire; Ferran, Joane; Kauffmann, Francine; Morelli, Xavier; Pin, Isabelle; Pison, Christophe; Rios, Isabelle; Temam, Sofia; Künzli, Nino; Slama, Rémy
2013-01-01
Background: Errors in address geocodes may affect estimates of the effects of air pollution on health. Objective: We investigated the impact of four geocoding techniques on the association between urban air pollution estimated with a fine-scale (10 m × 10 m) dispersion model and lung function in adults. Methods: We measured forced expiratory volume in 1 sec (FEV1) and forced vital capacity (FVC) in 354 adult residents of Grenoble, France, who were participants in two well-characterized studies, the Epidemiological Study on the Genetics and Environment on Asthma (EGEA) and the European Community Respiratory Health Survey (ECRHS). Home addresses were geocoded using individual building matching as the reference approach and three spatial interpolation approaches. We used a dispersion model to estimate mean PM10 and nitrogen dioxide concentrations at each participant’s address during the 12 months preceding their lung function measurements. Associations between exposures and lung function parameters were adjusted for individual confounders and same-day exposure to air pollutants. The geocoding techniques were compared with regard to geographical distances between coordinates, exposure estimates, and associations between the estimated exposures and health effects. Results: Median distances between coordinates estimated using the building matching and the three interpolation techniques were 26.4, 27.9, and 35.6 m. Compared with exposure estimates based on building matching, PM10 concentrations based on the three interpolation techniques tended to be overestimated. When building matching was used to estimate exposures, a one-interquartile range increase in PM10 (3.0 μg/m3) was associated with a 3.72-point decrease in FVC% predicted (95% CI: –0.56, –6.88) and a 3.86-point decrease in FEV1% predicted (95% CI: –0.14, –3.24). The magnitude of associations decreased when other geocoding approaches were used [e.g., for FVC% predicted –2.81 (95% CI: –0.26, –5.35) using NavTEQ, or 2.08 (95% CI –4.63, 0.47, p = 0.11) using Google Maps]. Conclusions: Our findings suggest that the choice of geocoding technique may influence estimated health effects when air pollution exposures are estimated using a fine-scale exposure model. Citation: Jacquemin B, Lepeule J, Boudier A, Arnould C, Benmerad M, Chappaz C, Ferran J, Kauffmann F, Morelli X, Pin I, Pison C, Rios I, Temam S, Künzli N, Slama R, Siroux V. 2013. Impact of geocoding methods on associations between long-term exposure to urban air pollution and lung function. Environ Health Perspect 121:1054–1060; http://dx.doi.org/10.1289/ehp.1206016 PMID:23823697
NASA Astrophysics Data System (ADS)
Zhang, X.; Anagnostou, E. N.
2016-12-01
This research contributes to the improvement of high resolution satellite applications in tropical regions with mountainous topography. Such mountainous regions are usually covered by sparse networks of in-situ observations while quantitative precipitation estimation from satellite sensors exhibits strong underestimation of heavy orographically enhanced storm events. To address this issue, our research applies a satellite error correction technique based solely on high-resolution numerical weather predictions (NWP). Our previous work has demonstrated the accuracy of this method in two mid-latitude mountainous regions (Zhang et al. 2013*1, Zhang et al. 2016*2), while the current research focuses on a comprehensive evaluation in three topical mountainous regions: Colombia, Peru and Taiwan. In addition, two different satellite precipitation products, NOAA Climate Prediction Center morphing technique (CMORPH) and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS), are considered. The study includes a large number of heavy precipitation events (68 events over the three regions) in the period 2004 to 2012. The NWP-based adjustments of the two satellite products are contrasted to their corresponding gauge-adjusted post-processing products. Preliminary results show that the NWP-based adjusted CMORPH product is consistently improved relative to both original and gauge-adjusted precipitation products for all regions and storms examined. The improvement of PERSIANN-CCS product is less significant and less consistent relative to the CMORPH performance improvements from the NWP-based adjustment. *1Zhang, Xinxuan, Emmanouil N. Anagnostou, Maria Frediani, Stavros Solomos, and George Kallos. "Using NWP simulations in satellite rainfall estimation of heavy precipitation events over mountainous areas." Journal of Hydrometeorology 14, no. 6 (2013): 1844-1858.*2 Zhang, Xinxuan, Emmanouil N. Anagnostou, and Humberto Vergara. "Hydrologic Evaluation of NWP-Adjusted CMORPH Estimates of Hurricane-Induced Precipitation in the Southern Appalachians." Journal of Hydrometeorology 17.4 (2016): 1087-1099.
Conceptual Design of a Communication-Based Deep Space Navigation Network
NASA Technical Reports Server (NTRS)
Anzalone, Evan J.; Chuang, C. H.
2012-01-01
As the need grows for increased autonomy and position knowledge accuracy to support missions beyond Earth orbit, engineers must push and develop more advanced navigation sensors and systems that operate independent of Earth-based analysis and processing. Several spacecraft are approaching this problem using inter-spacecraft radiometric tracking and onboard autonomous optical navigation methods. This paper proposes an alternative implementation to aid in spacecraft position fixing. The proposed method Network-Based Navigation technique takes advantage of the communication data being sent between spacecraft and between spacecraft and ground control to embed navigation information. The navigation system uses these packets to provide navigation estimates to an onboard navigation filter to augment traditional ground-based radiometric tracking techniques. As opposed to using digital signal measurements to capture inherent information of the transmitted signal itself, this method relies on the embedded navigation packet headers to calculate a navigation estimate. This method is heavily dependent on clock accuracy and the initial results show the promising performance of a notional system.
Wasza, Jakob; Bauer, Sebastian; Hornegger, Joachim
2012-01-01
Over the last years, range imaging (RI) techniques have been proposed for patient positioning and respiration analysis in motion compensation. Yet, current RI based approaches for patient positioning employ rigid-body transformations, thus neglecting free-form deformations induced by respiratory motion. Furthermore, RI based respiration analysis relies on non-rigid registration techniques with run-times of several seconds. In this paper we propose a real-time framework based on RI to perform respiratory motion compensated positioning and non-rigid surface deformation estimation in a joint manner. The core of our method are pre-procedurally obtained 4-D shape priors that drive the intra-procedural alignment of the patient to the reference state, simultaneously yielding a rigid-body table transformation and a free-form deformation accounting for respiratory motion. We show that our method outperforms conventional alignment strategies by a factor of 3.0 and 2.3 in the rotation and translation accuracy, respectively. Using a GPU based implementation, we achieve run-times of 40 ms.
Zioupos, P; Williams, A; Christodoulou, G; Giles, R
2014-05-01
Determination of age-at-death (AAD) is an important and frequent requirement in contemporary forensic science and in the reconstruction of past populations and societies from their remains. Its estimation is relatively straightforward and accurate (±3yr) for immature skeletons by using morphological features and reference tables within the context of forensic anthropology. However, after skeletal maturity (>35yr) estimates become inaccurate, particularly in the legal context. In line with the general migration of all the forensic sciences from reliance upon empirical criteria to those which are more evidence-based, AAD determination should rely more-and-more upon more quantitative methods. We explore here whether well-known changes in the biomechanical properties of bone and the properties of bone matrix, which have been seen to change with age even after skeletal maturity in a traceable manner, can be used to provide a reliable estimate of AAD. This method charts a combination of physical characteristics some of which are measured at a macroscopic level (wet & dry apparent density, porosity, organic/mineral/water fractions, collagen thermal degradation properties, ash content) and others at the microscopic level (Ca/P ratios, osteonal and matrix microhardness, image analysis of sections). This method produced successful age estimates on a cohort of 12 donors of age 53-85yr (7 male, 5 female), where the age of the individual could be approximated within less than ±1yr. This represents a vastly improved level of accuracy than currently extant age estimation techniques. It also presents: (1) a greater level of reliability and objectivity as the results are not dependent on the experience and expertise of the observer, as is so often the case in forensic skeletal age estimation methods; (2) it is purely laboratory-based analytical technique which can be carried out by someone with technical skills and not the specialised forensic anthropology experience; (3) it can be applied worldwide following stringent laboratory protocols. As such, this technique contributes significantly to improving age estimation and therefore identification methods for forensic and other purposes. © 2013 Elsevier Ltd. All rights reserved.
Dilbone, Elizabeth; Legleiter, Carl; Alexander, Jason S.; McElroy, Brandon
2018-01-01
Methods for spectrally based mapping of river bathymetry have been developed and tested in clear‐flowing, gravel‐bed channels, with limited application to turbid, sand‐bed rivers. This study used hyperspectral images and field surveys from the dynamic, sandy Niobrara River to evaluate three depth retrieval methods. The first regression‐based approach, optimal band ratio analysis (OBRA), paired in situ depth measurements with image pixel values to estimate depth. The second approach used ground‐based field spectra to calibrate an OBRA relationship. The third technique, image‐to‐depth quantile transformation (IDQT), estimated depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image‐derived variable. OBRA yielded the lowest depth retrieval mean error (0.005 m) and highest observed versus predicted R2 (0.817). Although misalignment between field and image data did not compromise the performance of OBRA in this study, poor georeferencing could limit regression‐based approaches such as OBRA in dynamic, sand‐bedded rivers. Field spectroscopy‐based depth maps exhibited a mean error with a slight shallow bias (0.068 m) but provided reliable estimates for most of the study reach. IDQT had a strong deep bias but provided informative relative depth maps. Overprediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the depth CDF. Although each of the techniques we tested demonstrated potential to provide accurate depth estimates in sand‐bed rivers, each method also was subject to certain constraints and limitations.
Estimation and enhancement of real-time software reliability through mutation analysis
NASA Technical Reports Server (NTRS)
Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.
1992-01-01
A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
Measuring survival time: a probability-based approach useful in healthcare decision-making.
2011-01-01
In some clinical situations, the choice between treatment options takes into account their impact on patient survival time. Due to practical constraints (such as loss to follow-up), survival time is usually estimated using a probability calculation based on data obtained in clinical studies or trials. The two techniques most commonly used to estimate survival times are the Kaplan-Meier method and the actuarial method. Despite their limitations, they provide useful information when choosing between treatment options.
Harris, Wendy; Zhang, You; Yin, Fang-Fang; Ren, Lei
2017-01-01
Purpose To investigate the feasibility of using structural-based principal component analysis (PCA) motion-modeling and weighted free-form deformation to estimate on-board 4D-CBCT using prior information and extremely limited angle projections for potential 4D target verification of lung radiotherapy. Methods A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In the previous method, each phase of the 4D-CBCT was generated by deforming a prior CT volume. The DFM was solved by a motion-model extracted by global PCA and free-form deformation (GMM-FD) technique, using a data fidelity constraint and deformation energy minimization. In this study, a new structural-PCA method was developed to build a structural motion-model (SMM) by accounting for potential relative motion pattern changes between different anatomical structures from simulation to treatment. The motion model extracted from planning 4DCT was divided into two structures: tumor and body excluding tumor, and the parameters of both structures were optimized together. Weighted free-form deformation (WFD) was employed afterwards to introduce flexibility in adjusting the weightings of different structures in the data fidelity constraint based on clinical interests. XCAT (computerized patient model) simulation with a 30 mm diameter lesion was simulated with various anatomical and respirational changes from planning 4D-CT to onboard volume to evaluate the method. The estimation accuracy was evaluated by the Volume-Percent-Difference (VPD)/Center-of-Mass-Shift (COMS) between lesions in the estimated and “ground-truth” on board 4D-CBCT. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. The method was also evaluated against 3 lung patients. Results The SMM-WFD method achieved substantially better accuracy than the GMM-FD method for CBCT estimation using extremely small scan angles or projections. Using orthogonal 15° scanning angles, the VPD/COMS were 3.47±2.94% and 0.23±0.22mm for SMM-WFD and 25.23±19.01% and 2.58±2.54mm for GMM-FD among all 8 XCAT scenarios. Compared to GMM-FD, SMM-WFD was more robust against reduction of the scanning angles down to orthogonal 10° with VPD/COMS of 6.21±5.61% and 0.39±0.49mm, and more robust against reduction of projection numbers down to only 8 projections in total for both orthogonal-view 30° and orthogonal-view 15° scan angles. SMM-WFD method was also more robust than the GMM-FD method against increasing levels of noise in the projection images. Additionally, the SMM-WFD technique provided better tumor estimation for all three lung patients compared to the GMM-FD technique. Conclusion Compared to the GMM-FD technique, the SMM-WFD technique can substantially improve the 4D-CBCT estimation accuracy using extremely small scan angles and low number of projections to provide fast low dose 4D target verification. PMID:28079267
Mobility based key management technique for multicast security in mobile ad hoc networks.
Madhusudhanan, B; Chitra, S; Rajan, C
2015-01-01
In MANET multicasting, forward and backward secrecy result in increased packet drop rate owing to mobility. Frequent rekeying causes large message overhead which increases energy consumption and end-to-end delay. Particularly, the prevailing group key management techniques cause frequent mobility and disconnections. So there is a need to design a multicast key management technique to overcome these problems. In this paper, we propose the mobility based key management technique for multicast security in MANET. Initially, the nodes are categorized according to their stability index which is estimated based on the link availability and mobility. A multicast tree is constructed such that for every weak node, there is a strong parent node. A session key-based encryption technique is utilized to transmit a multicast data. The rekeying process is performed periodically by the initiator node. The rekeying interval is fixed depending on the node category so that this technique greatly minimizes the rekeying overhead. By simulation results, we show that our proposed approach reduces the packet drop rate and improves the data confidentiality.
Cellular-based preemption system
NASA Technical Reports Server (NTRS)
Bachelder, Aaron D. (Inventor)
2011-01-01
A cellular-based preemption system that uses existing cellular infrastructure to transmit preemption related data to allow safe passage of emergency vehicles through one or more intersections. A cellular unit in an emergency vehicle is used to generate position reports that are transmitted to the one or more intersections during an emergency response. Based on this position data, the one or more intersections calculate an estimated time of arrival (ETA) of the emergency vehicle, and transmit preemption commands to traffic signals at the intersections based on the calculated ETA. Additional techniques may be used for refining the position reports, ETA calculations, and the like. Such techniques include, without limitation, statistical preemption, map-matching, dead-reckoning, augmented navigation, and/or preemption optimization techniques, all of which are described in further detail in the above-referenced patent applications.
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Estimating the uncertainty of diffusion parameters from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residuals rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To overcome this limitation, our method uses the unscented transform to compute the residuals rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the uncertainty of the body diffusion parameters. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
A semi-automatic method for left ventricle volume estimate: an in vivo validation study
NASA Technical Reports Server (NTRS)
Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.
2001-01-01
This study aims to validate the left ventricular (LV) volume estimates obtained by processing volumetric data using a segmentation model based on the level set technique. The validation was performed by comparing real-time volumetric echo data (RT3DE) with magnetic resonance imaging (MRI) data. A validation protocol was defined and applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Taking the MRI estimates (x) as a reference, an excellent correlation was found with the volumes measured using the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrates that the segmentation technique can be reliably applied to human hearts in clinical practice.
Techniques for estimating flood-peak discharges from urban basins in Missouri
Becker, L.D.
1986-01-01
Techniques are defined for estimating the magnitude and frequency of future flood-peak discharges of rainfall-induced runoff from small urban basins in Missouri. These techniques were developed from an initial analysis of flood records at 96 gaged sites in Missouri and adjacent states. The final regression equations are based on a balanced, representative sample of 37 gaged sites in Missouri, comprising 9 statewide urban study sites, 18 urban sites in St. Louis County, and 10 predominantly rural sites statewide. Short-term records were extended on the basis of long-term climatic records and use of a rainfall-runoff model. Linear least-squares regression analyses were used with log-transformed variables to relate flood magnitudes of selected recurrence intervals (dependent variables) to selected drainage basin indexes (independent variables). For gaged urban study sites within the State, the flood-peak estimates are taken from the frequency curves defined from the synthesized long-term discharge records. Flood frequency estimates are made for ungaged sites by using regression equations that require determination of the drainage basin size and either the percentage of impervious area or a basin development factor. Alternative sets of equations are given for the 2-, 5-, 10-, 25-, 50-, and 100-yr recurrence interval floods. The average standard errors of estimate range from about 33% for the 2-yr flood to 26% for the 100-yr flood. The estimation techniques are applicable to flood flows that are not significantly affected by storage caused by man-made activities. The flood-peak discharge estimating equations are considered applicable for sites on basins draining approximately 0.25 to 40 sq mi.
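As an illustration of the log-transformed least-squares approach described above, the following Python sketch fits a regression of the same general form to made-up data; the basin areas, impervious percentages, and discharges are fabricated stand-ins, not values from the Missouri study.

```python
import numpy as np

# Illustrative sketch of log-linear flood-frequency regression. The drainage
# areas, impervious percentages, and 2-yr peak discharges are made-up numbers.
area = np.array([0.5, 2.0, 8.0, 25.0])          # drainage area, sq mi
imperv = np.array([10.0, 25.0, 40.0, 55.0])     # impervious area, percent
q2 = np.array([120.0, 480.0, 1500.0, 4200.0])   # 2-yr peak discharge, cfs

# Linear least squares on log-transformed variables:
# log10(Q2) = b0 + b1*log10(A) + b2*log10(IA)
X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(imperv)])
b, *_ = np.linalg.lstsq(X, np.log10(q2), rcond=None)

# Estimate the 2-yr flood for an ungaged 5 sq mi basin, 30% impervious.
q2_est = 10 ** (b @ [1.0, np.log10(5.0), np.log10(30.0)])
print(f"estimated 2-yr peak: {q2_est:.0f} cfs")
```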
Evaluating cost-efficiency and accuracy of hunter harvest survey designs
Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.
2011-01-01
Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection versus the value of the information obtained. We compared precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.
A frame selective dynamic programming approach for noise robust pitch estimation.
Yarra, Chiranjeevi; Deshmukh, Om D; Ghosh, Prasanta Kumar
2018-04-01
The principles of existing pitch estimation techniques are often different and complementary in nature. In this work, a frame selective dynamic programming (FSDP) method is proposed which exploits the complementary characteristics of two existing methods, namely, the sub-harmonic to harmonic ratio (SHR) and the sawtooth-wave inspired pitch estimator (SWIPE). Using variants of SHR and SWIPE, the proposed FSDP method classifies all voiced frames into two classes: in the first class, a confidence score maximization criterion is used for pitch estimation, while for the second class, a dynamic programming (DP) based approach is proposed. Experiments are performed on speech signals from the KEELE, CSLU, and Paul Bagshaw corpora under clean conditions and under additive white Gaussian noise at 20, 10, 5, and 0 dB SNR, using four baseline schemes including SHR, SWIPE, and two DP-based techniques. The pitch estimation performance of FSDP, when averaged over all SNRs, is found to be better than those of the baseline schemes, suggesting the benefit of applying a smoothness constraint via DP in selected frames in the proposed FSDP scheme. The voiced/unvoiced (VuV) classification error of FSDP is also found to be lower than that of all four baseline schemes in almost all SNR conditions on the three corpora.
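The sketch below illustrates, in minimal Python, the kind of dynamic-programming smoothing step named above (not the authors' FSDP code): each frame has candidate pitches with confidence scores, and DP trades local confidence against inter-frame pitch jumps. The jump weight and the toy candidates are assumptions.

```python
import numpy as np

# Minimal DP pitch smoothing: pick, per frame, the candidate that minimizes
# accumulated (1 - confidence) plus a penalty on pitch jumps between frames.
def dp_pitch_track(candidates, confidences, jump_weight=0.02):
    T, K = candidates.shape
    cost = np.full((T, K), np.inf)
    back = np.zeros((T, K), dtype=int)
    cost[0] = 1.0 - confidences[0]
    for t in range(1, T):
        for k in range(K):
            trans = jump_weight * np.abs(candidates[t, k] - candidates[t - 1])
            total = cost[t - 1] + trans
            back[t, k] = np.argmin(total)
            cost[t, k] = total[back[t, k]] + (1.0 - confidences[t, k])
    path = [int(np.argmin(cost[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    path.reverse()
    return candidates[np.arange(T), path]

# Two candidate pitches per frame; the smooth path avoids the octave jump
# in the middle frame despite its higher local confidence.
cands = np.array([[100., 200.], [102., 204.], [101., 202.]])
confs = np.array([[0.9, 0.2], [0.3, 0.8], [0.9, 0.2]])
print(dp_pitch_track(cands, confs))  # -> [100. 102. 101.]
```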
Tropical Cyclone Intensity Estimation Using Deep Convolutional Neural Networks
NASA Technical Reports Server (NTRS)
Maskey, Manil; Cecil, Dan; Ramachandran, Rahul; Miller, Jeffrey J.
2018-01-01
Estimating tropical cyclone intensity from satellite imagery alone is a challenging problem. The Dvorak technique has been applied successfully for more than 30 years and, with some modifications and improvements, is still used worldwide for tropical cyclone intensity estimation. A number of semi-automated techniques have been derived from the original Dvorak technique. However, these techniques suffer from subjective bias, as evident from the most recent estimates on October 10, 2017 at 1500 UTC for Tropical Storm Ophelia: the Dvorak intensity estimates ranged from T2.3/33 kt (Tropical Cyclone Number 2.3/33 knots) from UW-CIMSS (University of Wisconsin-Madison Cooperative Institute for Meteorological Satellite Studies) to T3.0/45 kt from TAFB (the National Hurricane Center's Tropical Analysis and Forecast Branch) to T4.0/65 kt from SAB (NOAA/NESDIS Satellite Analysis Branch). In this particular case, two human experts at TAFB and SAB differed by 20 knots in their Dvorak analyses, and the automated version at the University of Wisconsin was 12 knots lower than either of them. The National Hurricane Center (NHC) estimates about 10-20 percent uncertainty in its post-analysis when only satellite-based estimates are available. The success of the Dvorak technique proves that spatial patterns in infrared (IR) imagery relate strongly to tropical cyclone intensity. This study aims to utilize deep learning, the current state of the art in pattern and image recognition, to address the need for an automated and objective tropical cyclone intensity estimation. Deep learning is a multi-layer neural network consisting of several layers of simple computational units. It learns discriminative features without relying on a human expert to identify which features are important. Our study mainly focuses on the convolutional neural network (CNN), a deep learning algorithm, to develop an objective tropical cyclone intensity estimation. A CNN is a supervised learning algorithm requiring a large amount of training data. Since archives of intensity data and tropical-cyclone-centric satellite images are openly available, the training data are easily created by combining the two. Results, case studies, prototypes, and advantages of this approach will be discussed.
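A minimal PyTorch sketch of a CNN intensity regressor of the kind described is shown below; the architecture, layer sizes, and synthetic data are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

# Toy CNN that maps a single-channel, storm-centered IR image to a scalar
# intensity (e.g., knots). Layer sizes are illustrative assumptions.
class IntensityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = IntensityCNN()
ir_batch = torch.randn(8, 1, 128, 128)        # 8 synthetic IR images
intensity = torch.randn(8, 1) * 20 + 60       # synthetic labels, knots
loss = nn.functional.mse_loss(model(ir_batch), intensity)
loss.backward()                               # one supervised training step
print(loss.item())
```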
Price estimates for the production of wafers from silicon ingots
NASA Technical Reports Server (NTRS)
Mokashi, A. R.
1982-01-01
The status of the inside-diameter (ID) sawing, multiblade sawing (MBS), and fixed-abrasive slicing technique (FAST) processes is discussed with respect to the estimated price each process adds to the price of the final photovoltaic module. The expected improvements in each process, based on knowledge of the current level of technology, are projected for the next two to five years, and the expected add-on prices in 1983 and 1986 are estimated.
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-04-14
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.
Boundary methods for mode estimation
NASA Astrophysics Data System (ADS)
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture of Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
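The following Python sketch (using scikit-learn) illustrates the MOG/AIC baseline named above: Gaussian mixtures of increasing order are fitted and the order minimizing the AIC is taken as the mode estimate. The three-mode synthetic data set is an assumption for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit Gaussian mixtures of increasing order; keep the order that minimizes
# the Akaike Information Criterion. Synthetic data with three modes.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(m, 0.5, 200) for m in (0.0, 4.0, 9.0)])
data = data.reshape(-1, 1)

aic = {k: GaussianMixture(n_components=k, random_state=0).fit(data).aic(data)
       for k in range(1, 7)}
print("estimated number of modes:", min(aic, key=aic.get))  # typically 3
```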
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Gray, H. L.
1983-01-01
Efforts in support of the development of multicrop production monitoring capability are reported. In particular, segment level proportion estimation techniques based upon a mixture model were investigated. Efforts have dealt primarily with evaluation of current techniques and development of alternative ones. A comparison of techniques is provided on both simulated and LANDSAT data along with an analysis of the quality of profile variables obtained from LANDSAT data.
NASA Astrophysics Data System (ADS)
Belyaev, M. Yu.; Volkov, O. N.; Monakhov, M. I.; Sazonov, V. V.
2017-09-01
The paper studies the accuracy of a technique that reconstructs the rotational motion of Earth artificial satellites (AES) from onboard measurements of the angular velocity vector and the Earth's magnetic field (EMF) strength. The technique is based on the kinematic equations of rotational motion of a rigid body. Both types of measurement data, collected over some time interval, are processed jointly. The angular velocity measurements are approximated using convenient formulas, which are substituted into the kinematic differential equations for the quaternion specifying the transition from the body-fixed coordinate system of the satellite to the inertial coordinate system. The equations thus obtained represent a kinematic model of the rotational motion of the satellite. The solution of these equations that approximates the real motion is found by the least-squares method from the condition of best fit between the measured EMF strength vector and its calculated values. The accuracy of the technique is estimated by processing data obtained on board the service module of the International Space Station (ISS). The reconstruction of the station motion using this technique is compared with telemetry data on the actual motion of the station. The technique allowed the station motion to be reconstructed in the orbital orientation mode with a maximum error of less than 0.6° and during turns with a maximum error of less than 1.2°.
Minimum Detectable Activity for Tomographic Gamma Scanning System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkataraman, Ram; Smith, Susan; Kirkpatrick, J. M.
2015-01-01
For any radiation measurement system, it is useful to explore and establish the detection limits and a minimum detectable activity (MDA) for the radionuclides of interest, even if the system is to be used at far higher values. The MDA serves as an important figure of merit, and often a system is optimized and configured so that it can meet the MDA requirements of a measurement campaign. Non-destructive assay (NDA) systems based on gamma-ray analysis are no exception, and well-established conventions, such as the Currie method, exist for estimating the detection limits and the MDA. However, the Tomographic Gamma Scanning (TGS) technique poses some challenges for the estimation of detection limits and MDAs. The TGS combines high-resolution gamma-ray spectrometry (HRGS) with low-spatial-resolution image reconstruction techniques. In non-imaging gamma-ray-based NDA techniques, the measured counts in a full-energy peak can be used to estimate the activity of a radionuclide independently of other counting trials. In the case of the TGS, however, each “view” is a full spectral grab (each a counting trial), and each scan consists of 150 spectral grabs in the transmission and emission scans per vertical layer of the item. The set of views in a complete scan is then used to solve for the radionuclide activities on a voxel-by-voxel basis, over 16 layers of a 10x10 voxel grid. Thus, the raw count data are no longer independent trials, but rather constitute input to a matrix solution for the emission image values at the various locations inside the item volume used in the reconstruction. So the validity of the methods used to estimate MDA for an imaging technique such as TGS warrants close scrutiny, because the pair-counting concept of Currie is not directly applicable. One can also ask whether the TGS, along with other image reconstruction techniques that heavily intertwine data, is a suitable method if one expects to measure samples whose activities are at or just above MDA levels. The paper examines methods used to estimate MDAs for a TGS system and explores possible solutions that can be rigorously defended.
Singh, I S; Mishra, Lokpati; Yadav, J R; Nadar, M Y; Rao, D D; Pradeepkumar, K S
2015-10-01
The estimation of the Pu/(241)Am ratio in biological samples is an important input for the assessment of internal dose received by workers. Radiochemical separation of Pu isotopes and (241)Am in a sample, followed by alpha spectrometry, is a widely used technique for determining the Pu/(241)Am ratio. However, this method is time consuming, and a quicker estimate is often required. In this work, the Pu/(241)Am ratio in a biological sample was estimated with HPGe detector based measurements using the gamma/X-rays emitted by these radionuclides. The results were compared with those obtained from alpha spectrometry of the sample after radiochemical analysis and were found to be in good agreement.
Investigation of safety analysis methods using computer vision techniques
NASA Astrophysics Data System (ADS)
Shirazi, Mohammad Shokrolah; Morris, Brendan Tran
2017-09-01
This work investigates safety analysis methods using computer vision techniques. A vision-based tracking system is developed to provide the trajectories of road users, including vehicles and pedestrians. Safety analysis methods are developed to estimate time to collision (TTC) and post-encroachment time (PET), two important safety measures. The corresponding algorithms are presented, and their advantages and drawbacks are shown through their success in capturing conflict events in real time. The performance of the tracking system is evaluated first, and probability density estimates of TTC and PET are shown for 1 h of monitoring of a Las Vegas intersection. Finally, the idea of an intersection safety map is introduced, and TTC values for two different intersections are estimated for one day from 8:00 a.m. to 6:00 p.m.
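For concreteness, a minimal Python sketch of the two surrogate safety measures named above follows; it assumes a constant-velocity gap for TTC and known conflict-area entry/exit times for PET, with illustrative numbers.

```python
# Minimal sketch of the two surrogate safety measures, under a
# constant-velocity assumption; trajectories would come from the tracker.
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """TTC: time until collision if both road users keep their current
    speeds; defined only while they are actually closing the gap."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

def post_encroachment_time(t_first_leaves: float, t_second_arrives: float) -> float:
    """PET: time between the first user leaving the conflict area and the
    second user arriving at it; smaller values mean a nearer miss."""
    return t_second_arrives - t_first_leaves

print(time_to_collision(12.0, 6.0))        # 2.0 s
print(post_encroachment_time(10.2, 11.5))  # 1.3 s
```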
Three-dimensional analysis of magnetometer array data
NASA Technical Reports Server (NTRS)
Richmond, A. D.; Baumjohann, W.
1984-01-01
A technique is developed for mapping magnetic variation fields in three dimensions using data from an array of magnetometers, based on the theory of optimal linear estimation. The technique is applied to data from the Scandinavian Magnetometer Array. Estimates of the spatial power spectra for the internal and external magnetic variations are derived, which in turn provide estimates of the spatial autocorrelation functions of the three magnetic variation components. Statistical errors involved in mapping the external and internal fields are quantified and displayed over the mapping region. Examples of field mapping and of separation into external and internal components are presented. A comparison between the three-dimensional field separation and a two-dimensional separation from a single chain of stations shows that significant differences can arise in the inferred internal component.
Statistics based sampling for controller and estimator design
NASA Astrophysics Data System (ADS)
Tenne, Dirk
The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation addresses three topics: nonlinear estimation, target tracking, and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher order accuracy. The so-called unscented transformation has been extended to capture higher order moments. Furthermore, higher order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. The unscented transformation has also been utilized to develop a three-dimensional, geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing the Covariance Intersection. The combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points calculated by the unscented transformation. This set of points is used to design robust controllers that minimize a statistical performance measure of the plant, consisting of a combination of the mean and variance, over the domain of uncertainty. The proposed technique is illustrated on three benchmark problems: the first relates to the design of prefilters for linear and nonlinear spring-mass-dashpot systems, the second applies a feedback controller to a hovering helicopter, and the last applies the statistical robust controller design to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.
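As background for the extensions described above, the following Python sketch implements the standard (textbook) unscented transformation for propagating a mean and covariance through a nonlinearity; the scaling parameters are the usual defaults, not the author's higher-order extensions.

```python
import numpy as np

# Standard unscented transform: propagate mean/covariance through f using
# 2n+1 sigma points. Scaling parameters are common textbook choices.
def unscented_transform(f, mean, cov, alpha=1e-1, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))            # mean weights
    wc = wm.copy()                                      # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

f = lambda x: np.array([x[0] ** 2, x[0] + x[1]])        # a toy nonlinearity
m, P = unscented_transform(f, np.array([1.0, 0.0]), np.eye(2) * 0.1)
print(m, P)
```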
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
Muscle categorization using PDF estimation and Naive Bayes classification.
Adel, Tameem M; Smith, Benn E; Stashuk, Daniel W
2012-01-01
The structure of motor unit potentials (MUPs) and their times of occurrence provide information about the motor units (MUs) that created them. As such, electromyographic (EMG) data can be used to categorize muscles as normal or suffering from a neuromuscular disease. Using pattern discovery (PD) allows clinicians to understand the rationale underlying a certain muscle characterization; i.e., it is transparent. Discretization is required in PD, which leads to some loss in accuracy. In this work, characterization techniques based on estimating probability density functions (PDFs) for each muscle category are implemented. Characterization probabilities for each motor unit potential train (MUPT) are obtained from these PDFs, and Bayes rule is then used to aggregate the MUPT characterization probabilities into muscle-level probabilities. Even though this technique is not as transparent as PD, its accuracy is higher than that of discrete PD. Ultimately, the goal is to use a technique based on both PDFs and PD and to make it as transparent and as efficient as possible, but first it was necessary to assess thoroughly how accurate a fully continuous approach can be. Using Gaussian PDF estimation achieved improvements in muscle categorization accuracy over PD, and further improvements resulted from using feature value histograms to choose more representative PDFs, for instance, using a log-normal distribution to represent skewed histograms.
Hadwin, Paul J; Peterson, Sean D
2017-04-01
The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering, aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique, at dramatically lower computational cost. For the test cases explored, the EKF is comparable in accuracy to the particle filter technique when more than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by two orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
How accurately can we estimate energetic costs in a marine top predator, the king penguin?
Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J
2007-01-01
King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (f(H) - V(O2)) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present f(H) - V(O2) equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the f(H) - V(O2) technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of V(O2) from published, field f(H) data. The major conclusions from the present study are: (1) in contrast to that for walking, the f(H) - V(O2) relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(V(O2)) = -0.279 + 1.24·log(f(H)) + 0.0237·t - 0.0157·log(f(H))·t, derived in a previous study, is the most suitable equation presently available for estimating V(O2) in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an f(H) - V(O2) relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of f(H) - V(O2) prediction equations, is explained.
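A direct transcription of prediction equation (1) into Python is given below for clarity; it assumes base-10 logarithms and treats t (the temporal covariate of the original study) as a given input, with units following the original paper. The example inputs are illustrative only.

```python
import math

# Prediction equation (1) quoted above; f_h is heart rate and t is the
# temporal covariate defined in the study (taken here as a given input).
# Base-10 logs are assumed; units follow the original paper.
def predicted_vo2(f_h: float, t: float) -> float:
    log_fh = math.log10(f_h)
    log_vo2 = -0.279 + 1.24 * log_fh + 0.0237 * t - 0.0157 * log_fh * t
    return 10 ** log_vo2

print(predicted_vo2(f_h=120.0, t=5.0))  # illustrative inputs only
```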
NASA Astrophysics Data System (ADS)
Popov, Evgeny; Popov, Yury; Spasennykh, Mikhail; Kozlova, Elena; Chekhonin, Evgeny; Zagranovskaya, Dzhuliya; Belenkaya, Irina; Alekseev, Aleksey
2016-04-01
A practical method for identifying organic-rich intervals within low-permeability dispersive rocks, based on thermal conductivity measurements along the core, is presented. Non-destructive, non-contact thermal core logging was performed with the optical scanning technique on 4,685 full-size core samples from 7 wells drilled in four low-permeability zones of the Bazhen formation (B.fm.) in Western Siberia (Russia). The method employs continuous simultaneous measurements of rock thermal conductivity, volumetric heat capacity, thermal anisotropy coefficient and thermal heterogeneity factor along the cores, allowing high vertical resolution (up to 1-2 mm). B.fm. rock matrix thermal conductivity was observed to be essentially stable within the range of 2.5-2.7 W/(m*K). A stable matrix thermal conductivity along with a high thermal anisotropy coefficient is characteristic of B.fm. sediments due to the low rock porosity values. It is shown experimentally that the measured thermal parameters relate linearly to organic richness rather than to deviations in the porosity coefficient. Thus, a new technique was developed that transforms the thermal conductivity profiles into continuous profiles of total organic carbon (TOC) values along the core. Comparison of the TOC values estimated from thermal conductivity with experimental pyrolytic TOC estimates for 665 core samples, obtained using the Rock-Eval and HAWK instruments, demonstrated the high efficiency of the new technique for separating organic-rich intervals. The data obtained with the new technique are essential for assessing source rock (SR) hydrocarbon generation potential, for basin and petroleum system modeling, and for estimating hydrocarbon reserves. The method allows the TOC richness to be accurately assessed using thermal well logs. The research work was done with financial support of the Russian Ministry of Education and Science (unique identification number RFMEFI58114X0008).
Estimating wildland fire rate of spread in a spatially nonuniform environment
Francis M Fujioka
1985-01-01
Estimating the rate of fire spread is a key element in planning for effective fire control. Land managers use the Rothermel spread model, but the model assumptions are violated when fuel, weather, and topography are nonuniform. This paper compares three averaging techniques: the arithmetic mean of spread rates, spread based on mean fuel conditions, and the harmonic mean of spread rates.
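The toy Python comparison below illustrates why the choice of averaging technique matters: for a fire front crossing patches in sequence, the harmonic mean weights slow patches more heavily than the arithmetic mean. The spread-rate values are illustrative.

```python
import statistics

# Toy comparison of two of the averaging techniques named above for a fuel
# bed whose local Rothermel spread rates vary; values are illustrative.
spread_rates = [2.0, 10.0, 4.0]          # local rates of spread, m/min

arithmetic = statistics.mean(spread_rates)
harmonic = statistics.harmonic_mean(spread_rates)

# The harmonic mean weights slow patches more heavily, matching a fire
# front that must traverse each patch in sequence.
print(f"arithmetic: {arithmetic:.2f} m/min, harmonic: {harmonic:.2f} m/min")
```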
Non-Data Aided Doppler Shift Estimation for Underwater Acoustic Communication
2014-05-01
This work investigates non-data aided (blind) techniques for Doppler shift estimation and compensation for a single carrier in underwater acoustic wireless sensor networks. Data collected from our experiments were analyzed using such blind techniques, and detailed experimental and simulated results based on second-order cyclostationary features of the received signals are presented, with application to distributed underwater sensor networks.
Modeling and estimation of tip contact force for steerable ablation catheters.
Khoshnam, Mahta; Skanes, Allan C; Patel, Rajni V
2015-05-01
The efficacy of catheter-based cardiac ablation procedures can be significantly improved if real-time information is available concerning contact forces between the catheter tip and cardiac tissue. However, the widely used ablation catheters are not equipped for force sensing. This paper proposes a technique for estimating the contact forces without direct force measurements by studying the changes in the shape of the deflectable distal section of a conventional 7-Fr catheter (henceforth called the "deflectable distal shaft," the "deflectable shaft," or the "shaft" of the catheter) in different loading situations. First, the shaft curvature when the tip is moving in free space is studied and based on that, a kinematic model for the deflectable shaft in free space is proposed. In the next step, the shaft shape is analyzed in the case where the tip is in contact with the environment, and it is shown that the curvature of the deflectable shaft provides useful information about the loading status of the catheter and can be used to define an index for determining the range of contact forces exerted by the ablation tip. Experiments with two different steerable ablation catheters show that the defined index can detect the range of applied contact forces correctly in more than 80% of the cases. Based on the proposed technique, a framework for obtaining contact force information by using the shaft curvature at a limited number of points along the deflectable shaft is constructed. The proposed kinematic model and the force estimation technique can be implemented together to describe the catheter's behavior before contact, detect tip/tissue contact, and determine the range of contact forces. This study proves that the flexibility of the catheter's distal shaft provides a means of estimating the force exerted on tissue by the ablation tip.
NASA Astrophysics Data System (ADS)
Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal
2014-06-01
This study presents artificial intelligence (AI)-based modeling of total bed material load, aimed at improving the prediction accuracy of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimates. Sediment data from the Qotur River (northwestern Iran) were used for development and validation of the applied techniques. In order to assess the applied techniques relative to traditional models, stream-power-based and shear-stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. It was also found that the k-fold test is a practical but computationally expensive technique for completely scanning the applied data and avoiding over-fitting.
Breast density quantification with cone-beam CT: A post-mortem study
Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee
2014-01-01
Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson’s r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
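A compact Python sketch of fuzzy c-means in the spirit of the segmentation step described above follows; the two-class setup, synthetic voxel intensities, and fuzzifier m = 2 are assumptions, and the higher-intensity cluster is taken to stand in for fibroglandular tissue.

```python
import numpy as np

# Minimal fuzzy c-means on 1D voxel intensities: alternate between updating
# cluster centers (membership-weighted means) and memberships (inverse
# distance rule). Synthetic intensities and m=2 are illustrative assumptions.
def fcm(x, c=2, m=2.0, iters=100):
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(c), size=x.size)   # memberships, rows sum to 1
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

voxels = np.concatenate([np.random.default_rng(1).normal(0.95, 0.03, 500),
                         np.random.default_rng(2).normal(1.05, 0.03, 300)])
centers, u = fcm(voxels)
dense = np.argmax(centers)                 # higher-intensity class (assumed FG)
percent_fgv = 100 * (np.argmax(u, axis=1) == dense).mean()
print(f"centers: {np.sort(centers)}, %FGV ~ {percent_fgv:.1f}%")
```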
Ma, Xiaoyue; Niezgoda, Michael; Blanton, Jesse D; Recuenco, Sergio; Rupprecht, Charles E
2012-08-03
Two major techniques are currently used to estimate rabies virus antibody values: neutralization assays, such as the rapid fluorescent focus inhibition test (RFFIT), and enzyme-linked immunosorbent assays (ELISAs). The RFFIT is considered the gold standard assay and has been used to assess the titer of rabies virus neutralizing antibodies for more than three decades. In the late 1970s, ELISA began to be used to estimate the level of rabies virus antibody and has recently been used by some laboratories as an alternative screening test for animal sera. Although the ELISA appears simpler, safer and more efficient, the assay is less sensitive in detecting low values of rabies virus neutralizing antibodies than neutralization tests. This study was designed to evaluate a new ELISA-based method for detecting rabies virus binding antibody. The new technique uses electrochemiluminescence labels and carbon electrode plates to detect binding events. In this comparative study, the RFFIT and the new ELISA-based technique were used to evaluate the level of rabies virus antibodies in human and animal serum samples. Using a conservative approximation of 0.15 IU/ml as a cutoff point, the new ELISA-based technique demonstrated a sensitivity of 100% and a specificity of 95% for both human samples and experimental animal samples. The sensitivity and specificity for field animal samples were 96% and 95%, respectively. The preliminary results from this study appear promising and demonstrate a higher sensitivity than traditional ELISA methods.
Estimation of Alpine Skier Posture Using Machine Learning Techniques
Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej
2014-01-01
High-precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements, defined at the antenna, which is typically placed behind the skier's neck. A key issue is how to estimate other, more relevant parameters of the skier's body, such as the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and ski trajectories based on a more faithful approximation of the skier's body with nine degrees of freedom. The first method utilizes the well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform those of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques to biomechanical measurements of alpine skiing. PMID:25313492
NASA Astrophysics Data System (ADS)
Strano, Salvatore; Terzo, Mario
2018-05-01
The dynamics of railway vehicles is strongly influenced by the interaction between the wheel and the rail. This kind of contact is affected by several conditioning factors, such as vehicle speed, wear and adhesion level, and is moreover nonlinear. As a consequence, the modelling and observation of this kind of phenomenon are complex tasks, but at the same time they constitute a fundamental step in estimating the adhesion level or monitoring the vehicle condition. This paper presents a novel technique for real-time estimation of the wheel-rail contact forces, based on an estimator design model that takes the nonlinearities of the interaction into account by means of a fitting model designed to reproduce the contact mechanics over a wide range of slip and to be easily integrated into a complete model-based estimator for a railway vehicle.
O’Mahoney, Thomas G.; Kitchener, Andrew C.; Manning, Phillip L.; Sellers, William I.
2016-01-01
The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based upon hind limb dimensions of extant pigeons. Yet the hind limb proportions of the dodo have been found to differ considerably from those of its modern relatives, particularly with regard to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT)-based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit ‘convex hulls’ around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = -2.31, b = 0.90, r2 = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8-10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6-14.3 kg. Mass predictions presented here overlap at the lower end of those previously published, and support recent suggestions of a relatively slim dodo. CT-based reconstructions provide a means of objectively estimating mass and body segment properties of extinct species using whole articulated skeletons. PMID:26788418
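The convex-hull workflow described above can be sketched in a few lines of Python with SciPy: regress log mass on log hull volume over extant species, then apply the fit to a hull computed from skeleton points. All numbers below are fabricated stand-ins, not the paper's pigeon data or coefficients.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Regress log(mass) on log(hull volume) over extant species, then predict
# for a fossil skeleton. Volumes/masses are fabricated stand-ins.
hull_vol = np.array([0.2, 0.5, 1.1, 2.0, 3.5])       # litres, illustrative
mass = np.array([0.25, 0.58, 1.30, 2.3, 4.1])        # kg, illustrative

b1, b0 = np.polyfit(np.log10(hull_vol), np.log10(mass), 1)
print(f"log10(mass) = {b0:.2f} + {b1:.2f} * log10(volume)")

# The hull volume itself comes straight from the articulated-skeleton points:
pts = np.random.default_rng(0).random((200, 3))      # stand-in skeleton points
fossil_vol = ConvexHull(pts).volume
print("predicted mass:", 10 ** (b0 + b1 * np.log10(fossil_vol)), "kg")
```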
Earth Observation System Flight Dynamics System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
Denys Yemshanov; Frank H. Koch; Mark Ducey; Klaus Koehler
2013-01-01
Geographic mapping of risks is a useful analytical step in ecological risk assessments and in particular, in analyses aimed to estimate risks associated with introductions of invasive organisms. In this paper, we approach invasive species risk mapping as a portfolio allocation problem and apply techniques from decision theory to build an invasion risk map that combines...
Evaluation of wind field statistics near and inside clouds using a coherent Doppler lidar
NASA Astrophysics Data System (ADS)
Lottman, Brian Todd
1998-09-01
This work proposes advanced techniques for measuring the spatial wind field statistics near and inside clouds using a vertically pointing, solid-state coherent Doppler lidar on a fixed ground-based platform. The coherent Doppler lidar is an ideal instrument for high spatial and temporal resolution velocity estimates. The basic parameters of lidar are discussed, including a complete statistical description of the Doppler lidar signal. This description is extended to cases with simple functional forms for aerosol backscatter and velocity. An estimate of the mean velocity over a sensing volume is produced by estimating the mean spectrum. There are many traditional spectral estimators, which are useful for conditions with slowly varying velocity and backscatter. A new class of estimators, termed "novel" estimators, is introduced that produces reliable velocity estimates for conditions with large variations in aerosol backscatter and velocity with range, such as cloud conditions. The performance of the traditional and novel estimators is computed for a variety of deterministic atmospheric conditions using computer-simulated data. Wind field statistics are produced from actual data for a cloud deck and for multi-layer clouds. Unique results include detection of possible spectral signatures for rain, estimates of the structure function inside a cloud deck, reliable velocity estimation techniques near and inside thin clouds, and estimates of simple wind field statistics between cloud layers.
Real-time estimation of ionospheric delay using GPS measurements
NASA Astrophysics Data System (ADS)
Lin, Lao-Sheng
1997-12-01
When radio waves such as the GPS signals propagate through the ionosphere, they experience an extra time delay. The ionospheric delay can be eliminated (to the first order) through a linear combination of L1 and L2 observations from dual-frequency GPS receivers. Taking advantage of this dispersive principle, one or more dual- frequency GPS receivers can be used to determine a model of the ionospheric delay across a region of interest and, if implemented in real-time, can support single-frequency GPS positioning and navigation applications. The research objectives of this thesis were: (1) to develop algorithms to obtain accurate absolute Total Electron Content (TEC) estimates from dual-frequency GPS observables, and (2) to develop an algorithm to improve the accuracy of real-time ionosphere modelling. In order to fulfil these objectives, four algorithms have been proposed in this thesis. A 'multi-day multipath template technique' is proposed to mitigate the pseudo-range multipath effects at static GPS reference stations. This technique is based on the assumption that the multipath disturbance at a static station will be constant if the physical environment remains unchanged from day to day. The multipath template, either single-day or multi-day, can be generated from the previous days' GPS data. A 'real-time failure detection and repair algorithm' is proposed to detect and repair the GPS carrier phase 'failures', such as the occurrence of cycle slips. The proposed algorithm uses two procedures: (1) application of a statistical test on the state difference estimated from robust and conventional Kalman filters in order to detect and identify the carrier phase failure, and (2) application of a Kalman filter algorithm to repair the 'identified carrier phase failure'. A 'L1/L2 differential delay estimation algorithm' is proposed to estimate GPS satellite transmitter and receiver L1/L2 differential delays. This algorithm, based on the single-site modelling technique, is able to estimate the sum of the satellite and receiver L1/L2 differential delay for each tracked GPS satellite. A 'UNSW grid-based algorithm' is proposed to improve the accuracy of real-time ionosphere modelling. The proposed algorithm is similar to the conventional grid-based algorithm. However, two modifications were made to the algorithm: (1) an 'exponential function' is adopted as the weighting function, and (2) the 'grid-based ionosphere model' estimated from the previous day is used to predict the ionospheric delay ratios between the grid point and reference points. (Abstract shortened by UMI.)
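A worked example of the first-order dual-frequency relation underlying the thesis is sketched below: the geometry-free combination of L1/L2 pseudoranges yields slant TEC. The pseudorange values are illustrative, and receiver/satellite biases and multipath are ignored.

```python
# First-order dual-frequency relation: the L2-L1 pseudorange difference is
# the differential ionospheric delay, which converts to slant TEC.
# P1/P2 values are illustrative; biases and multipath are ignored here.
F1, F2 = 1575.42e6, 1227.60e6        # GPS L1/L2 carrier frequencies, Hz

def slant_tec(p1_m: float, p2_m: float) -> float:
    """Slant TEC in TECU (1 TECU = 1e16 electrons/m^2) from the ionospheric
    delay difference between the two pseudoranges, in metres."""
    k = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
    return k * (p2_m - p1_m) / 1e16

print(f"{slant_tec(20_000_000.0, 20_000_004.3):.1f} TECU")  # ~40.9 TECU
```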
Sheng, Li; Wang, Zidong; Zou, Lei; Alsaadi, Fuad E
2017-10-01
In this paper, the event-based finite-horizon H∞ state estimation problem is investigated for a class of discrete time-varying stochastic dynamical networks with state- and disturbance-dependent noises [also called (x,v)-dependent noises]. An event-triggered scheme is proposed to decrease the frequency of the data transmission between the sensors and the estimator, where the signal is transmitted only when certain conditions are satisfied. The purpose of the problem addressed is to design a time-varying state estimator in order to estimate the network states through available output measurements. By employing the completing-the-square technique and the stochastic analysis approach, sufficient conditions are established to ensure that the error dynamics of the state estimation satisfies a prescribed H∞ performance constraint over a finite horizon. The desired estimator parameters can be designed via solving coupled backward recursive Riccati difference equations. Finally, a numerical example is exploited to demonstrate the effectiveness of the developed state estimation scheme.
Li, Y; Chappell, A; Nyamdavaa, B; Yu, H; Davaasuren, D; Zoljargal, K
2015-03-01
The (137)Cs technique for estimating net time-integrated soil redistribution is valuable for understanding the factors controlling soil redistribution by all processes. The literature on this technique is dominated by studies of individual fields and describes its typically time-consuming nature. We contend that the community making these studies has inappropriately assumed that many (137)Cs measurements are required, and hence that estimates of net soil redistribution can only be made at the field scale. Here, we support future studies of (137)Cs-derived net soil redistribution in applying their often limited resources across scales of variation (field, catchment, region, etc.) without compromising the quality of the estimates at any scale. We describe a hybrid, design-based and model-based, stratified random sampling design with composites to estimate the sampling variance, together with a cost model for fieldwork and laboratory measurements. Geostatistical mapping of net (1954-2012) soil redistribution, as a case study on the Chinese Loess Plateau, is compared with estimates for several other sampling designs popular in the literature. We demonstrate the cost-effectiveness of the hybrid design for spatial estimation of net soil redistribution. To demonstrate the limitations of current sampling approaches in cutting across scales of variation, we extrapolate our estimate of net soil redistribution across the region and show that, for the same resources, estimates from many fields could have been provided, which would elucidate the cause of differences within and between regional estimates. We recommend that future studies carefully evaluate the sampling design and consider the opportunity to investigate (137)Cs-derived net soil redistribution across scales of variation.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross-power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
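The following minimal, non-adaptive Python sketch shows the core relation the method exploits: a pure time delay between coherent signals appears as a linear slope in the cross-power-spectrum phase, so fitting that slope recovers the delay. The iterative adaptive weighting and cost function of the reported method are omitted.

```python
import numpy as np

# A pure delay makes the cross-power-spectrum phase linear in frequency:
# phase(f) = -2*pi*f*delay, so a least-squares slope fit recovers the delay.
fs = 1000.0
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = np.roll(x, 4)                            # delay x by 4 samples (4 ms)

cross = np.fft.rfft(y) * np.conj(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, 1 / fs)
phase = np.unwrap(np.angle(cross))
slope = np.polyfit(freqs, phase, 1)[0]       # rad per Hz
print(f"estimated delay: {-slope / (2 * np.pi) * 1000:.2f} ms")  # ~4.00 ms
```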
Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B
1998-01-01
Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
CMB EB and TB cross-spectrum estimation via pseudospectrum techniques
NASA Astrophysics Data System (ADS)
Grain, J.; Tristram, M.; Stompor, R.
2012-10-01
We discuss methods for estimating the EB and TB spectra of cosmic microwave background anisotropy maps covering a limited sky area. Such odd-parity correlations are expected to vanish whenever parity is not broken. As this is indeed the case in the standard cosmologies, any evidence to the contrary would have a profound impact on our theories of the early Universe. Such correlations could also become a sensitive diagnostic of some particularly insidious instrumental systematics. In this work we introduce three different unbiased estimators based on the so-called standard and pure pseudo-spectrum techniques and later assess their performance by means of extensive Monte Carlo simulations performed for different experimental configurations. We find that a hybrid approach, combining a pure estimate of B-mode multipoles with a standard one for E-mode (or T) multipoles, leads to the smallest error bars for the EB (or TB, respectively) spectra as well as for the three other polarization-related angular power spectra (i.e., EE, BB, and TE). However, if both E and B multipoles are estimated using the pure technique, the loss of precision for the EB spectrum is not larger than ~30%. Moreover, for the experimental configurations considered here, the statistical uncertainties (due to sampling variance and instrumental noise) of the pseudo-spectrum estimates are at most a factor of ~1.4 for the TT, EE, and TE spectra, and a factor of ~2 for the BB, TB, and EB spectra, higher than the most optimistic Fisher estimate of the variance.
11 CFR 9036.4 - Commission review of submissions.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., in conducting its review, may utilize statistical sampling techniques. Based on the results of its... nonmatchable and the reason that it is not matchable; or if statistical sampling is used, the estimated amount...
On a Formal Tool for Reasoning About Flight Software Cost Analysis
NASA Technical Reports Server (NTRS)
Spagnuolo, John N., Jr.; Stukes, Sherry A.
2013-01-01
A report focuses on the development of flight software (FSW) cost estimates for 16 Discovery-class missions at JPL. The techniques and procedures developed enabled streamlining of the FSW analysis process, and provided instantaneous confirmation that the data and processes used for these estimates were consistent across all missions. The research provides direction as to how to build a prototype rule-based system for FSW cost estimation that would provide (1) FSW cost estimates, (2) explanation of how the estimates were arrived at, (3) mapping of costs, (4) mathematical trend charts with explanations of why the trends are what they are, (5) tables with ancillary FSW data of interest to analysts, (6) a facility for expert modification/enhancement of the rules, and (7) a basis for conceptually convenient expansion into more complex, useful, and general rule-based systems.