An Introduction to Goodness of Fit for PMU Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riepnieks, Artis; Kirkham, Harold
2017-10-01
New results of measurements of phasor-like signals are presented based on our previous work on the topic. In this document an improved estimation method is described. The algorithm (which is realized in MATLAB software) is discussed. We examine the effect of noisy and distorted signals on the Goodness of Fit metric. The estimation method is shown to perform very well with clean data and with a measurement window as short as half a cycle and as few as 5 samples per cycle. The Goodness of Fit decreases predictably with added phase noise, and seems to be acceptable even with visible distortion in the signal. While the exact results we obtain are specific to our method of estimation, the Goodness of Fit method could be implemented in any phasor measurement unit.
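The abstract does not give the exact Goodness of Fit formula. The following is a minimal illustrative sketch (in Python rather than the authors' MATLAB) of the general idea: fit a single-frequency phasor to a short sample window by least squares and report a residual-based fit metric in dB. The function name, the window setup, and the dB form of the metric are assumptions, not the published method.

```python
import numpy as np

def estimate_phasor(samples, t, f0=60.0):
    # Least-squares fit of A*cos(2*pi*f0*t + phi) over a short window
    X = np.column_stack([np.cos(2 * np.pi * f0 * t), -np.sin(2 * np.pi * f0 * t)])
    coef, *_ = np.linalg.lstsq(X, samples, rcond=None)
    amplitude = np.hypot(coef[0], coef[1])
    phase = np.arctan2(coef[1], coef[0])
    residual = samples - X @ coef
    # Residual-based goodness-of-fit metric in dB (larger = better fit)
    gof_db = 10 * np.log10(np.sum(samples ** 2) / np.sum(residual ** 2))
    return amplitude, phase, gof_db

# Half-cycle window with 5 samples per cycle, as mentioned in the abstract
fs, f0 = 300.0, 60.0
t = np.arange(3) / fs
signal = 1.0 * np.cos(2 * np.pi * f0 * t + 0.3) + 0.01 * np.random.randn(t.size)
print(estimate_phasor(signal, t, f0))
```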
Radar modulation classification using time-frequency representation and nonlinear regression
NASA Astrophysics Data System (ADS)
De Luigi, Christophe; Arques, Pierre-Yves; Lopez, Jean-Marc; Moreau, Eric
1999-09-01
In a naval electronic environment, pulses emitted by radars are collected by ESM receivers. For most of them the intrapulse signal is modulated by a particular law. To help the classical identification process, classification and estimation of this modulation law is applied to the intrapulse signal measurements. To estimate the time-varying frequency of a signal corrupted by additive noise with good accuracy, one method has been chosen. This method consists of computing the Wigner distribution; the instantaneous frequency is then estimated from the peak location of the distribution. Bias and variance of the estimator are evaluated by computer simulations. In an estimated sequence of frequencies, we assume the presence of both false and good estimates, and the errors are assumed to be Gaussian distributed. A robust nonlinear regression method based on the Levenberg-Marquardt algorithm, using a maximum likelihood estimator, is then applied to these estimated frequencies. The performance of the method is tested using various modulation laws and different signal-to-noise ratios.
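As a rough sketch of the final regression step (not the paper's robust maximum-likelihood formulation), the example below fits a hypothetical linear-FM modulation law to simulated instantaneous-frequency estimates using SciPy's Levenberg-Marquardt-based curve_fit; the modulation law and all numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical linear-FM modulation law: f(t) = f0 + k*t
def modulation_law(t, f0, k):
    return f0 + k * t

# Pretend inst_freq[i] is the peak location of the Wigner distribution at time t[i]
t = np.linspace(0.0, 1e-6, 64)
inst_freq = modulation_law(t, 10e6, 4e12) + 50e3 * np.random.randn(t.size)

# curve_fit performs a Levenberg-Marquardt least-squares fit by default
params, cov = curve_fit(modulation_law, t, inst_freq, p0=[5e6, 1e12])
print(params)  # estimated [f0, k]
```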
Lim, Chul-Hyun; Kim, Won Chul; Kim, Jin Soo; Cho, Yu Kyung; Park, Jae Myung; Lee, In Seok; Kim, Sang Woo; Choi, Kyu Yong; Chung, In-Sik
2012-01-01
Background/Aims It is believed that disposable biopsy forceps are more costly than reusable biopsy forceps. In this study, we evaluated performance and cost of disposable forceps versus reusable forceps in esophagogastroduodenoscopic biopsy. Methods Between October 2009 and July 2010, we enrolled 200 patients undergoing esophagogastroduodenoscopic biopsy at Seoul St. Mary's Hospital. Biopsies were performed with 100 disposable or 5 reusable forceps by random assignment. Seventy-five additional patients were studied to estimate durability of reusable forceps. The assisting nurses estimated the performance of the forceps. The evaluation of costs included purchase prices and reprocessing costs. The adequacy of the sample was estimated according to the diameter of the obtained tissue. Results Performance of disposable forceps was estimated as excellent in 97.0%, good in 2.0% and adequate in 1.0%. Reusable forceps were estimated as excellent in 36.0%, good in 36.0%, adequate in 25.1% and inadequate in 2.9%. The performance of reusable forceps declined with the number of uses. The reprocessing cost of reusable forceps for one biopsy session was calculated as ₩8,021. The adequacy of the sample was excellent for both forceps. Conclusions Disposable forceps showed excellent performance. Considering the reprocessing costs of reusable forceps, usage of disposable forceps with a low price should be considered. PMID:22741133
Evaluation of the MV (CAPON) Coherent Doppler Lidar Velocity Estimator
NASA Technical Reports Server (NTRS)
Lottman, B.; Frehlich, R.
1997-01-01
The performance of the CAPON velocity estimator for coherent Doppler lidar is determined for typical space-based and ground-based parameter regimes. Optimal input parameters for the algorithm were determined for each regime. For weak signals, performance is described by the standard deviation of the good estimates and the fraction of outliers. For strong signals, the fraction of outliers is zero. Numerical effort was also determined.
Design of Supersonic Transport Flap Systems for Thrust Recovery at Subsonic Speeds
NASA Technical Reports Server (NTRS)
Mann, Michael J.; Carlson, Harry W.; Domack, Christopher S.
1999-01-01
A study of the subsonic aerodynamics of hinged flap systems for supersonic cruise commercial aircraft has been conducted using linear attached-flow theory that has been modified to include an estimate of attainable leading edge thrust and an approximate representation of vortex forces. Comparisons of theoretical predictions with experimental results show that the theory gives a reasonably good and generally conservative estimate of the performance of an efficient flap system and provides a good estimate of the leading and trailing-edge deflection angles necessary for optimum performance. A substantial reduction in the area of the inboard region of the leading edge flap has only a minor effect on the performance and the optimum deflection angles. Changes in the size of the outboard leading-edge flap show that performance is greatest when this flap has a chord equal to approximately 30 percent of the wing chord. A study was also made of the performance of various combinations of individual leading and trailing-edge flaps, and the results show that aerodynamic efficiencies as high as 85 percent of full suction are predicted.
Estimation of stochastic volatility by using Ornstein-Uhlenbeck type models
NASA Astrophysics Data System (ADS)
Mariani, Maria C.; Bhuiyan, Md Al Masum; Tweneboah, Osei K.
2018-02-01
In this study, we develop a technique for estimating the stochastic volatility (SV) of a financial time series by using Ornstein-Uhlenbeck type models. Using the daily closing prices from developed and emergent stock markets, we conclude that the incorporation of stochastic volatility into the time-varying parameter estimation significantly improves the forecasting performance via Maximum Likelihood Estimation. Furthermore, our estimation algorithm is feasible with large data sets and has good convergence properties.
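The abstract does not specify the particular OU-type model, so the sketch below shows only generic maximum likelihood estimation for a basic Ornstein-Uhlenbeck process using its exact Gaussian transition density; the placeholder series and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def ou_negloglik(params, x, dt):
    # Exact Gaussian transition density of an Ornstein-Uhlenbeck process
    lam, mu, sigma = params
    if lam <= 0 or sigma <= 0:
        return np.inf
    a = np.exp(-lam * dt)
    mean = x[:-1] * a + mu * (1 - a)
    var = sigma ** 2 * (1 - a ** 2) / (2 * lam)
    resid = x[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

# Placeholder series; in practice x could be, e.g., a log realized-volatility series
x = np.cumsum(0.01 * np.random.randn(500))
fit = minimize(ou_negloglik, x0=[1.0, x.mean(), 0.1], args=(x, 1.0), method="Nelder-Mead")
print(fit.x)  # [mean-reversion rate, long-run level, volatility of volatility]
```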
Fast and unbiased estimator of the time-dependent Hurst exponent.
Pianese, Augusto; Bianchi, Sergio; Palazzo, Anna Maria
2018-03-01
We combine two existing estimators of the local Hurst exponent to improve both the goodness of fit and the computational speed of the algorithm. An application with simulated time series is implemented, and a Monte Carlo simulation is performed to provide evidence of the improvement.
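The two estimators that are combined are not identified in the abstract. For illustration only, here is a simple sliding-window estimator of a time-dependent Hurst exponent based on the scaling of increment variances; it is not the authors' combined estimator.

```python
import numpy as np

def local_hurst(x, window=256, lags=(1, 2, 4, 8, 16)):
    # Var[x(t+lag) - x(t)] ~ lag^(2H); fit the log-log slope within each window
    lags = np.asarray(lags)
    H = np.full(x.size, np.nan)
    for end in range(window, x.size + 1):
        seg = x[end - window:end]
        v = [np.var(seg[lag:] - seg[:-lag]) for lag in lags]
        slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
        H[end - 1] = slope / 2
    return H

# Sanity check: Brownian motion should give H close to 0.5
bm = np.cumsum(np.random.randn(2000))
print(np.nanmean(local_hurst(bm)))
```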
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2015-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
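The engine model and sensor data are not given in this abstract, so the sketch below only illustrates the exhaustive-search idea for the maximum a posteriori case: score each candidate sensor suite by the trace of the posterior error covariance of the health parameters and keep the minimizer. The matrices are random placeholders, not the NASA engine model.

```python
import numpy as np
from itertools import combinations

def map_error_covariance(H, R, P0):
    # Posterior covariance of a linear-Gaussian MAP estimate of health parameters h
    # from measurements y = H h + v, with v ~ N(0, R) and prior h ~ N(0, P0)
    return np.linalg.inv(np.linalg.inv(P0) + H.T @ np.linalg.inv(R) @ H)

def best_suite(H_all, R_all, P0, baseline, optional, n_extra):
    # Exhaustive search: minimize the summed squared estimation error (trace of covariance)
    best = None
    for extra in combinations(optional, n_extra):
        idx = sorted(baseline + list(extra))
        P = map_error_covariance(H_all[idx, :], np.diag(R_all[idx]), P0)
        score = np.trace(P)
        if best is None or score < best[0]:
            best = (score, idx)
    return best

# Placeholder problem: 4 health parameters, 8 candidate sensors
rng = np.random.default_rng(0)
H_all = rng.standard_normal((8, 4))
R_all = np.full(8, 0.1)
print(best_suite(H_all, R_all, np.eye(4), baseline=[0, 1, 2],
                 optional=[3, 4, 5, 6, 7], n_extra=2))
```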
NASA Technical Reports Server (NTRS)
Gay, Robert S.; Holt, Greg N.; Zanetti, Renato
2016-01-01
This paper details the post-flight navigation performance assessment of the Orion Exploration Flight Test-1 (EFT-1). Results of each flight phase are presented: Ground Align, Ascent, Orbit, and Entry Descent and Landing. This study examines the on-board Kalman Filter uncertainty along with state deviations relative to the Best Estimated Trajectory (BET). Overall the results show that the Orion Navigation System performed as well as or better than expected. Specifically, the Global Positioning System (GPS) measurement availability was significantly better than anticipated at high altitudes. In addition, attitude estimation via processing GPS measurements along with Inertial Measurement Unit (IMU) data performed very well and maintained good attitude estimates throughout the mission.
NAVO MSRC Navigator. Spring 2001
2001-01-01
preparations for UGC 2001 are almost complete. This year's conference promises to be a good one, affording us the opportunity to extend some Gulf Coast...current market pricing, and other reasonable estimates would not significantly alter the predicted trends. The performance model estimates CPU
Near field Rayleigh wave on soft porous layers.
Geebelen, N; Boeckx, L; Vermeir, G; Lauriks, W; Allard, J F; Dazel, O
2008-03-01
Simulations performed for a typical semi-infinite reticulated plastic foam saturated by air show that, at distances less than three Rayleigh wavelengths from the area of mechanical excitation by a circular source, the normal frame velocity is close to the Rayleigh pole contribution. Simulated measurements show that a good order of magnitude estimate of the phase speed and damping can be obtained at small distances from the source. Simulations are also performed for layers of finite thickness, where the phase velocity and damping depend on frequency. They indicate that the normal frame velocity at small distances from the source is always close to the Rayleigh pole contribution and that a good order of magnitude estimate of the phase speed of the Rayleigh wave can be obtained at small distances from the source. Furthermore, simulations show that precise measurements of the damping of the Rayleigh wave need larger distances. Measurements performed on a layer of finite thickness confirm these trends.
CDGPS-Based Relative Navigation for Multiple Spacecraft
NASA Technical Reports Server (NTRS)
Mitchell, Megan Leigh
2004-01-01
This thesis investigates the use of Carrier-phase Differential GPS (CDGPS) in relative navigation filters for formation flying spacecraft. This work analyzes the relationship between the Extended Kalman Filter (EKF) design parameters and the resulting estimation accuracies, and in particular, the effect of the process and measurement noises on the semimajor axis error. This analysis clearly demonstrates that CDGPS-based relative navigation Kalman filters yield good estimation performance without satisfying the strong correlation property that previous work had associated with "good" navigation filters. Several examples are presented to show that the Kalman filter can be forced to create solutions with stronger correlations, but these always result in larger semimajor axis errors. These linear and nonlinear simulations also demonstrated the crucial role of the process noise in determining the semimajor axis knowledge. More sophisticated nonlinear models were included to reduce the propagation error in the estimator, but for long time steps and large separations, the EKF, which only uses a linearized covariance propagation, yielded very poor performance. In contrast, the CDGPS-based Unscented Kalman relative navigation Filter (UKF) handled the dynamic and measurement nonlinearities much better and yielded far superior performance than the EKF. The UKF produced good estimates for scenarios with long baselines and time steps for which the EKF would diverge rapidly. A hardware-in-the-loop testbed that is compatible with the Spirent Simulator at NASA GSFC was developed to provide a very flexible and robust capability for demonstrating CDGPS technologies in closed-loop. This extended previous work to implement the decentralized relative navigation algorithms in real time.
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
Evaluating Principal Surrogate Markers in Vaccine Trials in the Presence of Multiphase Sampling
Huang, Ying
2017-01-01
Summary This paper focuses on the evaluation of vaccine-induced immune responses as principal surrogate markers for predicting a given vaccine’s effect on the clinical endpoint of interest. To address the problem of missing potential outcomes under the principal surrogate framework, we can utilize baseline predictors of the immune biomarker(s) or vaccinate uninfected placebo recipients at the end of the trial and measure their immune biomarkers. Examples of good baseline predictors are baseline immune responses when subjects enrolled in the trial have been previously exposed to the same antigen, as in our motivating application of the Zostavax Efficacy and Safety Trial (ZEST). However, laboratory assays of these baseline predictors are expensive and therefore their subsampling among participants is commonly performed. In this paper we develop a methodology for estimating principal surrogate values in the presence of baseline predictor subsampling. Under a multiphase sampling framework, we propose a semiparametric pseudo-score estimator based on conditional likelihood and also develop several alternative semiparametric pseudo-score or estimated likelihood estimators. We derive corresponding asymptotic theories and analytic variance formulas for these estimators. Through extensive numeric studies, we demonstrate good finite sample performance of these estimators and the efficiency advantage of the proposed pseudo-score estimator in various sampling schemes. We illustrate the application of our proposed estimators using data from an immune biomarker study nested within the ZEST trial. PMID:28653408
ERIC Educational Resources Information Center
Wind, Stefanie A.; Engelhard, George, Jr.; Wesolowski, Brian
2016-01-01
When good model-data fit is observed, the Many-Facet Rasch (MFR) model acts as a linking and equating model that can be used to estimate student achievement, item difficulties, and rater severity on the same linear continuum. Given sufficient connectivity among the facets, the MFR model provides estimates of student achievement that are equated to…
NASA Technical Reports Server (NTRS)
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates, obtained by a recursive stochastic algorithm, of the inverse of the filter input data covariance matrix. The SIR performance with sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
Spectrum-based estimators of the bivariate Hurst exponent
NASA Astrophysics Data System (ADS)
Kristoufek, Ladislav
2014-12-01
We discuss two alternative spectrum-based estimators of the bivariate Hurst exponent in the power-law cross-correlations setting, the cross-periodogram and local X-Whittle estimators, as generalizations of their univariate counterparts. As the spectrum-based estimators depend on the part of the spectrum taken into consideration during estimation, a simulation study showing performance of the estimators under varying bandwidth parameter as well as correlation between processes and their specification is provided as well. These estimators are less biased than the existing averaged periodogram estimator, which, however, has slightly lower variance. The spectrum-based estimators can serve as a good complement to the popular time domain estimators.
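As a rough illustration of the spectral idea (not the exact estimators studied in the paper), the sketch below regresses the log cross-periodogram on log frequency over the lowest Fourier frequencies and converts the slope to a bivariate Hurst exponent, assuming |I_xy(f)| ~ f^(1-2H_xy); the bandwidth choice is an assumption.

```python
import numpy as np

def cross_periodogram_hurst(x, y, m=None):
    # Slope of log|I_xy(f)| versus log f over the m lowest Fourier frequencies
    n = len(x)
    m = m or int(n ** 0.5)
    freqs = np.fft.rfftfreq(n)[1:m + 1]
    X, Y = np.fft.rfft(x - np.mean(x)), np.fft.rfft(y - np.mean(y))
    I_xy = np.abs(X[1:m + 1] * np.conj(Y[1:m + 1])) / (2 * np.pi * n)
    slope, _ = np.polyfit(np.log(freqs), np.log(I_xy), 1)
    return (1 - slope) / 2

# Two correlated white-noise series should give H_xy close to 0.5
rng = np.random.default_rng(2)
z = rng.standard_normal(4096)
print(cross_periodogram_hurst(z + rng.standard_normal(4096),
                              z + rng.standard_normal(4096)))
```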
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
K. Novick; J. Walker; W.S. Chan; A. Schmidt; C. Sobek; J.M. Vose
2013-01-01
A new class of enclosed path gas analyzers suitable for eddy covariance applications combines the advantages of traditional closed-path systems (small density corrections, good performance in poor weather) and open-path systems (good spectral response, low power requirements), and permits estimates of instantaneous gas mixing ratio. Here, the extent to which these...
Leyrat, Clémence; Seaman, Shaun R; White, Ian R; Douglas, Ian; Smeeth, Liam; Kim, Joseph; Resche-Rigon, Matthieu; Carpenter, James R; Williamson, Elizabeth J
2017-01-01
Inverse probability of treatment weighting is a popular propensity score-based approach to estimate marginal treatment effects in observational studies at risk of confounding bias. A major issue when estimating the propensity score is the presence of partially observed covariates. Multiple imputation is a natural approach to handle missing data on covariates: covariates are imputed and a propensity score analysis is performed in each imputed dataset to estimate the treatment effect. The treatment effect estimates from each imputed dataset are then combined to obtain an overall estimate. We call this method MIte. However, an alternative approach has been proposed, in which the propensity scores are combined across the imputed datasets (MIps). Therefore, there are remaining uncertainties about how to implement multiple imputation for propensity score analysis: (a) should we apply Rubin's rules to the inverse probability of treatment weighting treatment effect estimates or to the propensity score estimates themselves? (b) does the outcome have to be included in the imputation model? (c) how should we estimate the variance of the inverse probability of treatment weighting estimator after multiple imputation? We studied the consistency and balancing properties of the MIte and MIps estimators and performed a simulation study to empirically assess their performance for the analysis of a binary outcome. We also compared the performance of these methods to complete case analysis and the missingness pattern approach, which uses a different propensity score model for each pattern of missingness, and a third multiple imputation approach in which the propensity score parameters are combined rather than the propensity scores themselves (MIpar). Under a missing at random mechanism, complete case and missingness pattern analyses were biased in most cases for estimating the marginal treatment effect, whereas multiple imputation approaches were approximately unbiased as long as the outcome was included in the imputation model. Only MIte was unbiased in all the studied scenarios and Rubin's rules provided good variance estimates for MIte. The propensity score estimated in the MIte approach showed good balancing properties. In conclusion, when using multiple imputation in the inverse probability of treatment weighting context, MIte with the outcome included in the imputation model is the preferred approach.
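A minimal sketch of the MIte idea follows: estimate the IPTW effect separately in each imputed dataset and pool the point estimates. The imputation step itself and the Rubin's-rules variance combination are omitted, and the synthetic datasets merely stand in for imputed data.

```python
import numpy as np
import statsmodels.api as sm

def ipw_ate(X, treat, y):
    # Inverse-probability-of-treatment-weighted marginal treatment effect
    ps = sm.Logit(treat, sm.add_constant(X)).fit(disp=0).predict()
    w = treat / ps + (1 - treat) / (1 - ps)
    return (np.average(y[treat == 1], weights=w[treat == 1])
            - np.average(y[treat == 0], weights=w[treat == 0]))

def mite(imputed_datasets):
    # MIte: IPTW effect in each imputed dataset, then pool the point estimates
    # (full Rubin's rules also combine within- and between-imputation variance)
    estimates = [ipw_ate(X, t, y) for (X, t, y) in imputed_datasets]
    return np.mean(estimates), estimates

rng = np.random.default_rng(1)
def make_data():
    X = rng.standard_normal((500, 2))
    treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
    y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * treat + X[:, 1]))))
    return X, treat, y

# Pretend these are M = 3 imputed versions of the same partially observed data
print(mite([make_data() for _ in range(3)]))
```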
Terakado, Shingo; Glass, Thomas R; Sasaki, Kazuhiro; Ohmura, Naoya
2014-01-01
A simple new model for estimating the screening performance (false positive and false negative rates) of a given test for a specific sample population is presented. The model is shown to give good results on a test population, and is used to estimate the performance on a sampled population. Using the model developed in conjunction with regulatory requirements and the relative costs of the confirmatory and screening tests allows evaluation of the screening test's utility in terms of cost savings. Testers can use the methods developed to estimate the utility of a screening program using available screening tests with their own sample populations.
How Accurate Are Infrared Luminosities from Monochromatic Photometric Extrapolation?
NASA Astrophysics Data System (ADS)
Lin, Zesen; Fang, Guanwen; Kong, Xu
2016-12-01
Template-based extrapolations from only one photometric band can be a cost-effective method to estimate the total infrared (IR) luminosities (L_IR) of galaxies. By utilizing multi-wavelength data covering 0.35-500 μm in the GOODS-North and GOODS-South fields, we investigate the accuracy of this monochromatic extrapolated L_IR based on three IR spectral energy distribution (SED) templates out to z ~ 3.5. We find that the Chary & Elbaz template provides the best estimate of L_IR in Herschel/Photodetector Array Camera and Spectrometer (PACS) bands, while the Dale & Helou template performs best in Herschel/Spectral and Photometric Imaging Receiver (SPIRE) bands. To estimate L_IR, we suggest that extrapolations from the longest available PACS band based on the Chary & Elbaz template can be a good estimator. Moreover, if the PACS measurement is unavailable, extrapolations from SPIRE observations based on the Dale & Helou template can also provide a statistically unbiased estimate for galaxies at z ≲ 2. The emission in the rest-frame 10-100 μm range of the IR SED can be well described by all three templates, but only the Dale & Helou template shows a nearly unbiased estimate of the emission in the rest-frame submillimeter part.
RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
Kempers, Jari; Ketting, Evert; Lesco, Galina
2014-07-21
Youth-friendly sexual and reproductive health services (YFHS) have high priority in many countries. Yet, little is known about the cost and cost-effectiveness of good quality YFHS in resource limited settings. This paper analyses retrospectively costs and potential cost-effectiveness of four well performing youth-friendly health centres (YFHC) in Moldova. This study assesses: (1) what were the costs of YFHSs at centre level, (2) how much would scaling-up to a national good quality YFHS programme cost, and (3) was the programme potentially cost-effective? Four well performing YFHCs were selected for the study. YFHS costs were analysed per centre, funding source, service and person reached. The costing results were extrapolated to estimate cost of a good quality national YFHS programme in Moldova. A threshold analysis was carried out to estimate the required impact level for the YFHSs to break-even (become cost saving). Average annual cost of a well performing YFHC was USD 26,000 in 2011. 58% was financed by the National Health Insurance Company and the rest by external donors (42%). Personnel salaries were the largest expense category (47%). The annual implementation costs of a good quality YFHSs in all 38 YFHCs of Moldova were estimated to be USD 1.0 million. The results of the threshold analysis indicate that the annual break-even impact points in a YFHC for: 1) STI services would be >364 averted STIs, 2) early pregnancy and contraceptive services >178 averted unwanted pregnancies, and 3) HIV services only >0.65 averted new HIV infections. The costing results highlight the following: 1) significant additional resources would be required for implementation of a good quality national YFHS programme, 2) the four well performing YFHCs rely heavily on external funding (42%), 3) which raises questions about financial sustainability of the programme. At the same time results of the threshold analysis are encouraging. The result suggest that, together the three SRH components (STI, early pregnancy and contraception, and HIV) are potentially cost saving. High cost savings resulting from averted lifetime treatment cost of HIV infected persons are likely to off-set the costs of STIs and unwanted pregnancies.
Li, Lingling; Kulldorff, Martin; Russek-Cohen, Estelle; Kawai, Alison Tse; Hua, Wei
2015-12-01
The self-controlled risk interval design is commonly used to assess the association between an acute exposure and an adverse event of interest, implicitly adjusting for fixed, non-time-varying covariates. Explicit adjustment needs to be made for time-varying covariates, for example, age in young children. It can be performed via either a fixed or random adjustment. The random-adjustment approach can provide valid point and interval estimates but requires access to individual-level data for an unexposed baseline sample. The fixed-adjustment approach does not have this requirement and will provide a valid point estimate but may underestimate the variance. We conducted a comprehensive simulation study to evaluate their performance. We designed the simulation study using empirical data from the Food and Drug Administration-sponsored Mini-Sentinel Post-licensure Rapid Immunization Safety Monitoring Rotavirus Vaccines and Intussusception study in children 5-36.9 weeks of age. The time-varying confounder is age. We considered a variety of design parameters including sample size, relative risk, time-varying baseline risks, and risk interval length. The random-adjustment approach has very good performance in almost all considered settings. The fixed-adjustment approach can be used as a good alternative when the number of events used to estimate the time-varying baseline risks is at least the number of events used to estimate the relative risk, which is almost always the case. We successfully identified settings in which the fixed-adjustment approach can be used as a good alternative and provided guidelines on the selection and implementation of appropriate analyses for the self-controlled risk interval design. Copyright © 2015 John Wiley & Sons, Ltd.
Hot water from the sun: a consumer guide to solar water heating
DOE Office of Scientific and Technical Information (OSTI.GOV)
McPherson, Beth
2005-02-15
The following topics are discussed: how solar water heaters work, making good use of the sun, estimating costs and savings, choosing the right dealer/installer, choosing the right system, warranties and contracts, getting a good installation, and living with your solar energy system. The appendices discuss system performance and durability, and provide sources of additional information on solar energy and its applications. (MHR)
Modeling study of air pollution due to the manufacture of export goods in China's Pearl River Delta.
Streets, David G; Yu, Carolyne; Bergin, Michael H; Wang, Xuemei; Carmichael, Gregory R
2006-04-01
The Pearl River Delta is a major manufacturing region on the south coast of China that produces more than $100 billion of goods annually for export to North America, Europe, and other parts of Asia. Considerable air pollution is caused by the manufacturing industries themselves and by the power plants, trucks, and ships that support them. We estimate that 10-40% of emissions of primary SO2, NO(x), RSP, and VOC in the region are caused by export-related activities. Using the STEM-2K1 atmospheric transport model, we estimate that these emissions contribute 5-30% of the ambient concentrations of SO2, NO(x), NO(z), and VOC in the region. One reason that the exported goods are cheap and therefore attractive to consumers in developed countries is that emission controls are lacking or of low performance. We estimate that state-of-the-art controls could be installed at an annualized cost of $0.3-3 billion, representing 0.3-3% of the value of the goods produced. We conclude that mitigation measures could be adopted without seriously affecting the prices of exported goods and would achieve considerable human health and other benefits in the form of reduced air pollutant concentrations in densely populated urban areas.
Goodness-of-Fit Tests and Nonparametric Adaptive Estimation for Spike Train Analysis
2014-01-01
When dealing with classical spike train analysis, the practitioner often performs goodness-of-fit tests to test whether the observed process is a Poisson process, for instance, or if it obeys another type of probabilistic model (Yana et al. in Biophys. J. 46(3):323–330, 1984; Brown et al. in Neural Comput. 14(2):325–346, 2002; Pouzat and Chaffiol in Technical report, http://arxiv.org/abs/arXiv:0909.2785, 2009). In doing so, there is a fundamental plug-in step, where the parameters of the supposed underlying model are estimated. The aim of this article is to show that plug-in has sometimes very undesirable effects. We propose a new method based on subsampling to deal with those plug-in issues in the case of the Kolmogorov–Smirnov test of uniformity. The method relies on the plug-in of good estimates of the underlying model that have to be consistent with a controlled rate of convergence. Some nonparametric estimates satisfying those constraints in the Poisson or in the Hawkes framework are highlighted. Moreover, they share adaptive properties that are useful from a practical point of view. We show the performance of those methods on simulated data. We also provide a complete analysis with these tools on single unit activity recorded on a monkey during a sensory-motor task. PMID:24742008
Radar cross section models for limited aspect angle windows
NASA Astrophysics Data System (ADS)
Robinson, Mark C.
1992-12-01
This thesis presents a method for building Radar Cross Section (RCS) models of aircraft based on static data taken from limited aspect angle windows. These models statistically characterize static RCS. This is done to show that a limited number of samples can be used to effectively characterize static aircraft RCS. The optimum models are determined by performing both a Kolmogorov and a Chi-Square goodness-of-fit test comparing the static RCS data with a variety of probability density functions (pdf) that are known to be effective at approximating the static RCS of aircraft. The optimum parameter estimator is also determined by the goodness-of-fit tests if there is a difference in pdf parameters obtained by the Maximum Likelihood Estimator (MLE) and the Method of Moments (MoM) estimators.
Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P
2018-01-01
Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA-model is validly applicable also under dual task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. 24 subjects of middle to higher age performed a continuous tapping task, and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing capacity and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.
Zhang, Yongsheng; Wei, Heng; Zheng, Kangning
2017-01-01
Considering that metro network expansion brings us more alternative routes, it is attractive to integrate the impacts of the routes set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated as three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the routes set on utility; following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance respectively. Considering the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in hierarchical Bayes form and a Metropolis-Hastings sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows a good forecasting performance for the route choice probability calculation and a good application performance for transfer flow volume prediction. PMID:28591188
Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation
NASA Astrophysics Data System (ADS)
Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep
2011-05-01
This paper presents an efficient hardware realization of a multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on a Xilinx Virtex™-4 XC4VLX60 field-programmable gate array (FPGA) device in a modular approach which simplifies and eases hardware updates, and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method through MATLAB simulations indicates a 2 dB improvement in terms of SNR compared to LS estimation. Moreover, a complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good channel estimation solution.
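The abstract does not say which factorization is used; one common way to avoid explicitly inverting X^H X in LS channel estimation is a QR factorization, sketched below with a hypothetical pilot matrix.

```python
import numpy as np

def ls_channel_estimate_qr(X, y):
    # Least-squares estimate of h in y = X h + n via QR factorization,
    # avoiding explicit inversion of X^H X
    Q, R = np.linalg.qr(X)
    return np.linalg.solve(R, Q.conj().T @ y)

# Hypothetical pilot matrix: 8 pilot observations, 3 channel taps
rng = np.random.default_rng(3)
X = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
h_true = np.array([1.0 + 0.2j, -0.4j, 0.1])
y = X @ h_true + 0.01 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
print(ls_channel_estimate_qr(X, y))
```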
Single snapshot DOA estimation
NASA Astrophysics Data System (ADS)
Häcker, P.; Yang, B.
2010-10-01
In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
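For a concrete feel of what single-snapshot estimation involves, here is a sketch of the deterministic maximum likelihood criterion for two sources on a uniform linear array: project the lone snapshot onto the candidate two-source signal subspace and keep the angle pair that captures the most energy. The array geometry, angle grid, and scenario are assumptions, not the paper's setup.

```python
import numpy as np
from itertools import combinations

def steering(theta_deg, n_elem, d=0.5):
    # Uniform linear array steering vector (element spacing d in wavelengths)
    k = np.arange(n_elem)
    return np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def dml_two_targets(x, n_elem, grid):
    # Deterministic ML from a single snapshot: maximize the energy of x
    # projected onto the span of the two candidate steering vectors
    best = (-np.inf, None)
    for t1, t2 in combinations(grid, 2):
        A = np.column_stack([steering(t1, n_elem), steering(t2, n_elem)])
        P = A @ np.linalg.pinv(A)
        score = np.real(x.conj() @ P @ x)
        if score > best[0]:
            best = (score, (t1, t2))
    return best[1]

n = 8
x = steering(-10, n) + 0.5 * steering(20, n) \
    + 0.05 * (np.random.randn(n) + 1j * np.random.randn(n))
print(dml_two_targets(x, n, np.arange(-60, 61, 2)))
```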
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2014-11-01
A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are brought out for comparison. Several validation approaches that can work on the real data without ground truths are specially designed. Experimental results on human body in vivo cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Eckert, W. T.; Mort, K. W.; Jope, J.
1976-01-01
General guidelines are given for the design of diffusers, contractions, corners, and the inlets and exits of non-return tunnels. A system of equations, reflecting the current technology, has been compiled and assembled into a computer program (a user's manual for this program is included) for determining the total pressure losses. The formulation presented is applicable to compressible flow through most closed- or open-throat, single-, double-, or non-return wind tunnels. A comparison of estimated performance with that actually achieved by several existing facilities produced generally good agreement.
Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra
2013-03-01
Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP-facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize performance of these processes. Manufacturing costs were itemized using adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
Information matrix estimation procedures for cognitive diagnostic models.
Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei
2018-03-06
Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
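Both estimators have simple generic forms for any marginal MLE; the sketch below shows them for an arbitrary model given the per-observation score vectors and the Hessian of the log-likelihood at the estimate, with a toy normal-mean check. It is not the CDM-specific implementation described in the paper.

```python
import numpy as np

def observed_info_covariance(hessian):
    # Covariance estimate from the inverse of the observed information matrix
    return np.linalg.inv(-hessian)

def sandwich_covariance(score_matrix, hessian):
    # Sandwich estimator A^{-1} B A^{-1}: A is the observed information,
    # B the empirical cross-product of per-observation score vectors
    A = -hessian
    B = score_matrix.T @ score_matrix
    A_inv = np.linalg.inv(A)
    return A_inv @ B @ A_inv

# Toy check: MLE of a normal mean with known unit variance
x = np.random.randn(200) + 3.0
scores = (x - x.mean()).reshape(-1, 1)   # per-observation scores at the MLE
hess = np.array([[-float(x.size)]])      # Hessian of the log-likelihood
print(observed_info_covariance(hess), sandwich_covariance(scores, hess))
```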
ERIC Educational Resources Information Center
Warkentien, Siri; Silver, David
2016-01-01
Public schools with impressive records of serving lower-performing students are often overlooked because their average test scores, even when students are growing quickly, are lower than scores in schools that serve higher-performing students. Schools may appear to be doing poorly either because baseline achievement is not easily accounted for or…
Performance Prediction of Constrained Waveform Design for Adaptive Radar
2016-11-01
Kullback-Leibler divergence. χ2 Goodness-of-Fit Test: We compute the estimated CDF for both models with 10000 MC trials. For Model 1 we observed a p-value of ... was clearly similar in its physical attributes, but the measures used (Kullback-Leibler, Chi-Square Test and the trace of the covariance) showed ... models' goodness-of-fit we look at three measures: (1) χ2 test, (2) trace of the inverse
Chao, Anne; Chiu, Chun-Huo; Colwell, Robert K; Magnago, Luiz Fernando S; Chazdon, Robin L; Gotelli, Nicholas J
2017-11-01
Estimating the species, phylogenetic, and functional diversity of a community is challenging because rare species are often undetected, even with intensive sampling. The Good-Turing frequency formula, originally developed for cryptography, estimates in an ecological context the true frequencies of rare species in a single assemblage based on an incomplete sample of individuals. Until now, this formula has never been used to estimate undetected species, phylogenetic, and functional diversity. Here, we first generalize the Good-Turing formula to incomplete sampling of two assemblages. The original formula and its two-assemblage generalization provide a novel and unified approach to notation, terminology, and estimation of undetected biological diversity. For species richness, the Good-Turing framework offers an intuitive way to derive the non-parametric estimators of the undetected species richness in a single assemblage, and of the undetected species shared between two assemblages. For phylogenetic diversity, the unified approach leads to an estimator of the undetected Faith's phylogenetic diversity (PD, the total length of undetected branches of a phylogenetic tree connecting all species), as well as a new estimator of undetected PD shared between two phylogenetic trees. For functional diversity based on species traits, the unified approach yields a new estimator of undetected Walker et al.'s functional attribute diversity (FAD, the total species-pairwise functional distance) in a single assemblage, as well as a new estimator of undetected FAD shared between two assemblages. Although some of the resulting estimators have been previously published (but derived with traditional mathematical inequalities), all taxonomic, phylogenetic, and functional diversity estimators are now derived under the same framework. All the derived estimators are theoretically lower bounds of the corresponding undetected diversities; our approach reveals the sufficient conditions under which the estimators are nearly unbiased, thus offering new insights. Simulation results are reported to numerically verify the performance of the derived estimators. We illustrate all estimators and assess their sampling uncertainty with an empirical dataset for Brazilian rain forest trees. These estimators should be widely applicable to many current problems in ecology, such as the effects of climate change on spatial and temporal beta diversity and the contribution of trait diversity to ecosystem multi-functionality. © 2017 by the Ecological Society of America.
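For the single-assemblage species-richness case, the Good-Turing logic leads to the familiar Chao1-type lower bound, sketched below; the phylogenetic (PD) and functional (FAD) generalizations introduced in the paper are not reproduced here.

```python
import numpy as np

def chao1(abundances):
    # Lower bound on true species richness from singletons f1 and doubletons f2,
    # following the Good-Turing frequency formula
    counts = np.asarray([c for c in abundances if c > 0])
    s_obs = counts.size
    f1 = np.sum(counts == 1)
    f2 = np.sum(counts == 2)
    if f2 > 0:
        undetected = f1 ** 2 / (2 * f2)
    else:  # bias-corrected form when no doubletons are observed
        undetected = f1 * (f1 - 1) / 2
    return s_obs + undetected

# Species abundances in a single incomplete sample
sample = [10, 7, 5, 3, 2, 2, 1, 1, 1, 1]
print(chao1(sample))  # 10 observed species, roughly 14 estimated
```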
Evaluation of high fidelity patient simulator in assessment of performance of anaesthetists.
Weller, J M; Bloch, M; Young, S; Maze, M; Oyesola, S; Wyner, J; Dob, D; Haire, K; Durbridge, J; Walker, T; Newble, D
2003-01-01
There is increasing emphasis on performance-based assessment of clinical competence. The High Fidelity Patient Simulator (HPS) may be useful for assessment of clinical practice in anaesthesia, but needs formal evaluation of validity, reliability, feasibility and effect on learning. We set out to assess the reliability of a global rating scale for scoring simulator performance in crisis management. Using a global rating scale, three judges independently rated videotapes of anaesthetists in simulated crises in the operating theatre. Five anaesthetists then independently rated subsets of these videotapes. There was good agreement between raters for medical management, behavioural attributes and overall performance. Agreement was high for both the initial judges and the five additional raters. Using a global scale to assess simulator performance, we found good inter-rater reliability for scoring performance in a crisis. We estimate that two judges should provide a reliable assessment. High fidelity simulation should be studied further for assessing clinical performance.
NASA Astrophysics Data System (ADS)
Makungo, Rachel; Odiyo, John O.
2017-08-01
This study was focused on testing the ability of a coupled linear and non-linear system identification model in estimating groundwater levels. System identification provides an alternative approach for estimating groundwater levels in areas that lack data required by physically-based models. It also overcomes the limitations of physically-based models due to approximations, assumptions and simplifications. Daily groundwater levels for 4 boreholes, rainfall and evaporation data covering the period 2005-2014 were used in the study. Seventy and thirty percent of the data were used to calibrate and validate the model, respectively. Correlation coefficient (R), coefficient of determination (R2), root mean square error (RMSE), percent bias (PBIAS), Nash-Sutcliffe coefficient of efficiency (NSE) and graphical fits were used to evaluate the model performance. Values for R, R2, RMSE, PBIAS and NSE ranged from 0.8 to 0.99, 0.63 to 0.99, 0.01-2.06 m, -7.18 to 1.16 and 0.68 to 0.99, respectively. Comparisons of observed and simulated groundwater levels for calibration and validation runs showed close agreements. The model performance varied mostly from satisfactory to good, very good and excellent. Thus, the model is able to estimate groundwater levels. The calibrated models can reasonably capture the description between input and output variables and can thus be used to estimate long-term groundwater levels.
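The goodness-of-fit statistics named above have standard definitions; a small sketch is given below (the sign convention for PBIAS varies between sources, so the one used here is an assumption).

```python
import numpy as np

def fit_metrics(obs, sim):
    # Common goodness-of-fit metrics for simulated vs. observed series
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    resid = obs - sim
    rmse = np.sqrt(np.mean(resid ** 2))
    pbias = 100.0 * resid.sum() / obs.sum()
    nse = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]
    return {"R": r, "R2": r ** 2, "RMSE": rmse, "PBIAS": pbias, "NSE": nse}

print(fit_metrics([10.2, 10.5, 10.1, 9.8], [10.1, 10.4, 10.3, 9.9]))
```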
A complex valued radial basis function network for equalization of fast time varying channels.
Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R
1999-01-01
This paper presents a complex valued radial basis function (RBF) network for equalization of fast time varying channels. A new method for calculating the centers of the RBF network is given. The method allows fixing the number of RBF centers even as the equalizer order is increased so that a good performance is obtained by a high-order RBF equalizer with small number of centers. Simulations are performed on time varying channels using a Rayleigh fading channel model to compare the performance of our RBF with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and a MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with less computational complexity.
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
Robust neural network with applications to credit portfolio data analysis.
Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun
2010-01-01
In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and a neural network (robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A majorization-minimization (MM) algorithm was developed for optimization. A Monte Carlo simulation study is conducted to assess the performance of the RNN. Comparisons with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrate the advantage of the newly proposed procedure.
Estimating added sugars in US consumer packaged goods: An application to beverages in 2007-08.
Ng, Shu Wen; Bricker, Gregory; Li, Kuo-Ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian
2015-11-01
This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was estimated for 7021 products available in 2007-08 that contain sugar from ten beverage categories. Of these, flavored waters had the lowest added sugar amounts (4.3g/100g), while sweetened dairy and dairy alternative beverages had the smallest percentage of added sugars (65.6% of Total Sugars; 33.8% of Calories). Estimation validity was determined by comparing LP estimated values to NFL values, as well as in a small validation study. LP estimates appeared reasonable compared to NFL values for calories, carbohydrates and total sugars, and performed well in the validation test; however, further work is needed to obtain more definitive conclusions on the accuracy of added sugar estimates in CPGs. As nutrition labeling regulations evolve, this approach can be adapted to test for potential product-specific, category-level, and population-level implications.
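The linear programming step described above can be illustrated with a minimal, hypothetical sketch: given an ingredient list ordered by weight with assumed per-gram nutrient values, find non-negative ingredient amounts per 100 g of product whose nutrient totals come close to the label values, minimizing total absolute deviation via auxiliary variables and enforcing the descending-order constraint implied by the ingredient list. The ingredient names and all numbers below are invented, and the formulation is a simplification of the published approach.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-gram nutrient content (kcal, carbohydrate g, total sugars g)
# for ingredients listed in descending order on a beverage label.
ingredients = ["water", "sugar", "juice concentrate", "citric acid"]
N = np.array([
    [0.00, 0.00, 0.00],   # water
    [3.87, 1.00, 1.00],   # sugar
    [2.90, 0.72, 0.65],   # juice concentrate
    [2.50, 0.00, 0.00],   # citric acid
])
label = np.array([42.0, 10.5, 10.0])   # per 100 g, from the nutrition facts label

n, m = N.shape
# Variables: x (ingredient grams per 100 g), then d+ and d- (deviation per nutrient)
c = np.concatenate([np.zeros(n), np.ones(2 * m)])            # minimize total deviation
A_eq = np.hstack([N.T, -np.eye(m), np.eye(m)])               # N^T x - d+ + d- = label
b_eq = label.copy()
mass = np.concatenate([np.ones(n), np.zeros(2 * m)])[None, :]  # ingredients sum to 100 g
A_eq = np.vstack([A_eq, mass]); b_eq = np.append(b_eq, 100.0)
# Descending-order constraint: x[i+1] <= x[i]
order = np.zeros((n - 1, n + 2 * m))
for i in range(n - 1):
    order[i, i], order[i, i + 1] = -1.0, 1.0
res = linprog(c, A_ub=order, b_ub=np.zeros(n - 1), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(dict(zip(ingredients, res.x[:n].round(2))))
print("estimated added sugar (g/100 g):", round(res.x[1], 2))
```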
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery
2016-10-01
This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimate the orientation of objects lacking axial symmetry is proposed. The suggested algorithm estimates the orientation of a specific, known 3D object based on its 3D model. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object: using the 3D model, a set of training images is gathered by capturing the 3D model from viewpoints evenly distributed on a sphere, with the sphere points distributed according to the geosphere principle. The gathered training image set is used to calculate descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on matching an observed image descriptor against the training image descriptors. Experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies, and the real-time performance of the algorithm in the FPGA-based vision system was demonstrated.
Contour-based object orientation estimation
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel
2016-04-01
Real-time object orientation estimation is a current problem in computer vision. In this paper we propose an approach to estimate the orientation of objects lacking axial symmetry. The proposed algorithm estimates the orientation of a specific, known 3D object, so a 3D model is required for learning. The algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object: using the 3D model, a set of training images is gathered by capturing the 3D model from viewpoints evenly distributed on a sphere. The sphere points are distributed according to the geosphere principle, which minimizes the size of the training image set. The gathered training image set is used to calculate descriptors, which are then used in the estimation stage. The estimation stage focuses on matching an observed image descriptor against the training image descriptors. Experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error below 6°) in all case studies, and real-time performance was also demonstrated.
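The "viewpoints evenly distributed on a sphere" step can be sketched as follows. The papers use the geosphere (subdivided icosahedron) principle; the sketch below instead uses a Fibonacci lattice as a simpler stand-in that also spreads viewpoints nearly uniformly. The function name, viewpoint count and camera distance are illustrative assumptions.

```python
import numpy as np

def fibonacci_sphere_viewpoints(n_views: int, radius: float = 1.0) -> np.ndarray:
    """Return n_views camera positions spread nearly uniformly over a sphere."""
    i = np.arange(n_views)
    golden = (1 + 5 ** 0.5) / 2
    z = 1 - 2 * (i + 0.5) / n_views           # uniform in z => uniform in area
    r = np.sqrt(1 - z ** 2)
    phi = 2 * np.pi * i / golden               # golden-angle increments in azimuth
    return radius * np.column_stack((r * np.cos(phi), r * np.sin(phi), z))

# e.g., 162 viewpoints (the count a twice-subdivided icosahedron would give)
viewpoints = fibonacci_sphere_viewpoints(162, radius=10.0)
print(viewpoints.shape)   # (162, 3) camera positions looking at the model origin
```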
Tracking of electrochemical impedance of batteries
NASA Astrophysics Data System (ADS)
Piret, H.; Granjon, P.; Guillet, N.; Cattin, V.
2016-04-01
This paper presents an evolutionary battery impedance estimation method that can be easily embedded in vehicles or nomadic devices. The proposed method not only allows accurate frequency-domain impedance estimation but also, unlike classical electrochemical impedance spectroscopy methods, tracking of its temporal evolution. Taking into account cost and complexity constraints, we propose to use the existing current-control electronics to perform an evolutionary frequency-domain estimation of the electrochemical impedance. The method uses a simple wideband input signal and relies on a recursive local average of Fourier transforms. The averaging is controlled by a single normalized parameter that manages the trade-off between tracking and estimation performance, allowing the behavior of the estimator to be adapted to variations of the impedance. The advantage of the proposed method is twofold: it is easy to embed in a simple electronic circuit, and the battery impedance estimator is evolutionary. The ability of the method to monitor the impedance over time is demonstrated on a simulator and on a real lithium-ion battery, on which a repeatability study is carried out. The experiments reveal good tracking results and estimation performance as accurate as the usual laboratory approaches.
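A minimal sketch of this kind of estimator (not the authors' implementation): the impedance spectrum is taken as the ratio of voltage-current cross spectrum to current auto spectrum, and a single forgetting factor controls the recursive averaging, i.e. the trade-off between tracking speed and estimation variance. The signal names, block size and forgetting-factor value are assumptions.

```python
import numpy as np

class RecursiveImpedanceEstimator:
    """Track Z(f) = S_vi(f) / S_ii(f) with an exponentially weighted recursive average."""
    def __init__(self, block_size: int, forgetting: float = 0.9):
        self.n = block_size
        self.alpha = forgetting                 # closer to 1 => smoother, slower tracking
        self.s_vi = np.zeros(block_size // 2 + 1, dtype=complex)
        self.s_ii = np.zeros(block_size // 2 + 1)

    def update(self, current_block, voltage_block):
        I = np.fft.rfft(current_block, self.n)
        V = np.fft.rfft(voltage_block, self.n)
        self.s_vi = self.alpha * self.s_vi + (1 - self.alpha) * V * np.conj(I)
        self.s_ii = self.alpha * self.s_ii + (1 - self.alpha) * np.abs(I) ** 2
        return self.s_vi / np.maximum(self.s_ii, 1e-12)   # current impedance estimate

# usage with hypothetical wideband excitation blocks
est = RecursiveImpedanceEstimator(block_size=1024, forgetting=0.95)
rng = np.random.default_rng(0)
i_blk = rng.standard_normal(1024)                          # wideband input current
v_blk = 0.05 * i_blk + 0.01 * rng.standard_normal(1024)    # toy battery response
Z = est.update(i_blk, v_blk)
print(Z[:5])
```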
NASA Astrophysics Data System (ADS)
Zhou, Xianfeng; Huang, Wenjiang; Kong, Weiping; Ye, Huichun; Dong, Yingying; Casa, Raffaele
2017-05-01
Leaf carotenoid content (LCar) is an important indicator of plant physiological status. Accurate estimation of LCar provides valuable insight for early detection of stress in vegetation. Among spectroscopy techniques, a semi-empirical approach based on spectral indices has been used extensively for carotenoid content estimation. However, established spectral indices for carotenoids, which generally rely on limited measured data, may lack predictive accuracy across species and growth stages. In this study, we propose a new carotenoid index (CARI) for LCar assessment based on a large synthetic dataset simulated from the leaf radiative transfer model PROSPECT-5, and evaluate its capability with both simulated data from PROSPECT-5 and 4SAIL and extensive experimental datasets: the ANGERS dataset and experimental data acquired in field experiments in China in 2004. Results show that, compared with published spectral indices, CARI was the index most linearly correlated with carotenoid content at the leaf level on the synthetic dataset (R2 = 0.943, RMSE = 1.196 μg/cm2). Cross-validation with CARI on the ANGERS data achieved quite accurate estimation (R2 = 0.545, RMSE = 3.413 μg/cm2), though RBRI was the best-performing index (R2 = 0.727, RMSE = 2.640 μg/cm2). CARI also showed good accuracy (R2 = 0.639, RMSE = 1.520 μg/cm2) for LCar assessment with leaf-level field survey data, though PRI performed better (R2 = 0.710, RMSE = 1.369 μg/cm2). Whereas RBRI, PRI and the other assessed spectral indices performed well on a given dataset, their estimation accuracy was not consistent across all datasets used in this study; CARI, by contrast, was more robust, showing good results on all datasets. Further assessment of LCar with simulated and measured canopy reflectance data indicated that CARI might not be very sensitive to LCar changes at low leaf area index (LAI) values, and that in these conditions soil moisture influenced the LCar retrieval accuracy.
Trust Measurement using Multimodal Behavioral Analysis and Uncertainty Aware Trust Calibration
2018-01-05
to estimate their performance based on their estimation on all prior trials. In the meanwhile via comparing the decisions of participants with the...it is easier compared with situations when more trials have been done. It should be noted that if a participant is good at memorizing the previous...them. The proposed study, being quantitative and explorative, is expected to reveal a number of findings that benefit interaction system design and
Application of remote sensing in estimating evapotranspiration in the Platte river basin
NASA Technical Reports Server (NTRS)
Blad, B. L.; Rosenberg, N. J.
1976-01-01
A 'resistance model' and a mass transport model for estimating evapotranspiration (ET) were tested on large fields of naturally subirrigated alfalfa. Both models make use of crop canopy temperature data. Temperature data were obtained with an IR thermometer and with leaf thermocouples. A Bowen ratio-energy balance (BREB) model, adjusted to account for underestimation of ET during periods of strong sensible heat advection, was used as the standard against which the resistance and mass transport models were compared. Daily estimates by the resistance model were within 10% of estimates made by the BREB model. Daily estimates by the mass transport model did not agree quite as well. Performance was good on clear and cloudy days and also during periods of non-advection and strong advection of sensible heat. The performance of the mass transport and resistance models was less satisfactory for estimation of fluxes of latent heat for short term periods. Both models tended to overestimate at low LE fluxes.
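For context, the Bowen ratio-energy balance partitioning underlying the BREB reference can be sketched as follows; the variable values are hypothetical and the advection adjustment mentioned in the abstract is not included.

```python
def breb_latent_heat(rn: float, g: float, bowen_ratio: float) -> float:
    """Latent heat flux LE = (Rn - G) / (1 + beta) from the Bowen ratio-energy balance."""
    return (rn - g) / (1.0 + bowen_ratio)

# hypothetical midday values over well-watered alfalfa
rn = 550.0    # net radiation, W/m^2
g = 50.0      # soil heat flux, W/m^2
beta = 0.15   # Bowen ratio H/LE, small for well-watered crops
le = breb_latent_heat(rn, g, beta)
print(f"LE = {le:.0f} W/m^2, H = {le * beta:.0f} W/m^2")
```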
Robust estimation for class averaging in cryo-EM Single Particle Reconstruction.
Huang, Chenxi; Tagare, Hemant D
2014-01-01
Single Particle Reconstruction (SPR) for Cryogenic Electron Microscopy (cryo-EM) aligns and averages the images extracted from micrographs to improve the Signal-to-Noise ratio (SNR). Outliers compromise the fidelity of the averaging. We propose a robust cross-correlation-like w-estimator for combating the effect of outliers on the average images in cryo-EM. The estimator accounts for the natural variation of signal contrast among the images and eliminates the need for a threshold for outlier rejection. We show that the influence function of our estimator is asymptotically bounded. Evaluations of the estimator on simulated and real cryo-EM images show good performance in the presence of outliers.
12 CFR 1024.7 - Good faith estimate.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 8 2014-01-01 2014-01-01 false Good faith estimate. 1024.7 Section 1024.7... (REGULATION X) Mortgage Settlement and Escrow Accounts § 1024.7 Good faith estimate. (a) Lender to provide. (1..., 2014. For the convenience of the user, the revised text is set forth as follows: § 1024.7 Good faith...
Estimation and prediction under local volatility jump-diffusion model
NASA Astrophysics Data System (ADS)
Kim, Namhyoung; Lee, Younhee
2018-02-01
Volatility is an important factor in operating a company and managing risk. In portfolio optimization and in risk hedging with options, the value of an option is evaluated using a volatility model. Various attempts have been made to predict option values. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately; however, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and we apply it to both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, the stochastic volatility model, and the local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.
Parameter estimation for chaotic systems using improved bird swarm algorithm
NASA Astrophysics Data System (ADS)
Xu, Chuangbiao; Yang, Renhuan
2017-12-01
Parameter estimation for chaotic systems is an important problem in nonlinear science that has attracted increasing interest from many research fields; it can essentially be reduced to a multidimensional optimization problem. In this paper, an improved boundary bird swarm algorithm (IBBSA) is used to estimate the parameters of chaotic systems. The algorithm combines the good global convergence and robustness of the bird swarm algorithm with the exploitation capability of an improved boundary learning strategy. Experiments are conducted on the Lorenz system and a coupled motor system. Numerical simulation results reveal the effectiveness and desirable performance of IBBSA for parameter estimation of chaotic systems.
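To make the optimization formulation concrete, here is a minimal sketch of chaotic-system parameter estimation on the Lorenz system; a SciPy global optimizer (differential evolution) is used as a stand-in for the bird swarm algorithm, and the "observed" trajectory is synthetic. The time span, bounds and optimizer settings are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def lorenz(t, state, sigma, rho, beta):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 2, 201)
x0 = [1.0, 1.0, 1.0]
true_params = (10.0, 28.0, 8.0 / 3.0)
observed = solve_ivp(lorenz, (0, 2), x0, t_eval=t_eval, args=true_params).y

def objective(params):
    sim = solve_ivp(lorenz, (0, 2), x0, t_eval=t_eval, args=tuple(params)).y
    return np.mean((sim - observed) ** 2)     # trajectory mismatch

bounds = [(5, 15), (20, 40), (1, 5)]          # search ranges for sigma, rho, beta
result = differential_evolution(objective, bounds, seed=0, maxiter=60, tol=1e-6)
print(result.x)                               # should be close to (10, 28, 8/3)
```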
Optimal prediction of the number of unseen species.
Orlitsky, Alon; Suresh, Ananda Theertha; Wu, Yihong
2016-11-22
Estimating the number of unseen species is an important problem in many scientific endeavors. Its most popular formulation, introduced by Fisher et al. [Fisher RA, Corbet AS, Williams CB (1943) J Animal Ecol 12(1):42-58], uses n samples to predict the number U of hitherto unseen species that would be observed if t·n new samples were collected. Of considerable interest is the largest ratio t between the number of new and existing samples for which U can be accurately predicted. In seminal works, Good and Toulmin [Good I, Toulmin G (1956) Biometrika 43(102):45-63] constructed an intriguing estimator that predicts U for all t ≤ 1. Subsequently, Efron and Thisted [Efron B, Thisted R (1976) Biometrika 63(3):435-447] proposed a modification that empirically predicts U even for some t > 1, but without provable guarantees. We derive a class of estimators that provably predict U all of the way up to t ∝ log n. We also show that this range is the best possible and that the estimator's mean-square error is near optimal for any t. Our approach yields a provable guarantee for the Efron-Thisted estimator and, in addition, a variant with stronger theoretical and experimental performance than existing methodologies on a variety of synthetic and real datasets. The estimators are simple, linear, computationally efficient, and scalable to massive datasets. Their performance guarantees hold uniformly for all distributions, and apply to all four standard sampling models commonly used across various scientific disciplines: multinomial, Poisson, hypergeometric, and Bernoulli product.
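For reference, the classical Good-Toulmin estimator mentioned above has a simple closed form in terms of the prevalences Φ_i (the number of species seen exactly i times in the n samples): U_GT = -Σ_{i≥1} (-t)^i Φ_i. Below is a minimal sketch of that classical estimator, not the smoothed variant proposed in the paper (which truncates and reweights this series to extend its range); the sample data are invented.

```python
from collections import Counter

def good_toulmin(samples, t):
    """Predict the number of new species seen if t*len(samples) more samples are drawn."""
    counts = Counter(samples)                     # species -> times observed
    prevalence = Counter(counts.values())         # i -> number of species seen exactly i times
    return -sum(((-t) ** i) * phi for i, phi in prevalence.items())

# toy example: 10 samples over a few species
samples = ["a", "a", "b", "c", "c", "c", "d", "e", "e", "f"]
print(good_toulmin(samples, t=0.5))   # expected number of unseen species in 5 new samples
```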
Estimation of laser beam pointing parameters in the presence of atmospheric turbulence.
Borah, Deva K; Voelz, David G
2007-08-10
The problem of estimating mechanical boresight and jitter performance of a laser pointing system in the presence of atmospheric turbulence is considered. A novel estimator based on maximizing an average probability density function (pdf) of the received signal is presented. The proposed estimator uses a Gaussian far-field mean irradiance profile, and the irradiance pdf is assumed to be lognormal. The estimates are obtained using a sequence of return signal values from the intended target. Alternatively, one can think of the estimates being made by a cooperative target using the received signal samples directly. The estimator does not require sample-to-sample atmospheric turbulence parameter information. The approach is evaluated using wave optics simulation for both weak and strong turbulence conditions. Our results show that very good boresight and jitter estimation performance can be obtained under the weak turbulence regime. We also propose a novel technique to include the effect of very low received intensity values that cannot be measured well by the receiving device. The proposed technique provides significant improvement over a conventional approach where such samples are simply ignored. Since our method is derived from the lognormal irradiance pdf, the performance under strong turbulence is degraded. However, the ideas can be extended with appropriate pdf models to obtain more accurate results under strong turbulence conditions.
SU-F-T-479: Estimation of the Accuracy in Respiratory-Gated Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurosawa, T; Miyakawa, S; Sato, M
Purpose: Irregular respiratory patterns affect dose output in respiratory-gated radiotherapy, and no commercially available quality assurance (QA) system addresses this. We designed and developed a patient-specific QA system for respiratory-gated radiotherapy to estimate the irradiated output. Methods: Our in-house QA system for gating was composed of a personal computer with a USB-FSIO electronic circuit connected to the linear accelerator (ONCOR-K, Toshiba Medical Systems). The linac implements a respiratory gating system (AZ-733V, Anzai Medical). While the beam was on, 4.2 V square-wave pulses were continually sent to the system; our system receives and counts these pulses. First, our system was compared with an oscilloscope to check its performance. Next, basic estimation models were generated from ionization-chamber measurements performed during gating with regular sinusoidal wave patterns (2.0, 2.5, 4.0, 8.0, 15 sec/cycle). During gated irradiation with the regular patterns, the number of pulses per gating window was measured with our system, and the correlation between the pulse count per gating window and the dose per gating window was assessed to generate the estimation model. Finally, two irregular respiratory patterns were created and the accuracy of the estimation was evaluated. Results: Compared to the oscilloscope, our system performed similarly. The basic models were generated with an accuracy within 0.1%. The gated irradiations with the two irregular respiratory patterns showed good agreement, within 0.4% estimation accuracy. Conclusion: Our system provides good output estimates even for irregular respiration patterns and would be a useful tool to verify the output for respiratory-gated radiotherapy.
Postflight analysis of the EVCS-LM communications link for the Apollo 15 mission
NASA Technical Reports Server (NTRS)
Royston, C. L., Jr.; Eggers, D. S.
1972-01-01
Data from the Apollo 15 mission were used to compare the actual performance of the EVCS to LM communications link with the preflight performance predictions. Based on the results of the analysis, the following conclusions were made: (1) The radio transmission loss data show good correlation with predictions during periods when the radio line of sight was obscured. (2) The technique of predicting shadow losses due to obstacles in the radio line of sight provides a good estimate of the actual shadowing loss. (3) When the transmitter was on an upslope, the radio transmission loss approached the free space loss values as the line of sight to the LM was regained.
This SmartWay Logistics 2.0.15 Tool is intended to help logistics companies estimate and assess their emission performance levels as well as their total emissions associated with goods movement in the U.S. freight rail, barge, air and t
Predictive Heterosis in Multibreed Evaluations Using Quantitative and Molecular Approaches
USDA-ARS?s Scientific Manuscript database
Heterosis is the extra genetic boost in performance obtained by crossing two cattle breeds. It is an important tool for increasing the efficiency of beef production. It is also important to adjust data used to calculate genetic evaluations for differences in heterosis. Good estimates of heterosis...
Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D
2016-01-01
Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
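A minimal sketch of how the "probability of good performance in a new population" can be computed once a bivariate random-effects meta-analysis has produced a pooled mean and a predictive covariance for (C statistic, calibration slope). A multivariate normal approximation to the predictive distribution is assumed, and the numbers below are hypothetical, not those of the DVT or breast cancer examples.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical outputs of a bivariate random-effects meta-analysis:
mean = np.array([0.74, 1.00])              # pooled (C statistic, calibration slope)
cov = np.array([[0.0009, 0.0004],          # predictive covariance for a new population
                [0.0004, 0.0100]])         # (between-study heterogeneity + estimation error)

draws = rng.multivariate_normal(mean, cov, size=100_000)
good = (draws[:, 0] >= 0.7) & (0.9 <= draws[:, 1]) & (draws[:, 1] <= 1.1)
print(f"P(good performance in a new population) ~= {good.mean():.2f}")
```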
Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.
Xia, Youshen; Wang, Jun
2015-07-01
This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable, converging to the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model, the proposed algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed algorithm produces good performance with fast computation and noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
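The Kalman-filtering step that follows the AR parameter estimation can be sketched as below: the clean speech is modeled as an AR(p) process in companion (state-space) form, and the noisy observation is the first state component plus measurement noise. The AR coefficients and noise variances here are hypothetical (in the paper they would come from the network's noise-constrained estimate), and the example is a generic Kalman smoother sketch, not the authors' algorithm.

```python
import numpy as np

def kalman_ar_enhance(y, ar_coeffs, q_var, r_var):
    """Denoise y assuming a clean AR(p) signal with driving variance q_var and noise variance r_var."""
    p = len(ar_coeffs)
    F = np.zeros((p, p)); F[0, :] = ar_coeffs; F[1:, :-1] = np.eye(p - 1)  # companion matrix
    H = np.zeros((1, p)); H[0, 0] = 1.0
    Q = np.zeros((p, p)); Q[0, 0] = q_var
    x, P = np.zeros(p), np.eye(p)
    out = np.empty_like(y)
    for k, yk in enumerate(y):
        x, P = F @ x, F @ P @ F.T + Q                    # predict
        S = H @ P @ H.T + r_var
        K = (P @ H.T) / S                                # Kalman gain
        x = x + (K * (yk - H @ x)).ravel()               # update
        P = (np.eye(p) - K @ H) @ P
        out[k] = x[0]
    return out

# hypothetical AR(2) speech model and noisy frame
rng = np.random.default_rng(0)
a = [1.6, -0.8]
clean = np.zeros(200)
for k in range(2, 200):
    clean[k] = a[0] * clean[k - 1] + a[1] * clean[k - 2] + 0.1 * rng.standard_normal()
noisy = clean + 0.3 * rng.standard_normal(200)
enhanced = kalman_ar_enhance(noisy, a, q_var=0.01, r_var=0.09)
print(np.std(noisy - clean), np.std(enhanced - clean))   # error before vs. after filtering
```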
12 CFR 1024.7 - Good faith estimate.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Good faith estimate. 1024.7 Section 1024.7 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION REAL ESTATE SETTLEMENT PROCEDURES ACT (REGULATION X) § 1024.7 Good faith estimate. (a) Lender to provide. (1) Except as otherwise provided in...
12 CFR 1024.7 - Good faith estimate.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Good faith estimate. 1024.7 Section 1024.7 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION REAL ESTATE SETTLEMENT PROCEDURES ACT (REGULATION X) § 1024.7 Good faith estimate. (a) Lender to provide. (1) Except as otherwise provided in...
24 CFR 3500.7 - Good faith estimate.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 5 2013-04-01 2013-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...
24 CFR 3500.7 - Good faith estimate.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 5 2011-04-01 2011-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...
24 CFR 3500.7 - Good faith estimate.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 5 2014-04-01 2014-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...
24 CFR 3500.7 - Good faith estimate.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 5 2012-04-01 2012-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduce a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2013-01-01
Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder without degrading compression performance, at least in theory. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames may be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better trade-off between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with the sampling method. PMID:23750314
Estimation and Simulation of Slow Crack Growth Parameters from Constant Stress Rate Data
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Weaver, Aaron S.
2003-01-01
Closed form, approximate functions for estimating the variances and degrees-of-freedom associated with the slow crack growth parameters n, D, B, and A(sup *), as measured using constant stress rate ('dynamic fatigue') testing, were derived by propagation of errors. Estimates made with the resulting functions and slow crack growth data for a sapphire window were compared to the results of Monte Carlo simulations. The functions for estimating the variances of the parameters were derived both with and without logarithmic transformation of the initial slow crack growth equations; the transformation was performed to make the functions both more linear and more normal. Comparison of the Monte Carlo results and the closed form expressions derived with propagation of errors indicated that linearization is not required for good estimates of the variances of parameters n and D by the propagation of errors method. However, good estimates of the variances of the parameters B and A(sup *) could only be made when the starting slow crack growth equation was transformed and the coefficients of variation of the input parameters were not too large. This was partially a result of the skewed distributions of B and A(sup *). Parametric variation of the input parameters was used to determine an acceptable range for using the closed form approximate equations derived from propagation of errors.
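The propagation-of-errors (delta-method) idea used above can be illustrated generically: the variance of a function g of estimated parameters is approximated by grad(g)^T Σ grad(g), and the approximation can be checked against a Monte Carlo simulation. The function and numbers below are illustrative, not the slow-crack-growth relations for n, D, B, or A*.

```python
import numpy as np

def delta_method_var(g, theta, cov, eps=1e-6):
    """First-order propagation of errors: Var[g(theta)] ~= grad^T cov grad."""
    theta = np.asarray(theta, float)
    grad = np.array([(g(theta + eps * e) - g(theta - eps * e)) / (2 * eps)
                     for e in np.eye(len(theta))])
    return grad @ cov @ grad

# illustrative function of two estimated parameters (e.g., a log-transformed quantity)
g = lambda th: np.exp(th[0]) / th[1]
theta_hat = np.array([1.2, 3.0])
cov_hat = np.array([[0.010, 0.002],
                    [0.002, 0.040]])

var_delta = delta_method_var(g, theta_hat, cov_hat)

# Monte Carlo check of the closed-form approximation
rng = np.random.default_rng(0)
draws = rng.multivariate_normal(theta_hat, cov_hat, size=200_000)
var_mc = np.var(np.exp(draws[:, 0]) / draws[:, 1])
print(var_delta, var_mc)
```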
Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses
Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy
2015-01-01
Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
Ng, Juki; Rogosheske, John; Barker, Juliet; Weisdorf, Daniel; Jacobson, Pamala A
2006-06-01
Renal transplant patients with suboptimal mycophenolic acid (MPA) areas under the curve (AUCs) are at greater risk of acute rejection. In hematopoietic cell transplantation, a low MPA AUC is also associated with a higher incidence of acute graft versus host disease. Therefore, a limited sampling model was developed and validated to simultaneously estimate total and unbound MPA AUC0-12 in hematopoietic cell transplantation patients. Intensive pharmacokinetic sampling was performed at steady state between days 3 and 7 posttransplant in 73 adult subjects receiving prophylactic mycophenolate mofetil 1 g every 12 hours orally or intravenously plus cyclosporine. Total and unbound MPA plasma concentrations were measured, and total and unbound AUC0-12 were determined using noncompartmental analysis. Regression analysis was then performed to build IV and PO models of total and unbound AUC0-12 from the first 34 subjects, and the predictive performance of these models was tested in the next 39 subjects. Trough concentrations poorly estimated the observed total and unbound AUC0-12 (r<0.48). A model with three concentrations (2, 4, and 6 hours after the start of infusion) best estimated the observed total and unbound AUC0-12 after IV dosing (r>0.99). Oral total and unbound AUC0-12 were more difficult to estimate and required at least four concentrations (0, 1, 2, and 6 hours post dose) in the model (r>0.85). The predictive performance of the final models was good: 83% of IV and 70% of PO AUC0-12 predictions fell within +/-20% of the observed values without significant bias. Trough MPA concentrations do not accurately describe MPA AUC0-12. Three intravenous (2, 4, 6 hours after the start of infusion) or four oral (0, 1, 2, and 6 hours post dose) MPA plasma concentrations measured over a 12-hour dosing interval will estimate the total and unbound AUC0-12 nearly as well as intensive pharmacokinetic sampling, with good precision and low bias. This approach simplifies AUC0-12 targeting of MPA post hematopoietic cell transplantation.
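A limited sampling model of this kind is, in essence, a multiple linear regression of the intensively sampled AUC0-12 on a few timed concentrations; a minimal sketch is shown below. The concentration matrix, fitted coefficients, and resulting equation are hypothetical, not the published model.

```python
import numpy as np

# hypothetical training data: columns are MPA concentrations (mg/L) at 2, 4 and 6 h
# after the start of an IV infusion; auc holds the observed total AUC0-12 (mg*h/L)
C = np.array([[8.2, 4.1, 2.5],
              [6.5, 3.0, 1.9],
              [9.8, 5.2, 3.1],
              [7.1, 3.6, 2.2],
              [5.9, 2.8, 1.7]])
auc = np.array([42.0, 31.5, 51.3, 36.8, 29.4])

X = np.column_stack([np.ones(len(C)), C])          # intercept + C2, C4, C6
coef, *_ = np.linalg.lstsq(X, auc, rcond=None)
print("AUC0-12 ~= %.1f + %.2f*C2 + %.2f*C4 + %.2f*C6" % tuple(coef))

# estimate AUC for a new patient from the three sampled concentrations
new_patient = np.array([1.0, 7.5, 3.9, 2.4])
print("predicted AUC0-12:", new_patient @ coef)
```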
Campagne, Carole Sylvie; Salles, Jean-Michel; Boissery, Pierre; Deter, Julie
2015-08-15
Posidonia oceanica is a marine angiosperm endemic to the Mediterranean. Despite its protection, its meadows are regressing. The economic valuation of ecosystem services (ES) assesses the contribution of ecosystems to human well-being and may help local policy makers in territorial development. To estimate the economic value of P. oceanica seagrass and the meadows it forms, and thus better account for its presence in coastal development, the ES it provides are first identified and assessed; goods and benefits (GB) and their economic values are then estimated. In total, 25 ES are identified and 7 GB are economically evaluated. The economic value of the GB provided by P. oceanica ranges between 25.3 million and 45.9 million €/year, i.e. 283-513 €/ha/year. Because of the lack of available data, only 7 GB linked to 11 of the 25 ES could be estimated. Despite this overall undervaluation, this study offers a value that coastal development policies can take into account. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pedersen, Mads Møller; Pihl, Michael Johannes; Haugaard, Per; Hansen, Jens Munk; Lindskov Hansen, Kristoffer; Bachmann Nielsen, Michael; Jensen, Jørgen Arendt
2011-03-01
Spectral velocity estimation is considered the gold standard in medical ultrasound. Peak systole (PS), end diastole (ED), and resistive index (RI) are used clinically. Angle correction is performed using a flow angle set manually. With Transverse Oscillation (TO) velocity estimates, the flow angle, peak systole (PSTO), end diastole (EDTO), and resistive index (RITO) are estimated. This study investigates whether these clinical parameters are estimated equally well using spectral and TO data. The right common carotid arteries of three healthy volunteers were scanned longitudinally. Average TO flow angles and standard deviations were calculated as { 52+/-18 ; 55+/-23 ; 60+/-16 }°; spectral angles { 52 ; 56 ; 52 }° were obtained from the B-mode images. The obtained values are: PSTO { 76+/-15 ; 89+/-28 ; 77+/-7 } cm/s, spectral PS { 77 ; 110 ; 76 } cm/s, EDTO { 10+/-3 ; 14+/-8 ; 15+/-3 } cm/s, spectral ED { 18 ; 13 ; 20 } cm/s, RITO { 0.87+/-0.05 ; 0.79+/-0.21 ; 0.79+/-0.06 }, and spectral RI { 0.77 ; 0.88 ; 0.73 }. Vector angles are within +/- two standard deviations of the spectral angle, TO velocity estimates are within +/- three standard deviations of the spectral estimates, and RITO is within +/- two standard deviations of the spectral estimates. Preliminary data indicate that the TO and spectral velocity estimates are equally good. With TO there is no manual angle setting and no flow angle limitation, and TO velocity estimation can automatically handle situations where the angle varies over the cardiac cycle. More detailed temporal and spatial vector estimates with diagnostic potential are available with TO velocity estimation.
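For reference, the resistive index compared above follows directly from the peak systolic and end diastolic velocity estimates; a minimal sketch with hypothetical values:

```python
def resistive_index(peak_systole: float, end_diastole: float) -> float:
    """RI = (PS - ED) / PS, computed from velocity estimates in cm/s."""
    return (peak_systole - end_diastole) / peak_systole

# hypothetical TO estimates for one cardiac cycle
ps_to, ed_to = 76.0, 10.0          # cm/s
print(f"RI = {resistive_index(ps_to, ed_to):.2f}")
```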
Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph
2016-01-01
Hybrids are broadly used in plant breeding, and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and to test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based: they showed an absence of singularities, lower AIC, higher goodness-of-fit and accuracy, and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, the DD variance could not be separated from the dominance variance, and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760
Optimal prediction of the number of unseen species
Orlitsky, Alon; Suresh, Ananda Theertha; Wu, Yihong
2016-01-01
Estimating the number of unseen species is an important problem in many scientific endeavors. Its most popular formulation, introduced by Fisher et al. [Fisher RA, Corbet AS, Williams CB (1943) J Animal Ecol 12(1):42−58], uses n samples to predict the number U of hitherto unseen species that would be observed if t⋅n new samples were collected. Of considerable interest is the largest ratio t between the number of new and existing samples for which U can be accurately predicted. In seminal works, Good and Toulmin [Good I, Toulmin G (1956) Biometrika 43(102):45−63] constructed an intriguing estimator that predicts U for all t≤1. Subsequently, Efron and Thisted [Efron B, Thisted R (1976) Biometrika 63(3):435−447] proposed a modification that empirically predicts U even for some t>1, but without provable guarantees. We derive a class of estimators that provably predict U all of the way up to t∝logn. We also show that this range is the best possible and that the estimator’s mean-square error is near optimal for any t. Our approach yields a provable guarantee for the Efron−Thisted estimator and, in addition, a variant with stronger theoretical and experimental performance than existing methodologies on a variety of synthetic and real datasets. The estimators are simple, linear, computationally efficient, and scalable to massive datasets. Their performance guarantees hold uniformly for all distributions, and apply to all four standard sampling models commonly used across various scientific disciplines: multinomial, Poisson, hypergeometric, and Bernoulli product. PMID:27830649
NASA Astrophysics Data System (ADS)
Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya
2012-05-01
Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is used to detect the presence of a monotonic trend. Maximum likelihood estimation (MLE) is used to estimate the parameters, with L-moments estimates (LMOM) used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the convergence of the monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that the maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maxima give better convergence to the GEV distribution, especially if longer records are available. Return level estimates, i.e. the return amount that is expected to be exceeded, on average, once every t time periods, start to appear in the confidence interval of T = 50 for the quarterly, half-yearly and yearly maxima.
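A minimal sketch of the stationary part of such an analysis: fit a GEV distribution to block maxima by maximum likelihood and compute the T-period return level as the (1 - 1/T) quantile. The data are synthetic, and note that SciPy's shape parameter c equals minus the shape parameter ξ in the usual GEV convention.

```python
from scipy.stats import genextreme

# synthetic yearly maximum returns (stand-in for the Malaysian share data)
yearly_max = genextreme.rvs(c=-0.1, loc=0.08, scale=0.04, size=40, random_state=42)

# maximum likelihood fit; scipy's c is the negative of the usual GEV shape xi
c_hat, loc_hat, scale_hat = genextreme.fit(yearly_max)

T = 50
return_level = genextreme.ppf(1 - 1 / T, c_hat, loc=loc_hat, scale=scale_hat)
print(f"estimated {T}-period return level: {return_level:.3f}")
```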
Yu, Rongjie; Abdel-Aty, Mohamed
2013-07-01
The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of Bayesian inference is that prior information for the independent variables can be included in the inference procedures. However, few studies have discussed how to formulate informative priors for the independent variables or evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches for developing informative priors for the independent variables based on historical data and expert experience. The merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). Deviance information criterion (DIC), R-square values, and coefficients of variation of the estimates were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior, with a better model fit, and is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracy. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested, and the effects of the different types of informative priors on the model estimations and goodness-of-fit have been compared. Finally, based on the results, recommendations for future research topics and study applications have been made. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ege, Kerem; Roozen, N. B.; Leclère, Quentin; Rinaldi, Renaud G.
2018-07-01
In the context of aeronautics, automotive and construction applications, the design of light multilayer plates with optimized vibroacoustical damping and isolation performances remains a major industrial challenge and a hot topic of research. This paper focuses on the vibrational behavior of three-layered sandwich composite plates in a broad-band frequency range. Several aspects are studied through measurement techniques and analytical modelling of a steel/polymer/steel plate sandwich system. A contactless measurement of the velocity field of plates using a scanning laser vibrometer is performed, from which the equivalent single layer complex rigidity (apparent bending stiffness and apparent damping) in the mid/high frequency ranges is estimated. The results are combined with low/mid frequency estimations obtained with a high-resolution modal analysis method so that the frequency dependent equivalent Young's modulus and equivalent loss factor of the composite plate are identified for the whole [40 Hz-20 kHz] frequency band. The results are in very good agreement with an equivalent single layer analytical modelling based on wave propagation analysis (model of Guyader). The comparison with this model allows identifying the frequency dependent complex modulus of the polymer core layer through inverse resolution. Dynamical mechanical analysis measurements are also performed on the polymer layer alone and compared with the values obtained through the inverse method. Again, a good agreement between these two estimations over the broad-band frequency range demonstrates the validity of the approach.
An evolutionary morphological approach for software development cost estimation.
Araújo, Ricardo de A; Oliveira, Adriano L I; Soares, Sergio; Meira, Silvio
2012-08-01
In this work we present an evolutionary morphological approach to solve the software development cost estimation (SDCE) problem. The proposed approach consists of a hybrid artificial neuron based on the framework of mathematical morphology (MM), with algebraic foundations in complete lattice theory (CLT), referred to as the dilation-erosion perceptron (DEP). We also present an evolutionary learning process, called DEP(MGA), which uses a modified genetic algorithm (MGA) to design the DEP model; it is needed because morphological operators are not differentiable in the usual way, which makes gradient estimation in the classical learning process of the DEP problematic. Furthermore, an experimental analysis is conducted with the proposed model using five complex SDCE problems and three well-known performance metrics, demonstrating the good performance of the DEP model in solving SDCE problems. Copyright © 2012 Elsevier Ltd. All rights reserved.
Bockman, Alexander; Fackler, Cameron; Xiang, Ning
2015-04-01
Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing prediction vice observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent to the experiment, model, and numerics. A geometry agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone, impedance-tube method, and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess sensitivity of the method to nuisance parameters.
Lim, Changwon
2015-03-30
Nonlinear regression is often used to evaluate the toxicity of a chemical or a drug by fitting data from a dose-response study. Toxicologists and pharmacologists may draw a conclusion about whether a chemical is toxic by testing the significance of the estimated parameters. However, sometimes the null hypothesis cannot be rejected even though the fit is quite good. One possible reason for such cases is that the estimated standard errors of the parameter estimates are extremely large. In this paper, we propose robust ridge regression estimation procedures for nonlinear models to solve this problem. The asymptotic properties of the proposed estimators are investigated; in particular, their mean squared errors are derived. The performances of the proposed estimators are compared with several standard estimators using simulation studies. The proposed methodology is also illustrated using high throughput screening assay data obtained from the National Toxicology Program. Copyright © 2014 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1982-01-01
A class of goodness-of-fit estimators is found to provide a useful alternative, in certain situations, to the standard maximum likelihood method, which has some undesirable characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. The robustness of the procedures is examined and example data sets are analyzed.
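A minimal sketch of the underlying idea of using a goodness-of-fit statistic as an estimation criterion: for the three-parameter lognormal, the threshold γ can be chosen to maximize the Shapiro-Wilk W statistic of log(x - γ), after which the remaining parameters follow from the mean and standard deviation of the transformed data. The data and search grid below are illustrative, and this grid search is only a simple stand-in for the maximization described in the report.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
gamma_true = 5.0
x = gamma_true + rng.lognormal(mean=1.0, sigma=0.5, size=100)   # synthetic 3-parameter lognormal

# choose the threshold gamma that maximizes the Shapiro-Wilk W of log(x - gamma)
grid = np.linspace(0.0, x.min() - 1e-3, 400)
w_stats = [shapiro(np.log(x - g))[0] for g in grid]
gamma_hat = grid[int(np.argmax(w_stats))]

logs = np.log(x - gamma_hat)
mu_hat, sigma_hat = logs.mean(), logs.std(ddof=1)
print(gamma_hat, mu_hat, sigma_hat)
```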
24 CFR Appendix C to Part 3500 - Instructions for Completing Good Faith Estimate (GFE) Form
Code of Federal Regulations, 2012 CFR
2012-04-01
24 Housing and Urban Development, Pt. 3500, App. C (2012-04-01): Instructions for Completing Good Faith Estimate (GFE) Form.
Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C
2018-01-01
We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to an unrestricted estimator and a combined estimator through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient. However, it becomes inconsistent when homogeneity fails. The proposed shrinkage estimator, on the other hand, remains efficient: its efficiency decreases as the survival medians deviate from equality, but it is expected to remain at least as good as the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating the median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.
Sociopolitical and economic elements to explain the environmental performance of countries.
Almeida, Thiago Alexandre das Neves; García-Sánchez, Isabel-María
2017-01-01
The present research explains environmental performance using an ecological composite index as the dependent variable and focusing on two national dimensions: sociopolitical characteristics and economics. Environmental performance is measured using the Composite Index of Environmental Performance (CIEP) indicator proposed by García-Sánchez et al. (2015). The first model performs a factor analysis to aggregate the variables according to each analyzed dimension. In the second model, the estimation is run using only single variables. Both models are estimated by generalized least squares (GLS) using panel data from 152 countries over 6 years. The results show that sociopolitical factors and international trade have a positive effect on environmental performance. When the variables are analyzed separately, democracy and social policy have a positive effect on environmental performance, while transport, infrastructure, consumption of goods, and tourism have a negative effect. A further observation is that the trade-off between importing and exporting countries overshadows the pollution caused by production. It was also observed that infrastructure has a negative coefficient for developing countries and a positive one for developed countries. The best performances are found in the democratic and richer countries located in Europe, while the worst environmental performance is found in the nondemocratic and poorest countries, located on the African continent.
Estimating trace-suspect match probabilities for singleton Y-STR haplotypes using coalescent theory.
Andersen, Mikkel Meyer; Caliebe, Amke; Jochens, Arne; Willuweit, Sascha; Krawczak, Michael
2013-02-01
Estimation of match probabilities for singleton haplotypes of lineage markers, i.e. for haplotypes observed only once in a reference database augmented by a suspect profile, is an important problem in forensic genetics. We compared the performance of four estimators of singleton match probabilities for Y-STRs, namely the count estimate, both with and without Brenner's so-called 'kappa correction', the surveying estimate, and a previously proposed, but rarely used, coalescent-based approach implemented in the BATWING software. Extensive simulation with BATWING of the underlying population history, haplotype evolution and subsequent database sampling revealed that the coalescent-based approach is characterized by lower bias and lower mean squared error than the uncorrected count estimator and the surveying estimator. Moreover, in contrast to the two count estimators, both the surveying and the coalescent-based approach exhibited a good correlation between the estimated and true match probabilities. However, although its overall performance is thus better than that of any other recognized method, the coalescent-based estimator remains so computationally intensive as to verge on general impracticability. Its application in forensic practice will therefore have to be limited to small reference databases, or to isolated cases of particular interest, until more powerful algorithms for coalescent simulation become available. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Berg, Wesley; Avery, Susan K.
1994-01-01
Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the Special Sensor Microwave/Imager (SSM/I) for the period from July 1987 through December 1991. The monthly estimates were calibrated using measurements from a network of Pacific atoll rain gauges and compared to other satellite-based rainfall estimation techniques. Based on these monthly estimates, an analysis of the variability of large-scale features over intraseasonal to interannual timescales has been performed. While the major precipitation features as well as the seasonal variability distributions show good agreement with expected values, the presence of a moderately intense El Nino during 1986-87 and an intense La Nina during 1988-89 highlights this time period.
Innovative High-Performance Deposition Technology for Low-Cost Manufacturing of OLED Lighting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamer, John; Scott, David
In this project, OLEDWorks developed and demonstrated the innovative high-performance deposition technology required to deliver dramatic reductions in the cost of manufacturing OLED lighting in production equipment. The current high manufacturing cost of OLED lighting is the most urgent barrier to its market acceptance. The new deposition technology delivers solutions to the two largest parts of the manufacturing cost problem – the expense per area of good product for organic materials and for the capital cost and depreciation of the equipment. Organic materials cost is the largest expense item in the bill of materials and is predicted to remain so through 2020. The high-performance deposition technology developed in this project, also known as the next generation source (NGS), increases material usage efficiency from 25% found in current Gen2 deposition technology to 60%. This improvement alone results in a reduction of approximately 25 USD/m² of good product in organic materials costs, independent of production volumes. Additionally, this innovative deposition technology reduces the total depreciation cost from the estimated value of approximately 780 USD/m² of good product for state-of-the-art G2 lines (at capacity, 5-year straight line depreciation) to 170 USD/m² of good product from the OLEDWorks production line.
Chapter C. The Loma Prieta, California, Earthquake of October 17, 1989 - Building Structures
Çelebi, Mehmet
1998-01-01
Several approaches are used to assess the performance of the built environment following an earthquake -- preliminary damage surveys conducted by professionals, detailed studies of individual structures, and statistical analyses of groups of structures. Reports of damage that are issued by many organizations immediately following an earthquake play a key role in directing subsequent detailed investigations. Detailed studies of individual structures and statistical analyses of groups of structures may be motivated by particularly good or bad performance during an earthquake. Beyond this, practicing engineers typically perform stress analyses to assess the performance of a particular structure subjected to the vibrational levels experienced during an earthquake. These levels may be determined from recorded or estimated ground motions; actual levels usually differ from design levels. If a structure has seismic instrumentation to record response data, the estimated and recorded response and behavior of the structure can be compared.
Evaluation of the PV energy production after 12-years of operating
NASA Astrophysics Data System (ADS)
Bouchakour, Salim; Arab, Amar Hadj; Abdeladim, Kamel; Boulahchiche, Saliha; Amrouche, Said Ould; Razagui, Abdelhak
2018-05-01
This paper presents a simple way to approximately evaluate the performance degradation of photovoltaic (PV) arrays; the studied PV arrays have been connected to the local electric grid at the Centre de Developpement des Energies Renouvelables (CDER) in Algiers, Algeria, since June 2004. The PV module model used takes into consideration the module temperature and the effective solar irradiance, the electrical characteristics provided by the manufacturer data sheet, and an evaluated performance coefficient. For the dynamic behavior, the Linear Reoriented Coordinates Method (LRCM) is used to estimate the maximum power point (MPP). The performance coefficient is evaluated, on the one hand, under STC conditions to estimate the dc energy according to the manufacturer data and, on the other hand, under real conditions using both the monitored data and the LM optimization algorithm, which allows a good degree of accuracy in the estimated dc energy. The application of the developed modeling procedure to the analysis of the monitored data is expected to improve understanding and assessment of the performance degradation of the PV arrays after 12 years of operation.
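A hedged sketch of the second evaluation step might look like the following: a single performance coefficient of a simple irradiance-and-temperature dc power model is fitted to monitored data with the Levenberg-Marquardt algorithm (SciPy's method="lm"); the model form, coefficient values, and synthetic data are illustrative assumptions, not the CDER measurements or the exact model of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

P_STC, G_STC, GAMMA = 1000.0, 1000.0, -0.004   # assumed nameplate power (W), STC irradiance (W/m2), temperature coefficient (1/degC)

def pv_dc_power(k, g, t_mod):
    """Simple dc power model: nameplate power scaled by irradiance, temperature and a performance coefficient k."""
    return k * P_STC * (g / G_STC) * (1.0 + GAMMA * (t_mod - 25.0))

# Synthetic "monitored" data standing in for irradiance, module temperature and measured dc power.
rng = np.random.default_rng(2)
g = rng.uniform(200, 1000, 500)
t_mod = 20 + 0.03 * g + rng.normal(0, 2, 500)
p_meas = pv_dc_power(0.85, g, t_mod) * (1 + rng.normal(0, 0.02, 500))   # a degraded array (k ~ 0.85)

# Levenberg-Marquardt fit of the performance coefficient to the monitored data.
fit = least_squares(lambda k: p_meas - pv_dc_power(k[0], g, t_mod), x0=[1.0], method="lm")
print("estimated performance coefficient:", round(fit.x[0], 3))
```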
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.
1990-01-01
Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).
Good practices in free-energy calculations.
Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christophe
2010-08-19
As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in a wide range of research areas. Yet, the reliability of these calculations can often be improved significantly if a number of precepts, or good practices, are followed. Although the theory upon which these good practices rely has largely been known for many years, it is often overlooked or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. In this contribution, the current best practices for carrying out free-energy calculations using free energy perturbation and nonequilibrium work methods are discussed, demonstrating that at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. Monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway, and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision.
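The exponential-averaging (Zwanzig) free energy perturbation estimator, and the value of performing the calculation bidirectionally, can be illustrated with a toy system; the harmonic potentials and sample sizes below are assumptions chosen so that the exact answer (zero) is known.

```python
import numpy as np

kT = 1.0  # work in units of kT

def fep_delta_f(delta_u):
    """Zwanzig free energy perturbation estimate: dF = -kT ln < exp(-dU/kT) >."""
    return -kT * np.log(np.mean(np.exp(-delta_u / kT)))

# Toy system: 1-D harmonic potentials U0 = 0.5 x^2 and U1 = 0.5 (x - 1)^2,
# for which the exact free energy difference is zero (same force constant).
rng = np.random.default_rng(3)
x0 = rng.normal(0.0, 1.0, 50_000)    # samples from state 0 (variance = kT/k = 1)
x1 = rng.normal(1.0, 1.0, 50_000)    # samples from state 1

dU_fwd = 0.5 * (x0 - 1.0) ** 2 - 0.5 * x0 ** 2      # U1 - U0 evaluated on state-0 samples
dU_rev = 0.5 * x1 ** 2 - 0.5 * (x1 - 1.0) ** 2      # U0 - U1 evaluated on state-1 samples

dF_fwd = fep_delta_f(dU_fwd)
dF_rev = -fep_delta_f(dU_rev)
# Performing the calculation bidirectionally: the gap between the two one-sided
# estimates is a simple (if crude) diagnostic of poor phase-space overlap.
print("forward:", round(dF_fwd, 3), " reverse:", round(dF_rev, 3), " gap:", round(abs(dF_fwd - dF_rev), 3))
```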
Comparison of GPS receiver DCB estimation methods using a GPS network
NASA Astrophysics Data System (ADS)
Choi, Byung-Kyu; Park, Jong-Uk; Min Roh, Kyoung; Lee, Sang-Jeong
2013-07-01
Two approaches for receiver differential code biases (DCB) estimation using the GPS data obtained from the Korean GPS network (KGN) in South Korea are suggested: the relative and single (absolute) methods. The relative method uses a GPS network, while the single method determines DCBs from a single station only. Their performance was assessed by comparing the receiver DCB values obtained from the relative method with those estimated by the single method. The daily averaged receiver DCBs obtained from the two different approaches showed good agreement for 7 days. The root mean square (RMS) value of those differences is 0.83 nanoseconds (ns). The standard deviation of the receiver DCBs estimated by the relative method was smaller than that of the single method. From these results, it is clear that the relative method can obtain more stable receiver DCBs compared with the single method over a short-term period. Additionally, the comparison between the receiver DCBs obtained by the Korea Astronomy and Space Science Institute (KASI) and those of the IGS Global Ionosphere Maps (GIM) showed a good agreement at 0.3 ns. As the accuracy of DCB values significantly affects the accuracy of ionospheric total electron content (TEC), more studies are needed to ensure the reliability and stability of the estimated receiver DCBs.
Estimation of the Continuous and Discontinuous Leverage Effects
Aït-Sahalia, Yacine; Fan, Jianqing; Laeven, Roger J. A.; Wang, Christina Dan; Yang, Xiye
2017-01-01
This paper examines the leverage effect, or the generally negative covariation between asset returns and their changes in volatility, under a general setup that allows the log-price and volatility processes to be Itô semimartingales. We decompose the leverage effect into continuous and discontinuous parts and develop statistical methods to estimate them. We establish the asymptotic properties of these estimators. We also extend our methods and results (for the continuous leverage) to the situation where there is market microstructure noise in the observed returns. We show in Monte Carlo simulations that our estimators have good finite sample performance. When applying our methods to real data, our empirical results provide convincing evidence of the presence of the two leverage effects, especially the discontinuous one. PMID:29606780
Dental cementum in age estimation: a polarized light and stereomicroscopic study.
Kasetty, Sowmya; Rammanohar, M; Raju Ragavendra, T
2010-05-01
Dental hard tissues are good candidates for age estimation because the procedures involved are minimally destructive and can be performed easily. Although cementum annulations and cementum thickness are important parameters in this regard, they are seldom used. This study was undertaken to review the methods, the difficulties in executing the techniques, and the accuracy of cementum thickness and annulations in estimating age. Unstained and stained ground sections of teeth were used to measure cemental thickness and count cemental annulations, from which age was estimated and compared with known age. Although there was a positive relation of cemental thickness and annulations with age, age could be predicted with accuracy in only 1-1.5% of cases.
Unresolved Galaxy Classifier for ESA/Gaia mission: Support Vector Machines approach
NASA Astrophysics Data System (ADS)
Bellas-Velidis, Ioannis; Kontizas, Mary; Dapergolas, Anastasios; Livanou, Evdokia; Kontizas, Evangelos; Karampelas, Antonios
A software package, the Unresolved Galaxy Classifier (UGC), is being developed for the ground-based pipeline of ESA's Gaia mission. It aims to provide automated taxonomic classification and estimation of specific parameters by analyzing low-dispersion spectra of unresolved galaxies from the Gaia BP/RP instrument. The UGC algorithm is based on a supervised learning technique, the Support Vector Machines (SVM). The software is implemented in Java as two separate modules. An offline learning module provides functions for training the SVM models. Once trained, the set of models can be repeatedly applied to unknown galaxy spectra by the pipeline's application module. A library of synthetic spectra of galaxy models, simulated for the BP/RP instrument, is used to train and test the modules. Science tests show very good classification performance of the UGC and relatively good regression performance, except for some of the parameters. Possible approaches to improve the performance are discussed.
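A schematic of the two-module idea, offline SVM training on a library of simulated spectra followed by repeated application to unknown spectra, might look like the following in scikit-learn; the synthetic spectra, class names, and SVM hyperparameters are invented stand-ins for the Gaia BP/RP library and the Java implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_bins = 60   # stand-in for low-dispersion BP/RP spectral samples

def fake_spectrum(kind):
    """Crude synthetic 'galaxy spectra': different broad shapes per class plus noise."""
    x = np.linspace(0, 1, n_bins)
    shapes = {"early": 1.0 - 0.6 * x, "spiral": 0.5 + 0.4 * np.sin(3 * x), "irregular": 0.7 * x}
    return shapes[kind] + 0.05 * rng.standard_normal(n_bins)

classes = ["early", "spiral", "irregular"]
X_train = np.array([fake_spectrum(c) for c in classes for _ in range(200)])
y_train = np.array([c for c in classes for _ in range(200)])

# "Offline learning module": train once on the simulated library.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

# "Application module": the trained models are applied repeatedly to unknown spectra.
X_new = np.array([fake_spectrum("spiral"), fake_spectrum("early")])
print(clf.predict(X_new))
```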
Review of recycling performance indicators: a study on collection rate in Taiwan.
Wen, Lihchyi; Lin, Chun-Hsu; Lee, Soo-Cheol
2009-08-01
The Taiwan Environmental Protection Administration (Taiwan EPA) launched a national Extended Producer Responsibility (EPR) system after integrating eight private recycling organizations in 1998. Since then, the environmental performance of the EPR system has drawn a lot of attention from policy makers. Many studies show positive environmental effects of the EPR system in Taiwan. However, controversial questions remain, such as whether the performance indicators used are the right choice to estimate the environmental effects of the recycling policy, and whether the estimated results really reflect the performance of the system. This paper therefore evaluates the performance indicators of the EPR system more carefully, based on data observed over the past decade in Taiwan. In the process of evaluating the performance indicators, we have found that the collection rates for durable goods are often ignored in countries that pursue a zero waste policy. This may affect the actual recycling outcome and the resource direction targeted by producers. However, in order for the collection rate to be adopted as a policy indicator, how to estimate the amounts of retired or waste products during a period is critical. In this paper, we estimate the collection rate for electrical and electronic waste by using survival analysis and ownership data analysis approaches. We also provide a comparison of both approaches and put forward suggestions for future directions in solid waste management.
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
The aim of this work was to develop a fast, patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.
Porru, Marcella; Özkan, Leyla
2017-08-30
This work investigates the design of alternative monitoring tools based on state estimators for industrial crystallization systems with nucleation, growth, and agglomeration kinetics. The estimation problem is regarded as a structure design problem where the estimation model and the set of innovated states have to be chosen; the estimator is driven by the available measurements of secondary variables. On the basis of Robust Exponential estimability arguments, it is found that the concentration is distinguishable with temperature and solid fraction measurements while the crystal size distribution (CSD) is not. Accordingly, a state estimator structure is selected such that (i) the concentration (and other distinguishable states) are innovated by means of the secondary measurements processed with the geometric estimator (GE), and (ii) the CSD is estimated by means of a rigorous model in open loop mode. The proposed estimator has been tested through simulations showing good performance in the case of mismatch in the initial conditions, parametric plant-model mismatch, and noisy measurements.
Evaluation of four methods for estimating leaf area of isolated trees
P.J. Peper; E.G. McPherson
2003-01-01
The accurate modeling of the physiological and functional processes of urban forests requires information on the leaf area of urban tree species. Several non-destructive, indirect leaf area sampling methods have shown good performance for homogenous canopies. These methods have not been evaluated for use in urban settings where trees are typically isolated and...
Optimising boiler performance.
Mayoh, Paul
2009-01-01
Soaring fuel costs continue to put the squeeze on already tight health service budgets. Yet it is estimated that combining established good practice with improved technologies could save between 10% and 30% of fuel costs for boilers. Paul Mayoh, UK technical manager at Spirax Sarco, examines some of the practical measures that healthcare organisations can take to gain their share of these potential savings.
Hyperspectral data discrimination methods
NASA Astrophysics Data System (ADS)
Casasent, David P.; Chen, Xuewen
2000-12-01
Hyperspectral data provides spectral response information that provides detailed chemical, moisture, and other description of constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS) in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.
Brown, Halley J; Andreason, Hope; Melling, Amy K; Imel, Zac E; Simon, Gregory E
2015-08-01
Retention, or its opposite, dropout, is a common metric of psychotherapy quality, but using it to assess provider performance can be problematic. Differences among providers in numbers of general dropouts, "good" dropouts (patients report positive treatment experiences and outcome), and "bad" dropouts (patients report negative treatment experiences and outcome) were evaluated. Patient records were paired with satisfaction surveys (N=3,054). Binomial mixed-effects models were used to examine differences among providers by dropout type. Thirty-four percent of treatment episodes resulted in dropout. Of these, 14% were bad dropouts and 27% were good dropouts. Providers accounted for approximately 17% of the variance in general dropout and 10% of the variance in both bad dropout and good dropout. The ranking of providers fluctuated by type of dropout. Provider assessments based on patient retention should offer a way to isolate dropout type, given that nonspecific metrics may lead to biased estimates of performance.
Integrated care in the management of chronic diseases: an Italian perspective.
Stefani, Ilario; Scolari, Francesca; Croce, Davide; Mazzone, Antonino
2016-12-01
This letter provides a view on the issue of the organizational model of Primary Care Groups (PCGs), which represent a best practice in continuity and appropriateness of care for chronic patients. Our analysis aimed at estimating the impact of PCGs introduction in terms of efficiency and effectiveness. The results of our study showed a better performance of PCGs compared with the other General Practitioners of Local Health Authority Milano 1, supporting the conclusion that good care cannot be delivered without good organization of care. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Turbulent stresses in the surf-zone: Which way is up?
Haines, John W.; Gelfenbaum, Guy; Edge, B.L
1997-01-01
Velocity observations from a vertical stack of three-component Acoustic Doppler Velocimeters (ADVs) within the energetic surf-zone are presented. Rapid temporal sampling and a small sampling volume provide observations suitable for investigating the role of turbulent fluctuations in surf-zone dynamics. While sensor performance was good, failure to recover reliable measures of tilt from the vertical compromised the value of the data. We present some cursory observations supporting the ADV performance and examine the sensitivity of stress estimates to uncertainty in the sensor orientation. It is well known that turbulent stress estimates are highly sensitive to orientation relative to vertical when wave motions are dominant. The analyses presented examine the potential to use observed flow-field characteristics to constrain sensor orientation. Results show that such an approach may provide a consistent orientation to within a fraction of a degree, but the inherent sensitivity of stress estimates requires a still more restrictive constraint. Regardless, the observations indicate the degree to which stress estimates depend on orientation, and provide some indication of the temporal variability in time-averaged stress estimates.
Buczinski, S; Vandeweerd, J M
2016-09-01
Provision of good quality colostrum [i.e., immunoglobulin G (IgG) concentration ≥50g/L] is the first step toward ensuring proper passive transfer of immunity for young calves. Precise quantification of colostrum IgG levels cannot be easily performed on the farm. Assessment of the refractive index using a Brix scale with a refractometer has been described as being highly correlated with IgG concentration in colostrum. The aim of this study was to perform a systematic review of the diagnostic accuracy of Brix refractometry to diagnose good quality colostrum. From 101 references initially obtained, 11 were included in the systematic review and meta-analysis, representing 4,251 colostrum samples. The prevalence of good colostrum samples with IgG ≥50g/L varied from 67.3 to 92.3% (median 77.9%). Specific estimates of accuracy [sensitivity (Se) and specificity (Sp)] were obtained for different reported cut-points using a hierarchical summary receiver operating characteristic curve model. For the cut-point of 22% (n=8 studies), Se=80.2% (95% CI: 71.1-87.0%) and Sp=82.6% (71.4-90.0%). Decreasing the cut-point to 18% increased Se [96.1% (91.8-98.2%)] and decreased Sp [54.5% (26.9-79.6%)]. Modeling the effect of these Brix accuracy estimates using a stochastic simulation and Bayes' theorem showed that a positive result with the 22% Brix cut-point can be used to diagnose good quality colostrum [posttest probability of good colostrum: 94.3% (90.7-96.9%)]. The posttest probability of good colostrum with a Brix value <18% was only 22.7% (12.3-39.2%). Based on this study, the 2 cut-points could be used alternatively to select good quality colostrum (samples with Brix ≥22%) or to discard poor quality colostrum (samples with Brix <18%). When sample results fall between these 2 values, colostrum supplementation should be considered. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
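The reported posttest probabilities follow from Bayes' theorem applied to the pooled prevalence, sensitivity, and specificity; the short point-estimate check below reproduces them approximately (the paper's own values come from a stochastic simulation over the uncertainty in these inputs, so they differ slightly).

```python
def posttest_probability(prevalence, sensitivity, specificity, test_positive=True):
    """Post-test probability of good-quality colostrum given a Brix test result (Bayes' theorem)."""
    prior_odds = prevalence / (1.0 - prevalence)
    lr = sensitivity / (1.0 - specificity) if test_positive else (1.0 - sensitivity) / specificity
    post_odds = prior_odds * lr
    return post_odds / (1.0 + post_odds)

prev = 0.779  # median prevalence of good colostrum (IgG >= 50 g/L) across the included studies

# Brix >= 22%: Se = 80.2%, Sp = 82.6%  ->  roughly 94% probability the colostrum is good.
print(round(posttest_probability(prev, 0.802, 0.826, test_positive=True), 3))
# Brix < 18%: Se = 96.1%, Sp = 54.5%   ->  only about 20% probability it is good (point estimate).
print(round(posttest_probability(prev, 0.961, 0.545, test_positive=False), 3))
```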
Mudarri, David H
2014-05-01
While a number of studies have addressed the economic cost associated with adverse health and productivity effects of poor indoor air quality (IAQ), few have addressed the value of economic expenditures and job creation associated with this industry. This article estimates that the annual sale of IAQ products and services is valued at $18-$30 billion and is associated with approximately 150,000-250,000 current jobs. Compared with other familiar industries, the IAQ market remains relatively small. Given the close association between good IAQ and both job performance of adults and learning performance of children, however, the expenditure to maintain good IAQ in commercial and educational facilities is a useful complement to programs designed to improve education and economic growth.
How Many Sleep Diary Entries Are Needed to Reliably Estimate Adolescent Sleep?
Arora, Teresa; Gradisar, Michael; Taheri, Shahrad; Carskadon, Mary A.
2017-01-01
Study Objectives: To investigate (1) how many nights of sleep diary entries are required for reliable estimates of five sleep-related outcomes (bedtime, wake time, sleep onset latency [SOL], sleep duration, and wake after sleep onset [WASO]) and (2) the test–retest reliability of sleep diary estimates of school night sleep across 12 weeks. Methods: Data were drawn from four adolescent samples (Australia [n = 385], Qatar [n = 245], United Kingdom [n = 770], and United States [n = 366]), who provided 1766 eligible sleep diary weeks for reliability analyses. We performed reliability analyses for each cohort using complete data (7 days), one to five school nights, and one to two weekend nights. We also performed test–retest reliability analyses on 12-week sleep diary data available from a subgroup of 55 US adolescents. Results: Intraclass correlation coefficients for bedtime, SOL, and sleep duration indicated good-to-excellent reliability from five weekday nights of sleep diary entries across all adolescent cohorts. Four school nights was sufficient for wake times in the Australian and UK samples, but not the US or Qatari samples. Only Australian adolescents showed good reliability for two weekend nights of bedtime reports; estimates of SOL were adequate for UK adolescents based on two weekend nights. WASO was not reliably estimated using 1 week of sleep diaries. We observed excellent test–retest reliability across 12 weeks of sleep diary data in a subsample of US adolescents. Conclusion: We recommend at least five weekday nights of sleep diary entries be made when studying adolescent bedtimes, SOL, and sleep duration. Adolescent sleep patterns were stable across 12 consecutive school weeks. PMID:28199718
How Many Sleep Diary Entries Are Needed to Reliably Estimate Adolescent Sleep?
Short, Michelle A; Arora, Teresa; Gradisar, Michael; Taheri, Shahrad; Carskadon, Mary A
2017-03-01
To investigate (1) how many nights of sleep diary entries are required for reliable estimates of five sleep-related outcomes (bedtime, wake time, sleep onset latency [SOL], sleep duration, and wake after sleep onset [WASO]) and (2) the test-retest reliability of sleep diary estimates of school night sleep across 12 weeks. Data were drawn from four adolescent samples (Australia [n = 385], Qatar [n = 245], United Kingdom [n = 770], and United States [n = 366]), who provided 1766 eligible sleep diary weeks for reliability analyses. We performed reliability analyses for each cohort using complete data (7 days), one to five school nights, and one to two weekend nights. We also performed test-retest reliability analyses on 12-week sleep diary data available from a subgroup of 55 US adolescents. Intraclass correlation coefficients for bedtime, SOL, and sleep duration indicated good-to-excellent reliability from five weekday nights of sleep diary entries across all adolescent cohorts. Four school nights was sufficient for wake times in the Australian and UK samples, but not the US or Qatari samples. Only Australian adolescents showed good reliability for two weekend nights of bedtime reports; estimates of SOL were adequate for UK adolescents based on two weekend nights. WASO was not reliably estimated using 1 week of sleep diaries. We observed excellent test-retest reliability across 12 weeks of sleep diary data in a subsample of US adolescents. We recommend at least five weekday nights of sleep diary entries be made when studying adolescent bedtimes, SOL, and sleep duration. Adolescent sleep patterns were stable across 12 consecutive school weeks. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng
2017-12-01
Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstration of these parameters presents a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs subject to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, a performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with normal life testing. Subsequently, the experimental program of ADT was designed for the SPTRs. Three groups of gaseous contamination ADT were then performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, and the lifetimes of the SPTRs under normal conditions were estimated through an acceleration model (the Arrhenius model). The results show good fitting of the degradation model to the experimental data. Finally, the reliability of the SPTRs was estimated using the Weibull distribution. The proposed methodology makes it possible to estimate, in less than one year of testing, the reliability of SPTRs designed for more than 10 years of operation.
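A schematic of the extrapolation and reliability steps is sketched below; the activation energy, the pseudo-lifetimes at each stress temperature, and the Weibull parameters are placeholders for illustration only, not values from the study.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev):
    """Acceleration factor between a stress temperature and the use temperature (Arrhenius model)."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return np.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

# Placeholder inputs: assumed activation energy of the outgassing process, and pseudo failure
# lifetimes extrapolated from the degradation model at each stress temperature (illustrative only).
ea = 0.7                                     # eV
stress_temps = [40.0, 50.0, 60.0]            # degC, the three ADT ambient temperatures
stress_life_h = [47000.0, 21000.0, 10000.0]  # hours to the degradation threshold

use_life_h = [L * arrhenius_af(25.0, T, ea) for T, L in zip(stress_temps, stress_life_h)]
print("extrapolated lifetimes at 25 degC (h):", [round(v) for v in use_life_h])

# Weibull reliability at a 10-year mission, with placeholder shape/scale parameters.
beta, eta = 2.0, np.mean(use_life_h)         # shape and characteristic life (h)
t_mission = 10 * 365.25 * 24
print("R(10 yr) =", round(np.exp(-(t_mission / eta) ** beta), 3))
```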
Development and control of a magnetorheological haptic device for robot assisted surgery.
Shokrollahi, Elnaz; Goldenberg, Andrew A; Drake, James M; Eastwood, Kyle W; Kang, Matthew
2017-07-01
A prototype magnetorheological (MR) fluid-based actuator has been designed for tele-robotic surgical applications. This device is capable of generating forces up to 47 N, with input currents ranging from 0 to 1.5 A. We begin by outlining the physical design of the device, and then discuss a novel nonlinear model of the device's behavior. The model was developed using the Hammerstein-Wiener (H-W) nonlinear black-box technique and is intended to accurately capture the hysteresis behavior of the MR-fluid. Several experiments were conducted on the device to collect estimation and validation datasets to construct the model and assess its performance. Different estimating functions were used to construct the model, and their effectiveness is assessed based on goodness-of-fit and final-prediction-error measurements. A sigmoid network was found to have a goodness-of-fit of 95%. The model estimate was then used to tune a PID controller. Two control schemes were proposed to eliminate the hysteresis behavior present in the MR fluid device. One method uses a traditional force feedback control loop and the other is based on measuring the magnetic field using a Hall-effect sensor embedded within the device. The Hall-effect sensor scheme was found to be superior in terms of cost, simplicity and real-time control performance compared to the force control strategy.
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods do not adopt an effective strategy for object scale estimation, which limits their overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprised of a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
Estimating the abundance of mouse populations of known size: promises and pitfalls of new methods
Conn, P.B.; Arthur, A.D.; Bailey, L.L.; Singleton, G.R.
2006-01-01
Knowledge of animal abundance is fundamental to many ecological studies. Frequently, researchers cannot determine true abundance, and so must estimate it using a method such as mark-recapture or distance sampling. Recent advances in abundance estimation allow one to model heterogeneity with individual covariates or mixture distributions and to derive multimodel abundance estimators that explicitly address uncertainty about which model parameterization best represents truth. Further, it is possible to borrow information on detection probability across several populations when data are sparse. While promising, these methods have not been evaluated using mark-recapture data from populations of known abundance, and thus far have largely been overlooked by ecologists. In this paper, we explored the utility of newly developed mark-recapture methods for estimating the abundance of 12 captive populations of wild house mice (Mus musculus). We found that mark-recapture methods employing individual covariates yielded satisfactory abundance estimates for most populations. In contrast, model sets with heterogeneity formulations consisting solely of mixture distributions did not perform well for several of the populations. We show through simulation that a higher number of trapping occasions would have been necessary to achieve good estimator performance in this case. Finally, we show that simultaneous analysis of data from low abundance populations can yield viable abundance estimates.
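For readers unfamiliar with mark-recapture logic, the classical two-sample Chapman estimator below is a much simpler point of reference than the covariate and mixture models evaluated in the paper; the capture counts are invented.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimate of closed-population abundance.

    n1: animals caught and marked in the first session
    n2: animals caught in the second session
    m2: marked animals among the second-session catch
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, var ** 0.5

# Invented counts for one captive mouse population.
n_hat, se = chapman_estimate(n1=32, n2=28, m2=18)
print(f"abundance estimate: {n_hat:.1f} +/- {1.96 * se:.1f} (approx. 95% CI half-width)")
```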
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2017-09-01
New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator allows the most accurate set of parameters to be identified for use when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid scale model displaying good structural performance, which allows LESs to be performed that are very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.
Moving target parameter estimation of SAR after two looks cancellation
NASA Astrophysics Data System (ADS)
Gan, Rongbing; Wang, Jianguo; Gao, Xiang
2005-11-01
Moving target detection in synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are formed from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are retained and stationary targets are removed. A Constant False Alarm Rate (CFAR) detector then detects the moving targets. The ground-range velocity and cross-range velocity of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift caused by slant-range motion, based on the Doppler frequency center (DFC), which is estimated using the Wigner-Ville Distribution (WVD). Because the range position and the cross-range position before correction are known, estimation of the DFC is much easier and more efficient. Finally, experimental results show that the algorithms have good performance and can estimate moving target parameters accurately.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Sun, Kai; Wang, Jianhui
In this paper, in order to enhance the numerical stability of the unscented Kalman filter (UKF) used for power system dynamic state estimation, a new UKF with guaranteed positive semidefinite estimation error covariance (UKFGPS) is proposed and compared with five existing approaches, including UKFschol, UKF-kappa, UKFmodified, UKF-Delta Q, and the square-root UKF (SRUKF). These methods and the extended Kalman filter (EKF) are tested by performing dynamic state estimation on the WSCC 3-machine 9-bus system and the NPCC 48-machine 140-bus system. For the WSCC system, all methods obtain good estimates. However, for the NPCC system, both the EKF and the classic UKF fail. It is found that UKFschol, UKF-kappa, and UKF-Delta Q do not work well in some estimations, while UKFGPS works well in most cases. UKFmodified and SRUKF always work well, indicating their better scalability, mainly due to their enhanced numerical stability.
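The abstract does not spell out how UKFGPS enforces positive semidefiniteness; one common remedy in Kalman-filter practice, projecting the covariance onto the symmetric positive semidefinite cone by clipping negative eigenvalues, is sketched below as an illustration of the numerical issue rather than the authors' algorithm.

```python
import numpy as np

def nearest_psd(P, eps=1e-9):
    """Project a (possibly indefinite) covariance onto the symmetric PSD cone by eigenvalue clipping."""
    P_sym = 0.5 * (P + P.T)                     # enforce symmetry first
    w, V = np.linalg.eigh(P_sym)
    w_clipped = np.clip(w, eps, None)           # remove small negative eigenvalues from round-off
    return (V * w_clipped) @ V.T

# A covariance that has lost positive semidefiniteness through round-off:
P_bad = np.array([[1.0, 0.9, 0.0],
                  [0.9, 1.0, 0.0],
                  [0.0, 0.0, -1e-6]])
P_fixed = nearest_psd(P_bad)
print("min eigenvalue before/after:",
      np.linalg.eigvalsh(P_bad).min(), np.linalg.eigvalsh(P_fixed).min())
```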
Sliding mode control based on Kalman filter dynamic estimation of battery SOC
NASA Astrophysics Data System (ADS)
He, Dongmeia; Hou, Enguang; Qiao, Xin; Liu, Guangmin
2018-06-01
Accurate and rapid estimation of the state of charge of a lithium-ion battery is a key technology in battery management systems. In this paper, an exponential-reaching-law sliding-mode variable structure control algorithm based on a Kalman filter is proposed to estimate the state of charge of a Li-ion battery, treated as a dynamic nonlinear system. An RC equivalent circuit model is established, and the model equation with its specific structure is given. The proposed Kalman filter sliding-mode structure is used to estimate the state of charge of the battery from the battery model, so that chattering (jitter) is avoided and the estimation performance is improved. The simulation results show that the proposed Kalman filter sliding-mode control estimates the battery state of charge with good accuracy compared with the ordinary Kalman filter, with the error remaining within 3%.
Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi
2015-07-01
Doubly truncated data consist of samples whose observed values fall between the left- and right-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using a childhood cancer dataset.
Yasaitis, Laura C; Arcaya, Mariana C; Subramanian, S V
2015-09-01
Creating local population health measures from administrative data would be useful for health policy and public health monitoring purposes. While a wide range of options for estimating such rates exists--from simple spatial smoothers to model-based methods--there are relatively few side-by-side comparisons, especially with real-world data. In this paper, we compare methods for creating local estimates of acute myocardial infarction rates from Medicare claims data. A Bayesian Markov chain Monte Carlo estimator that incorporated spatial and local random effects performed best, followed by a method-of-moments spatial Empirical Bayes estimator. As the former is more complicated and time-consuming, spatial linear Empirical Bayes methods may represent a good alternative for non-specialist investigators. Copyright © 2015 Elsevier Ltd. All rights reserved.
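As a point of reference for the method-of-moments Empirical Bayes idea, the sketch below applies a standard global (non-spatial) Marshall-type moment shrinkage to area-level rates; the formulas are a textbook variant and the counts are invented, so this is not the spatial estimator compared in the paper.

```python
import numpy as np

def eb_shrunk_rates(events, population):
    """Global method-of-moments Empirical Bayes shrinkage of area-level rates (Marshall-type)."""
    y, n = np.asarray(events, float), np.asarray(population, float)
    r = y / n                                  # raw local rates
    m = y.sum() / n.sum()                      # overall (prior mean) rate
    s2 = np.sum(n * (r - m) ** 2) / n.sum()    # weighted between-area variability of raw rates
    a = max(s2 - m / n.mean(), 0.0)            # moment estimate of the true between-area variance
    w = a / (a + m / n)                        # shrinkage weight: small areas shrink harder toward m
    return w * r + (1.0 - w) * m

events = [2, 15, 40, 1, 90]
population = [300, 2500, 6000, 150, 12000]
print(np.round(eb_shrunk_rates(events, population), 4))
```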
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has superior performance: faster convergence and an enhanced signal-to-noise ratio.
NASA Astrophysics Data System (ADS)
Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling
2018-01-01
We propose a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extract a number of raw patches from a given noisy image and take the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate is obtained directly with a nonlinear mapping (rectification) function that was trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
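The first (eigenvalue) step of the described scheme is straightforward to sketch, as below; the learned rectification mapping is the part trained on representative images and is only indicated by a comment, and the patch size, patch count, and synthetic test image are assumptions.

```python
import numpy as np

def smallest_patch_eigenvalue(img, patch=7, n_patches=5000, seed=0):
    """Preliminary noise-variance estimate: smallest eigenvalue of the covariance of random image patches."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    rows = rng.integers(0, h - patch, n_patches)
    cols = rng.integers(0, w - patch, n_patches)
    P = np.stack([img[r:r + patch, c:c + patch].ravel() for r, c in zip(rows, cols)])
    cov = np.cov(P, rowvar=False)
    return np.linalg.eigvalsh(cov)[0]          # smallest eigenvalue ~ noise variance

# Synthetic test: a smooth image plus Gaussian noise of known standard deviation.
rng = np.random.default_rng(1)
xx, yy = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
clean = 100 * np.sin(4 * xx) * np.cos(3 * yy)
sigma_true = 5.0
noisy = clean + rng.normal(0, sigma_true, clean.shape)

sigma_prelim = np.sqrt(smallest_patch_eigenvalue(noisy))
# The paper then rectifies this preliminary value with a mapping trained on images
# with known noise levels; here we simply report the unrectified estimate.
print("preliminary sigma estimate:", round(sigma_prelim, 2), " true sigma:", sigma_true)
```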
The indicator performance estimate approach to determining acceptable wilderness conditions
NASA Astrophysics Data System (ADS)
Hollenhorst, Steven; Gardner, Lisa
1994-11-01
Using data from a study conducted in the Cranberry Wilderness Area of West Virginia, United States, this paper describes how a modified importance-performance approach can be used to prioritize wilderness indicators and determine how much change from the pristine is acceptable. The approach uses two key types of information: (1) indicator importance, or visitor opinion as to which wilderness indicators have the greatest influence on their experience, and (2) management performance, or the extent to which actual indicator conditions exceed or are within visitor expectations. Performance was represented by calculating indicator performance estimates (IPEs), as defined by standardized differences between actual conditions and visitor preferences for each indicator. The results for each indicator are then presented graphically on a four-quadrant matrix for objective interpretation. Each quadrant represents a management response: keep up the good work, concentrate here, low priority, or possible overkill. The technique allows managers to more systematically and effectively utilize information routinely collected during the limits of acceptable change wilderness planning process.
Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders
2006-03-13
Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust, with good generalization performance on new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from the available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small-sample performance estimation, such as a recently proposed procedure called Repeated Random Sampling (RSS), are also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that, via modeling and subsequent reduction of the small-sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.
A mixture model for robust registration in Kinect sensor
NASA Astrophysics Data System (ADS)
Peng, Li; Zhou, Huabing; Zhu, Shengguo
2018-03-01
The Microsoft Kinect sensor has been widely used in many applications, but it suffers from low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision using a mixture model that can handle multiple images within a nonparametric model. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). The estimation is performed by the EM algorithm which, by also estimating the variance of the prior model, is able to obtain good estimates. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.
A new method for parameter estimation in nonlinear dynamical equations
NASA Astrophysics Data System (ADS)
Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao
2015-01-01
Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). This is achieved by exploiting characteristics of EM that include self-organization, adaptation and self-learning, inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated using various numerical tests on the classic chaotic model, the Lorenz equations (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation irrespective of whether some or all parameters of the Lorenz equations are unknown. Moreover, the new method has a good convergence rate. Noise is inevitable in observational data, so the influence of observational noise on the performance of the presented method has been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
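As a stand-in for the evolutionary modelling approach, the sketch below uses SciPy's differential evolution (itself an evolutionary optimizer) to recover the Lorenz parameters from a short noisy trajectory; the observation window, noise level, and parameter bounds are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def lorenz(t, s, sigma, rho, beta):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def simulate(params, t_eval, s0=(1.0, 1.0, 1.0)):
    sol = solve_ivp(lorenz, (t_eval[0], t_eval[-1]), s0,
                    t_eval=t_eval, args=tuple(params), rtol=1e-6)
    return sol.y

# "Observations": a short trajectory from the true parameters (10, 28, 8/3) with weak noise.
t_eval = np.linspace(0, 2, 200)
rng = np.random.default_rng(5)
obs = simulate((10.0, 28.0, 8.0 / 3.0), t_eval) + 0.01 * rng.standard_normal((3, t_eval.size))

def cost(params):
    """Mean squared mismatch between the simulated and observed trajectories."""
    return np.mean((simulate(params, t_eval) - obs) ** 2)

result = differential_evolution(cost, bounds=[(5, 15), (20, 35), (1, 5)],
                                seed=0, maxiter=40, popsize=15, tol=1e-6)
print("estimated (sigma, rho, beta):", np.round(result.x, 3))
```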
Review of recycling performance indicators: A study on collection rate in Taiwan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen Lihchyi; Lin Chunhsu; Lee, Soo-cheol
2009-08-15
The Taiwan Environmental Protection Administration (Taiwan EPA) launched a national Extended Producer Responsibility (EPR) system after integrating eight private recycling organizations in 1998. Since then, the environmental performance of the EPR system has attracted much attention from policy makers. Many studies show positive environmental effects of the EPR system in Taiwan. However, controversial questions remain, such as whether the performance indicators used are the right choice to estimate the environmental effects of the recycling policy, and whether those estimated results really reflect the performance of the system. This paper therefore aims to evaluate the performance indicators of the EPR system more accurately, based on data observed over the past decade in Taiwan. In the process of evaluating the performance indicators, we have found that the collection rates for durable goods are often ignored in countries that pursue a zero waste policy. This may affect the actual recycling outcome and the resource direction targeted by producers. However, in order for the collection rate to be adopted as a policy indicator, how to estimate the amounts of retired or waste products during a period is critical. In this paper, we estimate the collection rate for electrical and electronic waste by using survival analysis and ownership data analysis approaches. We also provide a comparison of both approaches and put forward suggestions for future directions in solid waste management.
Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2017-05-01
The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for the ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
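As a rough illustration of the Eulerian starting point for such pressure estimates, the material acceleration can be formed from planar velocity fields with central differences; the sketch below uses NumPy's gradient operator on synthetic fields and is not the authors' implementation (array shapes, names and grid spacings are assumptions):

    # Sketch: Eulerian material-acceleration estimate from planar two-component
    # velocity fields via central differences, the usual input to PIV-based
    # pressure integration.
    import numpy as np

    def material_acceleration(u, v, dt, dx, dy):
        """u, v: arrays of shape (nt, ny, nx) holding the in-plane velocity components."""
        dudt = np.gradient(u, dt, axis=0)                   # temporal term
        dvdt = np.gradient(v, dt, axis=0)
        dudy, dudx = np.gradient(u, dy, dx, axis=(1, 2))    # spatial gradients
        dvdy, dvdx = np.gradient(v, dy, dx, axis=(1, 2))
        ax = dudt + u * dudx + v * dudy                     # Du/Dt (planar approximation)
        ay = dvdt + u * dvdx + v * dvdy                     # Dv/Dt
        return ax, ay

    # Example with synthetic velocity fields
    nt, ny, nx = 10, 32, 32
    u = np.random.rand(nt, ny, nx)
    v = np.random.rand(nt, ny, nx)
    ax, ay = material_acceleration(u, v, dt=1e-3, dx=1e-3, dy=1e-3)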
State of Charge estimation of lithium ion battery based on extended Kalman filtering algorithm
NASA Astrophysics Data System (ADS)
Yang, Fan; Feng, Yiming; Pan, Binbiao; Wan, Renzhuo; Wang, Jun
2017-08-01
Accurate estimation of state-of-charge (SOC) for lithium ion batteries is crucial for real-time diagnosis and prognosis in green energy vehicles. In this paper, a state space model of the battery based on the Thevenin model is adopted. A strategy for estimating SOC based on the extended Kalman filter is presented, combined with the ampere-hour counting (AH) and open circuit voltage (OCV) methods. The comparison between simulation and experiments indicates that the model's performance matches that of the lithium ion battery well. The extended Kalman filter algorithm maintains good accuracy and is less dependent on its initial value over the full range of SOC, which proves it suitable for online SOC estimation.
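A minimal sketch of the kind of extended Kalman filter described here, for a one-RC Thevenin model with state [SOC, polarization voltage], might look as follows; the parameter values and the OCV(SOC) curve are placeholders rather than the paper's identified model:

    # Sketch: EKF SOC estimation for a first-order Thevenin (one-RC) battery model.
    import numpy as np

    Q = 2.0 * 3600.0                 # capacity [As] (illustrative)
    R0, R1, C1 = 0.05, 0.03, 1500.0  # ohmic and RC parameters (illustrative)
    dt = 1.0                         # sample period [s]

    def ocv(soc):                    # toy open-circuit-voltage curve
        return 3.2 + 0.7 * soc + 0.1 * soc**2

    def docv(soc):                   # its derivative w.r.t. SOC
        return 0.7 + 0.2 * soc

    def ekf_soc(current, voltage, soc0=0.5):
        x = np.array([soc0, 0.0])                # state: [SOC, RC polarization voltage]
        P = np.diag([0.1, 0.01])
        Qn = np.diag([1e-7, 1e-6])               # process noise
        Rn = 1e-3                                # measurement noise
        a = np.exp(-dt / (R1 * C1))
        F = np.array([[1.0, 0.0], [0.0, a]])
        soc_est = []
        for i, vmeas in zip(current, voltage):
            # predict (positive discharge current reduces SOC)
            x = np.array([x[0] - i * dt / Q, a * x[1] + R1 * (1.0 - a) * i])
            P = F @ P @ F.T + Qn
            # update with the terminal-voltage measurement
            v_pred = ocv(x[0]) - x[1] - R0 * i
            H = np.array([docv(x[0]), -1.0])
            S = H @ P @ H + Rn
            K = P @ H / S
            x = x + K * (vmeas - v_pred)
            P = (np.eye(2) - np.outer(K, H)) @ P
            soc_est.append(x[0])
        return np.array(soc_est)

    # Crude synthetic usage: constant 1 A discharge, noiseless measurements
    t = np.arange(0.0, 600.0, dt)
    current = np.ones_like(t)
    true_soc = 0.9 - np.cumsum(current) * dt / Q
    voltage = ocv(true_soc) - R0 * current
    print(ekf_soc(current, voltage, soc0=0.7)[-1])   # converges toward the true final SOC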
Goodness of fit of probability distributions for sightings as species approach extinction.
Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael
2009-04-01
Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
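For readers unfamiliar with the PPCC statistic, a small sketch (not the authors' code) computes it for a uniform sighting model as the correlation between the ordered sighting years and the corresponding theoretical quantiles; the sighting record and plotting-position constants below are illustrative assumptions:

    # Sketch: probability-plot correlation coefficient (PPCC) for a candidate
    # distribution fitted to a sighting record; the uniform case is shown.
    import numpy as np
    from scipy import stats

    def ppcc(data, dist):
        """Correlation between ordered data and theoretical quantiles of dist."""
        x = np.sort(np.asarray(data, dtype=float))
        n = len(x)
        pp = (np.arange(1, n + 1) - 0.44) / (n + 0.12)   # Gringorten-type plotting positions
        return np.corrcoef(x, dist.ppf(pp))[0, 1]

    # Years of sightings of a hypothetical declining population
    sightings = [1901, 1905, 1911, 1920, 1926, 1931, 1940, 1944]
    t0 = min(sightings)
    r_uniform = ppcc(sightings, stats.uniform(loc=t0, scale=max(sightings) - t0))
    print("PPCC under a uniform sighting model:", round(r_uniform, 3))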
2013-01-01
Summary of background data Recent smartphones, such as the iPhone, are often equipped with an accelerometer and magnetometer, which, through software applications, can perform various inclinometric functions. Although these applications are intended for recreational use, they have the potential to measure and quantify range of motion. The purpose of this study was to estimate the intra- and inter-rater reliability as well as the criterion validity of the clinometer and compass applications of the iPhone in the assessment of cervical range of motion in healthy participants. Methods The sample consisted of 28 healthy participants. Two examiners measured the cervical range of motion of each participant twice using the iPhone (for the estimation of intra- and inter-rater reliability) and once with the CROM (for the estimation of criterion validity). Estimates of reliability and validity were then established using the intraclass correlation coefficient (ICC). Results We observed moderate intra-rater reliability for each movement (ICC = 0.65-0.85) but poor inter-rater reliability (ICC < 0.60). For criterion validity, the ICCs were moderate (>0.50) to good (>0.65) for the movements of flexion, extension, lateral flexions and right rotation, but poor (<0.50) for left rotation. Conclusion We found good intra-rater reliability and lower inter-rater reliability. When compared to the gold standard, these applications showed moderate to good validity. However, before using the iPhone as an outcome measure in clinical settings, studies should be done on patients presenting with cervical problems. PMID:23829201
Muñoz, Mario A; Smith-Miles, Kate A
2017-01-01
This article presents a method for the objective assessment of an algorithm's strengths and weaknesses. Instead of examining the performance of only one or more algorithms on a benchmark set, or generating custom problems that maximize the performance difference between two algorithms, our method quantifies both the nature of the test instances and the algorithm performance. Our aim is to gather information about possible phase transitions in performance, that is, the points in which a small change in problem structure produces algorithm failure. The method is based on the accurate estimation and characterization of the algorithm footprints, that is, the regions of instance space in which good or exceptional performance is expected from an algorithm. A footprint can be estimated for each algorithm and for the overall portfolio. Therefore, we select a set of features to generate a common instance space, which we validate by constructing a sufficiently accurate prediction model. We characterize the footprints by their area and density. Our method identifies complementary performance between algorithms, quantifies the common features of hard problems, and locates regions where a phase transition may lie.
Forkmann, Thomas; Kroehne, Ulf; Wirtz, Markus; Norra, Christine; Baumeister, Harald; Gauggel, Siegfried; Elhan, Atilla Halil; Tennant, Alan; Boecker, Maren
2013-11-01
This study conducted a simulation of computer-adaptive testing based on the Aachen Depression Item Bank (ADIB), which was developed for the assessment of depression in persons with somatic diseases. Prior to the computer-adaptive test simulation, the ADIB was newly calibrated. Recalibration was performed in a sample of 161 patients treated for a depressive syndrome, 103 patients from cardiology, and 103 patients from otorhinolaryngology (mean age 44.1, SD=14.0; 44.7% female) and was cross-validated in a sample of 117 patients undergoing rehabilitation for cardiac diseases (mean age 58.4, SD=10.5; 24.8% women). Unidimensionality of the item bank was checked and a Rasch analysis was performed that evaluated local dependency (LD), differential item functioning (DIF), item fit and reliability. CAT simulation was conducted with the total sample and additional simulated data. Recalibration resulted in a strictly unidimensional item bank with 36 items, showing good Rasch model fit (item fit residuals < |2.5|) and no DIF or LD. CAT simulation revealed that 13 items on average were necessary to estimate depression in the range of -2 to +2 logits when terminating at SE ≤ 0.32, and 4 items if using SE ≤ 0.50. Receiver Operating Characteristic analysis showed that θ estimates based on the CAT algorithm have good criterion validity with regard to depression diagnoses (Area Under the Curve ≥ 0.78 for all cut-off criteria). The recalibration of the ADIB succeeded, and the simulation studies conducted suggest that it has good screening performance in the samples investigated and that it may reasonably add to the improvement of depression assessment. © 2013.
Lago-Peñas, Carlos; Sampaio, Jaime
2015-01-01
The aim of the current study was (i) to identify how important a good season start is to elite soccer teams' performance and (ii) to examine whether this impact is related to the clubs' financial budget. The match performances and annual budgets of all teams were collected from the English FA Premier League, French Ligue 1, Spanish La Liga, Italian Serie A and German Bundesliga for three consecutive seasons (2010-2011 to 2012-2013). A k-means cluster analysis classified the clubs according to their budget as High Range Budget Clubs, Upper-Mid Range Budget Clubs, Lower-Mid Range Budget Clubs and Low Range Budget Clubs. Data were examined through linear regression models. Overall, the results suggested that the better the team performance at the beginning of the season, the better the ranking at the end of the season. However, the size of the effect depended on the clubs' annual budget, with lower budgets being associated with a greater importance of having a good season start (P < 0.01). Moreover, there were differences in trends across the different leagues. These variables can be used to develop accurate models to estimate final rankings. Conversely, Lower-Mid and Low Range Budget Clubs can benefit from fine-tuning preseason planning in order to accelerate the acquisition of optimal performances.
Spiral microstrip hyperthermia applicators: technical design and clinical performance.
Samulski, T V; Fessenden, P; Lee, E R; Kapp, D S; Tanabe, E; McEuen, A
1990-01-01
Spiral microstrip microwave (MW) antennas have been developed and adapted for use as clinical hyperthermia applicators. The design has been configured in a variety of forms including single fixed antenna applicators, multi-element arrays, and mechanically scanned single or paired antennas. The latter three configurations have been used to allow an expansion of the effective heating area. Specific absorption rate (SAR) distributions measured in phantom have been used to estimate the depth and volume of effective heating. The estimates are made using the bioheat equation assuming uniformly perfused tissue. In excess of 500 treatments of patients with advanced or recurrent localized superficial tumors have been performed using this applicator technology. Data from clinical treatments have been analyzed to quantify the heating performance and verify the suitability of these applicators for clinical use. Good microwave coupling efficiency together with the compact applicator size have proved to be valuable clinical assets.
Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System
Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei
2018-01-01
The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to distinguish under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
Direct estimation of human trabecular bone stiffness using cone beam computed tomography.
Klintström, Eva; Klintström, Benjamin; Pahr, Dieter; Brismar, Torkel B; Smedby, Örjan; Moreno, Rodrigo
2018-04-10
The aim of this study was to evaluate the possibility of estimating the biomechanical properties of trabecular bone through finite element simulations by using dental cone beam computed tomography data. Fourteen human radius specimens were scanned in 3 cone beam computed tomography devices: 3-D Accuitomo 80 (J. Morita MFG., Kyoto, Japan), NewTom 5 G (QR Verona, Verona, Italy), and Verity (Planmed, Helsinki, Finland). The imaging data were segmented by using 2 different methods. Stiffness (Young modulus), shear moduli, and the size and shape of the stiffness tensor were studied. Corresponding evaluations by using micro-CT were regarded as the reference standard. The 3-D Accuitomo 80 (J. Morita MFG., Kyoto, Japan) showed good performance in estimating stiffness and shear moduli but was sensitive to the choice of segmentation method. NewTom 5 G (QR Verona, Verona, Italy) and Verity (Planmed, Helsinki, Finland) yielded good correlations, but they were not as strong as Accuitomo 80 (J. Morita MFG., Kyoto, Japan). The cone beam computed tomography devices overestimated both stiffness and shear compared with the micro-CT estimations. Finite element-based calculations of biomechanics from cone beam computed tomography data are feasible, with strong correlations for the Accuitomo 80 scanner (J. Morita MFG., Kyoto, Japan) combined with an appropriate segmentation method. Such measurements might be useful for predicting implant survival by in vivo estimations of bone properties. Copyright © 2018 Elsevier Inc. All rights reserved.
Bearings Only Air-to-Air Ranging
1988-07-25
directly in front of the observer when first detected, more time will be needed for a good estimate. A sound approach, then, is for the observer, having...altitude angle to provide an estimate of the z component. Moving targets commonly require some 60 seconds for good estimates of target location and...fixed target case, where a good strategy for the observer can be determined a priori, highly effective maneuvers for the observer in the case of a moving
Stage-discharge relationship in tidal channels
NASA Astrophysics Data System (ADS)
Kearney, W. S.; Mariotti, G.; Deegan, L.; Fagherazzi, S.
2016-12-01
Long-term records of the flow of water through tidal channels are essential to constrain the budgets of sediments and biogeochemical compounds in salt marshes. Statistical models which relate discharge to water level allow the estimation of such records from more easily obtained records of water stage in the channel. While there is clearly structure in the stage-discharge relationship, nonlinearity and nonstationarity of the relationship complicates the construction of statistical stage-discharge models with adequate performance for discharge estimation and uncertainty quantification. Here we compare four different types of stage-discharge models, each of which is designed to capture different characteristics of the stage-discharge relationship. We estimate and validate each of these models on a two-month long time series of stage and discharge obtained with an Acoustic Doppler Current Profiler in a salt marsh channel. We find that the best performance is obtained by models which account for the nonlinear and time-varying nature of the stage-discharge relationship. Good performance can also be obtained from a simplified version of these models which approximates the fully nonlinear and time-varying models with a piecewise linear formulation.
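A simplified piecewise-linear rating curve of the kind mentioned above can be fitted directly to paired stage-discharge observations; the sketch below (synthetic data, a single assumed breakpoint, not the authors' model) illustrates the idea:

    # Sketch: fitting a two-segment piecewise-linear stage-discharge relationship.
    import numpy as np
    from scipy.optimize import curve_fit

    def rating(h, h0, q0, s1, s2):
        """Two-segment linear rating curve with a single breakpoint at stage h0."""
        return np.where(h < h0, q0 + s1 * (h - h0), q0 + s2 * (h - h0))

    # Synthetic stage (m) and discharge (m^3/s) records
    stage = np.linspace(0.2, 1.8, 200)
    discharge = rating(stage, 1.0, 0.5, 0.8, 2.5) + np.random.normal(0.0, 0.05, stage.size)

    params, _ = curve_fit(rating, stage, discharge, p0=[1.0, 0.5, 1.0, 1.0])
    print("breakpoint, Q at breakpoint, lower/upper slopes:", np.round(params, 2))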
NASA Astrophysics Data System (ADS)
Tierney, Craig Cristy
Presented here are several investigations of ocean tides derived from TOPEX/POSEIDON (T/P) altimetry and numerical models. The purpose of these investigations is to study the short wavelength features in the T/P data and to preserve these wavelengths in global ocean tide models that are accurate in shallow and deep waters. With these new estimates, effects of the tides on loading, Earth's rotation, and tidal energetics are studied. To preserve tidal structure, tides have been estimated along the ground track of T/P by the harmonic and response methods using 4.5 years of data. Results show the two along-track (AT) estimates agree with each other and with other tide models for those components with minimal aliasing problems. Comparisons to global models show that there is tidal structure in the T/P data that is not preserved with current gridding methods. Error estimates suggest there is accurate information in the T/P data from shallow waters that can be used to improve tidal models. It has been shown by Ray and Mitchum (1996) that the first mode baroclinic tide can be separated from AT tide estimates by filtering. This method has been used to estimate the first mode semidiurnal baroclinic tides globally. Estimates for M2 show good correlation with known regions of baroclinic tide generation. Using gridded, filtered AT estimates, a lower bound on the energy contained in the M2 baroclinic tide is 50 PJ. Inspired by the structure found in the AT estimates, a gridding method is presented that preserves tidal structure in the T/P data. These estimates are assimilated into a nonlinear, finite difference, global barotropic tidal model. Results from the 8 major tidal constituents show the model performs equivalently to other models in the deep waters, and is significantly better in the shallow waters. Crossover variance is reduced from 14 cm to 10 cm in the shallow waters. Comparisons to Earth rotation show good agreement to results from VLBI data. Tidal energetics computed from the models show good agreement with previous results. PE/KE ratios and quality factors are more consistent in each frequency band than in previous results.
Detecting the sampling rate through observations
NASA Astrophysics Data System (ADS)
Shoji, Isao
2018-09-01
This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show a good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis and shows that the detected sampling rate differs from the conventional rates.
Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.
Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha
2017-03-01
This paper presents a new accelerated fMRI reconstruction method, namely the OptShrink LR + S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component is estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component is estimated using convex l1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.
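The low-rank-plus-sparse idea can be sketched with a simple alternating scheme in which plain singular-value and entrywise soft thresholds stand in for the paper's OptShrink-based shrinkage; matrix sizes and threshold values below are illustrative:

    # Sketch: alternating low-rank + sparse decomposition of a (voxels x time) matrix.
    import numpy as np

    def lr_plus_s(X, sv_thresh, l1_thresh, n_iter=50):
        L = np.zeros_like(X)
        S = np.zeros_like(X)
        for _ in range(n_iter):
            # low-rank update: shrink the singular values of the residual
            U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
            L = (U * np.maximum(s - sv_thresh, 0.0)) @ Vt
            # sparse update: entrywise soft-thresholding of the residual
            R = X - L
            S = np.sign(R) * np.maximum(np.abs(R) - l1_thresh, 0.0)
        return L, S

    X = np.random.randn(64, 120)          # toy data: 64 voxels x 120 time frames
    L, S = lr_plus_s(X, sv_thresh=2.0, l1_thresh=0.1)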
The use of neural network technology to model swimming performance.
Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida
2007-01-01
The aims of this study were to identify the factors which are able to explain performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons), and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters individual medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognosis precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports. Key points: The non-linear analysis resulting from the use of a feed-forward neural network allowed the development of four performance models. The mean difference between the true and estimated results produced by each of the four neural network models constructed was low. The neural network tool can be a good approach to performance modeling as an alternative to standard statistical models that presume well-defined distributions and independence among all inputs. The use of neural networks for sports science applications allowed us to create very realistic models for swimming performance prediction based on previously selected criteria that were related to the dependent variable (performance).
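A sketch of the network architecture described above (one hidden layer of three neurons), using scikit-learn rather than the authors' software; the predictors and targets are synthetic stand-ins for the test-battery variables and swim times:

    # Sketch: feed-forward network with a single hidden layer of three neurons.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(138, 6))                                 # toy predictors
    y = 150 + X @ rng.normal(size=6) + rng.normal(0, 2, 138)      # toy event times (s)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000,
                                       random_state=0))
    model.fit(X, y)
    print("training R^2:", round(model.score(X, y), 3))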
Development of a reactive-dispersive plume model
NASA Astrophysics Data System (ADS)
Kim, Hyun S.; Kim, Yong H.; Song, Chul H.
2017-04-01
A reactive-dispersive plume model (RDPM) was developed in this study. The RDPM considers two main components of a large-scale point source plume: i) turbulent dispersion and ii) photochemical reactions. In order to evaluate the simulation performance of the newly developed RDPM, comparisons between the model-predicted and observed mixing ratios were made using the TexAQS II 2006 (Texas Air Quality Study II 2006) power-plant experiment data. Statistical analyses show good correlation (0.61 ≤ R ≤ 0.92) and good agreement in terms of the Index of Agreement (0.70 ≤ IOA ≤ 0.95). The chemical NOx lifetimes for two power-plant plumes (Monticello and Welsh power plants) were also estimated.
Vigan, Marie; Stirnemann, Jérôme; Mentré, France
2014-05-01
Analysis of repeated time-to-event data is increasingly performed in pharmacometrics using parametric frailty models. The aims of this simulation study were (1) to assess the estimation performance of the Stochastic Approximation Expectation Maximization (SAEM) algorithm in MONOLIX and of the Adaptive Gaussian Quadrature (AGQ) and Laplace algorithms in PROC NLMIXED of SAS, and (2) to evaluate the properties of a test of a dichotomous covariate on the occurrence of events. The simulation setting is inspired by an analysis of the occurrence of bone events after the initiation of treatment by imiglucerase in patients with Gaucher Disease (GD). We simulated repeated events with an exponential model and various dropout rates: none, low, or high. Several values of the baseline hazard model, variability, number of subjects, and effect of the covariate were studied. For each scenario, 100 datasets were simulated for estimation performance and 500 for test performance. We evaluated estimation performance through relative bias and relative root mean square error (RRMSE). We studied the properties of the Wald and likelihood ratio tests (LRT). We used these methods to analyze the occurrence of bone events in patients with GD after starting an enzyme replacement therapy. SAEM with three chains and AGQ provided good parameter estimates, much better than SAEM with one chain and Laplace, which often provided poor estimates. Despite a small number of repeated events, SAEM with three chains and AGQ gave small biases and RRMSE. Type I errors were close to 5%, and power varied as expected for SAEM with three chains and AGQ. The probability of having at least one event under treatment was 19.1%.
Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud
2017-01-01
In this paper, we present a real-time approach that allows tracking of deformable structures in 3D ultrasound sequences. Our method consists of obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. The results demonstrate that this novel approach has the advantage of providing correct motion estimation despite various ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo
2016-07-01
The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.
Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.
Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J
2016-03-01
To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%, SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal volume fat reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
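The point-counting estimator itself reduces to the Cavalieri principle: volume equals the slice spacing times the area represented by one grid point times the total number of points counted. A small sketch with made-up counts (not the study's data):

    # Sketch: stereological volume estimation by point counting over systematically
    # sampled CT slices (Cavalieri principle).
    import numpy as np

    def point_count_volume(points_per_slice, slice_spacing_cm, area_per_point_cm2):
        """V = slice spacing x area represented by one grid point x total point count."""
        return slice_spacing_cm * area_per_point_cm2 * np.sum(points_per_slice)

    counts = [14, 19, 23, 21, 16, 9]          # hypothetical grid points hitting fat per slice
    vaf_volume = point_count_volume(counts, slice_spacing_cm=4.0, area_per_point_cm2=1.5)
    print("estimated fat volume:", vaf_volume, "cm^3")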
NASA Astrophysics Data System (ADS)
Srivastava, Prashant K.; Petropoulos, George P.; Gupta, Manika; Islam, Tanvir
2015-04-01
Soil Moisture Deficit (SMD) is a key variable in the water and energy exchanges that occur at the land-surface/atmosphere interface. Monitoring SMD is an alternative approach to irrigation scheduling, ensuring that a suitable quantity of water is applied at the proper time. Past work has found that land surface temperature (LST), which can be estimated from MODIS or from a numerical weather prediction model such as WRF (Weather Research and Forecasting model), has a strong relation to SMD. Given the importance of SMD, this work focused on the application of an Artificial Neural Network (ANN) to SMD estimation using LST data derived from MODIS and the WRF mesoscale model. The benchmark SMD estimated from a Probability Distributed Model (PDM) over the Brue catchment, Southwest England, U.K., is used for all the calibration and validation experiments. The agreement between observed and simulated SMD is assessed in terms of the Nash-Sutcliffe Efficiency (NSE), the Root Mean Square Error (RMSE) and the percentage bias (%Bias). The application of the ANN confirmed a high capability of WRF and MODIS LST for prediction of SMD. Performance during the ANN calibration and validation showed good agreement between benchmark and estimated SMD with MODIS LST information, with significantly higher performance than with WRF-simulated LST. The work presented is the first comprehensive application of LST from MODIS and the WRF mesoscale model for hydrological SMD estimation, particularly for a maritime climate. More studies in this direction are recommended to the hydro-meteorological community, so that useful information is accumulated in the technical literature for different geographical locations and climatic conditions. Keywords: WRF, Land Surface Temperature, MODIS satellite, Soil Moisture Deficit, Neural Network
49 CFR 375.409 - May household goods brokers provide estimates?
Code of Federal Regulations, 2010 CFR
2010-10-01
... REGULATIONS TRANSPORTATION OF HOUSEHOLD GOODS IN INTERSTATE COMMERCE; CONSUMER PROTECTION REGULATIONS... there is a written agreement between the broker and you, the carrier, adopting the broker's estimate as...
2010-01-01
Background Tasks chosen to evaluate motor performance should reflect the movement deficits characteristic of the target population and present an appropriate challenge for the patients who would be evaluated. A reaching task that evaluates impairment characteristics of people with shoulder impingement syndrome (SIS) was developed to evaluate the motor performance of this population. The objectives of this study were to characterize the reproducibility of this reaching task in people with and without SIS and to evaluate the impact of the number of trials on reproducibility. Methods Thirty subjects with SIS and twenty healthy subjects participated in the first measurement session to evaluate intrasession reliability. Ten healthy subjects were retested within 2 to 7 days to assess intersession reliability. At each measurement session, upper extremity kinematic patterns were evaluated during a reaching task. Ten trials were recorded. Thereafter, the upper extremity position at the end of reaching and total joint excursion that occurred during reaching were calculated. Intraclass correlation coefficient (ICC) and minimal detectable change (MDC) were used to estimate intra and intersession reliability. Results Intrasession reliability for total joint excursion was good to very good when based on the first two trials (0.77
Fišer, Jaromír; Zítek, Pavel; Skopec, Pavel; Knobloch, Jan; Vyhlídal, Tomáš
2017-05-01
The purpose of the paper is to achieve a constrained estimation of process state variables using the anisochronic state observer tuned by the dominant root locus technique. The anisochronic state observer is based on a state-space time delay model of the process. Moreover, the process model is identified not only as delayed but also as non-linear. This model is developed to describe a material flow process. The root locus technique combined with the magnitude optimum method is utilized to investigate the estimation process. The resulting dominant root locations serve as a measure of estimation performance. The higher the dominant (natural) frequency while remaining in the leftmost position of the complex plane, the better the performance and robustness achieved. A model-based observer control methodology for material flow processes is also provided by means of the separation principle. For demonstration purposes, the computer-based anisochronic state observer is applied to strip temperature estimation in a hot strip finishing mill composed of seven stands. This application was the original motivation for the presented research. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Sinzobahamvya, Nicodème; Photiadis, Joachim; Arenz, Claudia; Kopp, Thorsten; Hraska, Viktor; Asfour, Boulos
2010-06-01
The Diagnosis-Related Groups (DRG) system postulates that inpatient stays with similar levels of clinical complexity are expected to consume similar amounts of resources. Applied to surgery for congenital heart disease, this suggests that the higher the complexity of procedures as estimated by the Aristotle complexity score, the higher hospital reimbursement should be. This study analyses how well the case-mix index (CMI) generated by the German DRG 2009 version correlates with the Aristotle score. A total of 456 DRG cases from the year 2008 were regrouped according to German DRG 2009 and related cost-weight values, and the overall CMI was evaluated. Corresponding Aristotle basic and comprehensive complexity scores (ABC and ACC) and levels were determined. The associated surgical performance (Aristotle score times hospital survival) was estimated. Spearman 'r' correlation coefficients were calculated between Aristotle scores and cost-weights. The goodness of fit 'r²' from the derived regression was determined. Correlation was estimated to be optimal if Spearman 'r' and the derived goodness of fit 'r²' approached a value of 1. The CMI was 8.787, while the mean ABC and ACC scores were 7.64 and 9.27, respectively. Hospital survival was 98.5%; surgical performance therefore attained 7.53 (ABC score) and 9.13 (ACC score). ABC and ACC scores and levels correlated positively with cost-weights. With a Spearman 'r' of 1 and a goodness of fit 'r²' of 0.9790, the scores of the six ACC levels correlated best. The equation was y = 0.5591 + 0.939x, in which y stands for cost-weight (CMI) and x for the score of the ACC level. The ACC score correlates almost perfectly with the corresponding cost-weights (CMI) generated by the German DRG 2009. It could therefore be used as the basis for hospital reimbursement to compensate in conformity with procedures' complexity. The extrapolated CMI in this series would be 9.264. Modulation of reimbursement according to surgical performance could be established and thus 'reward' quality in congenital heart surgery. Copyright 2009 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.
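As a quick check of the arithmetic implied by the reported regression, plugging the mean ACC score into y = 0.5591 + 0.939x reproduces the extrapolated CMI quoted above:

    # Sketch: reproducing the extrapolated case-mix index from the fitted regression.
    mean_acc_score = 9.27
    cmi_predicted = 0.5591 + 0.939 * mean_acc_score
    print(round(cmi_predicted, 3))   # -> 9.264, matching the value quoted in the text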
López Expósito, Patricio; Blanco Suárez, Angeles; Negro Álvarez, Carlos
2017-02-10
Fast and reliable methods to determine biomass concentration are necessary to facilitate the large scale production of microalgae. A method for the rapid estimation of Chlorella sorokiniana biomass concentration was developed. The method translates the suspension particle size spectrum gathered through laser reflectance into biomass concentration by means of two machine learning modelling techniques. In each case, the model hyper-parameters were selected by applying a simulated annealing algorithm. The results show that dry biomass concentration can be estimated with very good accuracy (R² = 0.87). The presented method seems to be suited to performing fast estimations of biomass concentration in suspensions of microalgae cultivated in moderately turbid media with a tendency to aggregate. Copyright © 2017 Elsevier B.V. All rights reserved.
Bageshwar, Deepak; Khanvilkar, Vineeta; Kadam, Vilasrao
2011-01-01
A specific, precise and stability indicating high-performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in pharmaceutical formulations was developed and validated. The method employed TLC aluminium plates precoated with silica gel 60F254 as the stationary phase. The solvent system consisted of methanol:water:ammonium acetate; 4.0:1.0:0.5 (v/v/v). This system was found to give compact and dense spots for both itopride hydrochloride (Rf value of 0.55±0.02) and pantoprazole sodium (Rf value of 0.85±0.04). Densitometric analysis of both drugs was carried out in the reflectance–absorbance mode at 289 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9988±0.0012 in the concentration range of 100–400 ng for pantoprazole sodium. Also, the linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9990±0.0008 in the concentration range of 200–1200 ng for itopride hydrochloride. The method was validated for specificity, precision, robustness and recovery. Statistical analysis proves that the method is repeatable and selective for the estimation of both the said drugs. As the method could effectively separate the drug from its degradation products, it can be employed as a stability indicating method. PMID:29403710
Needlet estimation of cross-correlation between CMB lensing maps and LSS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bianchini, Federico; Renzi, Alessandro; Marinucci, Domenico, E-mail: fbianchini@sissa.it, E-mail: renzi@mat.uniroma2.it, E-mail: marinucc@mat.uniroma2.it
In this paper we develop a novel needlet-based estimator to investigate the cross-correlation between cosmic microwave background (CMB) lensing maps and large-scale structure (LSS) data. We compare this estimator with its harmonic counterpart and, in particular, we analyze the bias effects of different forms of masking. In order to address this bias, we also implement a MASTER-like technique in the needlet case. The resulting estimator turns out to have an extremely good signal-to-noise performance. Our analysis aims at expanding and optimizing the operating domains in CMB-LSS cross-correlation studies, similarly to CMB needlet data analysis. It is motivated especially by next generation experiments (such as Euclid) which will allow us to derive much tighter constraints on cosmological and astrophysical parameters through cross-correlation measurements between CMB and LSS.
QFASAR: Quantitative fatty acid signature analysis with R
Bromaghin, Jeffrey F.
2017-01-01
Knowledge of predator diets provides essential insights into their ecology, yet diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) is a popular method of estimating diet composition that continues to be investigated and extended. However, software to implement QFASA has only recently become publicly available. I summarize a new R package, qfasar, for diet estimation using QFASA methods. The package also provides functionality to evaluate and potentially improve the performance of a library of prey signature data, compute goodness-of-fit diagnostics, and support simulation-based research. Several procedures in the package have not previously been published. qfasar makes traditional and recently published QFASA diet estimation methods accessible to ecologists for the first time. Use of the package is illustrated with signature data from Chukchi Sea polar bears and potential prey species.
Tisnado, Diana; Malin, Jennifer; Kahn, Katherine; Landrum, Mary Beth; Fletcher, Robert; Klabunde, Carrie; Clauser, Steven; Rogers, Selwyn O; Keating, Nancy L
2016-07-01
Chemotherapy prolongs survival in patients with advanced non-small-cell lung cancer. However, few studies have included patients with poor performance status. This study examined rates of oncologists' recommendations for chemotherapy by patient performance status and symptoms and how physician characteristics influence chemotherapy recommendations. We surveyed medical oncologists involved in the care of a population-based cohort of patients with lung cancer from the CanCORS (Cancer Care Outcomes Research and Surveillance) study. Physicians were queried about their likelihood to recommend chemotherapy to patients with stage IV lung cancer with varying performance status (Eastern Cooperative Oncology Group performance status 0 [good] v 3 [poor]) and presence or absence of tumor-related pain. Repeated measures logistic regression was used to estimate the independent associations of patients' performance status and symptoms and physicians' demographic and practice characteristics with chemotherapy recommendations. Nearly all physicians (adjusted rate, 97% to 99%) recommended chemotherapy for patients with good performance status, and approximately half (adjusted rate, 38% to 53%) recommended chemotherapy for patients with poor performance status (P < .001). Compared with patient factors, physician and practice characteristics were less strongly associated with chemotherapy recommendations in adjusted analyses. Strong consensus among oncologists exists for chemotherapy in patients with advanced non-small-cell lung cancer and good performance status. However, the relatively high rate of chemotherapy recommendations for patients with poor performance status despite the unfavorable risk-benefit profile highlights the need for ongoing work to define high-value care in oncology and to implement and evaluate strategies to align incentives for such care. Copyright © 2016 by American Society of Clinical Oncology.
Symonides, Bartosz; Wojciechowska, Ewa; Gryglas, Adam; Gaciong, Zbigniew
2017-01-01
Background Primary hyperaldosteronism may be associated with elevated 24-hour urinary potassium excretion. We evaluated the diagnostic value of spot urine (SU) potassium as an index of 24-hour urinary potassium excretion. Methods We measured SU and 24-hour urinary collection potassium and creatinine in 382 patients. Correlations between SU and 24-hour collections were assessed for potassium levels and potassium/creatinine ratios. We used the PAHO formula to estimate 24-hour urinary potassium excretion based on the SU potassium level. The agreement between estimated and measured 24-hour urinary potassium excretion was evaluated using the Bland-Altman method. To evaluate the diagnostic performance of SU potassium, we calculated areas under the curve (AUC) for the SU potassium/creatinine ratio and for 24-hour urinary potassium excretion estimated using the PAHO formula. Results The strongest correlation between SU and the 24-hour collection was found for the potassium/creatinine ratio (r = 0.69, P<0.001). The PAHO formula underestimated 24-hour urinary potassium excretion by a mean of 8.3±18 mmol/d (95% limits of agreement -28 to +44 mmol/d). The diagnostic performance of the SU potassium/creatinine ratio was borderline good only if 24-hour urinary potassium excretion was markedly elevated (AUC 0.802 for 120 mmol K+/24 h) but poor at lower values (AUC 0.696 for 100 mmol K+/24 h, 0.636 for 80 mmol K+/24 h, 0.675 for 40 mmol K+/24 h). The diagnostic performance of 24-hour urinary potassium excretion estimated by the PAHO formula was excellent at values above 120 mmol/d and good at lower values (AUC 0.941 for 120 mmol K+/24 h, 0.819 for 100 mmol K+/24 h, 0.823 for 80 mmol K+/24 h, 0.836 for 40 mmol K+/24 h). Conclusions Spot urine potassium/creatinine ratio might be a marker of increased 24-hour urinary potassium excretion and a potentially useful screening test when a reliable 24-hour urine collection is not available. The PAHO formula allowed estimation of 24-hour urinary potassium excretion based on SU measurements with reasonable clinical accuracy. PMID:28662194
NASA Astrophysics Data System (ADS)
Beck, H.; Vergopolan, N.; Pan, M.; Levizzani, V.; van Dijk, A.; Weedon, G. P.; Brocca, L.; Huffman, G. J.; Wood, E. F.; William, L.
2017-12-01
We undertook a comprehensive evaluation of 22 gridded (quasi-)global (sub-)daily precipitation (P) datasets for the period 2000-2016. Twelve non-gauge-corrected P datasets were evaluated using daily P gauge observations from 76,086 gauges worldwide. Another ten gauge-corrected ones were evaluated using hydrological modeling, by calibrating the conceptual model HBV against streamflow records for each of 9053 small to medium-sized (<50,000 km2) catchments worldwide, and comparing the resulting performance. Marked differences in spatio-temporal patterns and accuracy were found among the datasets. Among the uncorrected P datasets, the satellite- and reanalysis-based MSWEP-ng V1.2 and V2.0 datasets generally showed the best temporal correlations with the gauge observations, followed by the reanalyses (ERA-Interim, JRA-55, and NCEP-CFSR), the estimates based primarily on passive microwave remote sensing of rainfall (CMORPH V1.0, GSMaP V5/6, and TMPA 3B42RT V7) or near-surface soil moisture (SM2RAIN-ASCAT), and finally, estimates based primarily on thermal infrared imagery (GridSat V1.0, PERSIANN, and PERSIANN-CCS). Two of the three reanalyses (ERA-Interim and JRA-55) unexpectedly obtained lower trend errors than the satellite datasets. Among the corrected P datasets, the ones directly incorporating daily gauge data (CPC Unified and MSWEP V1.2 and V2.0) generally provided the best calibration scores, although the good performance of the fully gauge-based CPC Unified is unlikely to translate to sparsely or ungauged regions. Next best results were obtained with P estimates directly incorporating temporally coarser gauge data (CHIRPS V2.0, GPCP-1DD V1.2, TMPA 3B42 V7, and WFDEI-CRU), which in turn outperformed those indirectly incorporating gauge data through other multi-source datasets (PERSIANN-CDR V1R1 and PGF). Our results highlight large differences in estimation accuracy, and hence, the importance of P dataset selection in both research and operational applications. The good performance of MSWEP emphasizes that careful data merging can exploit the complementary strengths of gauge-, satellite- and reanalysis-based P estimates.
Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G
2008-09-01
A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common for all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit measured by percentages of squared bias, correlations between predicted and observed records, and residual variances were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.
Ehn, S; Sellerer, T; Mechlem, K; Fehringer, A; Epple, M; Herzen, J; Pfeiffer, F; Noël, P B
2017-01-07
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects which can be termed as a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
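A toy sketch of maximum-likelihood two-material decomposition under a polychromatic Beer-Lambert forward model with Poisson statistics is given below; the spectrum, energy bins and attenuation curves are invented placeholders rather than a calibrated detector model, and the optimizer is a generic SciPy routine rather than the authors' estimator:

    # Sketch: ML two-material decomposition with a polychromatic Beer-Lambert model.
    import numpy as np
    from scipy.optimize import minimize

    E = np.linspace(20, 120, 101)                        # keV grid
    spectrum = np.exp(-0.5 * ((E - 60) / 20) ** 2)       # toy tube spectrum
    bins = [(E >= 20) & (E < 60), (E >= 60)]             # two detector energy bins
    mu1 = 0.3 * (60.0 / E) ** 3 + 0.02                   # toy basis attenuation [1/cm]
    mu2 = 0.02 * (60.0 / E) + 0.015

    def expected_counts(t, n0=1e5):
        """Expected counts per bin for basis-material thicknesses t = (t1, t2)."""
        trans = spectrum * np.exp(-mu1 * t[0] - mu2 * t[1])
        return np.array([n0 * trans[b].sum() / spectrum[b].sum() for b in bins])

    def neg_log_likelihood(t, measured):
        lam = expected_counts(t)
        return np.sum(lam - measured * np.log(lam))      # Poisson NLL up to a constant

    true_t = np.array([2.0, 5.0])
    measured = np.random.poisson(expected_counts(true_t))
    fit = minimize(neg_log_likelihood, x0=[1.0, 1.0], args=(measured,),
                   bounds=[(0, None), (0, None)])
    print("estimated thicknesses:", np.round(fit.x, 2))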
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
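One of the pitfalls alluded to, the optimism of resubstitution relative to a properly held-out evaluation, is easy to demonstrate on synthetic data; the classifier, sample sizes and feature counts below are arbitrary choices for illustration:

    # Sketch: resubstitution vs. stratified k-fold cross-validation on a small,
    # high-dimensional two-class problem.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=60, n_features=200, n_informative=5,
                               random_state=0)
    clf = KNeighborsClassifier(n_neighbors=1)

    resub = clf.fit(X, y).score(X, y)        # trained and tested on the same cases
    cv = cross_val_score(clf, X, y,
                         cv=StratifiedKFold(5, shuffle=True, random_state=0)).mean()
    print(f"resubstitution accuracy: {resub:.2f}, cross-validated accuracy: {cv:.2f}")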
Reeslev, M.; Miller, M.; Nielsen, K. F.
2003-01-01
Two mold species, Stachybotrys chartarum and Aspergillus versicolor, were inoculated onto agar overlaid with cellophane, allowing determination of a direct measurement of biomass density by weighing. Biomass density, ergosterol content, and beta-N-acetylhexosaminidase (3.2.1.52) activity were monitored from inoculation to stationary phase. Regression analysis showed a good linear correlation to biomass density for both ergosterol content and beta-N-acetylhexosaminidase activity. The same two mold species were inoculated onto wallpapered gypsum board, from which a direct biomass measurement was not possible. Growth was measured as an increase in ergosterol content and beta-N-acetylhexosaminidase activity. A good linear correlation was seen between ergosterol content and beta-N-acetylhexosaminidase activity. From the experiments performed on agar medium, conversion factors (CFs) for estimating biomass density from ergosterol content and beta-N-acetylhexosaminidase activity were determined. The CFs were used to estimate the biomass density of the molds grown on gypsum board. The biomass densities estimated from ergosterol content and beta-N-acetylhexosaminidase activity data gave similar results, showing significantly slower growth and lower stationary-phase biomass density on gypsum board than on agar. PMID:12839773
Transient high frequency signal estimation: A model-based processing approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, F.L.
1985-03-22
By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections were removed by direct application of a Wiener-type estimation algorithm, after the appropriate input was synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and at the lag where the incident and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple application of the processing procedure was required when we applied the reflection removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
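A minimal sketch of the autocorrelation-ratio idea described above, assuming a single scaled and delayed reflection and a pulse whose autocorrelation has died out by the reflection lag; this illustrates the principle only and is not the SIG-based procedure.

```python
import numpy as np

def estimate_reflection_weight(x, lag):
    """For x(t) = s(t) + a*s(t - lag), with the pulse autocorrelation negligible at `lag`,
    the normalized autocorrelation r = R(lag)/R(0) satisfies r = a/(1 + a^2); solve for a."""
    r = np.dot(x[:-lag], x[lag:]) / np.dot(x, x)
    disc = max(1.0 - 4.0 * r * r, 0.0)
    return (1.0 - np.sqrt(disc)) / (2.0 * r)

def remove_reflection(x, a, lag):
    """Recursively peel off a single scaled, delayed reflection from the measurement."""
    s = x.copy()
    for t in range(lag, len(x)):
        s[t] = x[t] - a * s[t - lag]
    return s

# Toy demonstration with a synthetic incident pulse and one reflection.
t = np.linspace(0, 1, 1000)
incident = np.exp(-((t - 0.2) / 0.02) ** 2) * np.sin(2 * np.pi * 40 * t)
lag, weight = 150, 0.6
measured = incident + weight * np.roll(incident, lag)
a_hat = estimate_reflection_weight(measured, lag)
recovered = remove_reflection(measured, a_hat, lag)
```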
Tropospheric OH and Cl levels deduced from non-methane hydrocarbon measurements in a marine site
NASA Astrophysics Data System (ADS)
Arsene, C.; Bougiatioti, A.; Kanakidou, M.; Bonsang, B.; Mihalopoulos, N.
2007-05-01
In situ continuous hourly measurements of C2-C8 non-methane hydrocarbons (NMHCs) have been performed from March to October 2006 at two coastal locations on the island of Crete, in the Eastern Mediterranean. Well-defined diurnal variations were observed for several short-lived NMHCs (including ethene, propene, n-butane, n-pentane, n-hexane, 2-methyl-pentane). The daytime concentration of hydroxyl (OH) radicals estimated from these experimental data varied from 1.3×10⁶ to ~4.0×10⁶ radicals cm⁻³, in good agreement with box-model simulations. In addition, the relative variability of various hydrocarbon pairs (at least 7) was used to derive the tropospheric levels of Cl atoms. The Cl atom concentration has been estimated to range between 0.6×10⁴ and 4.7×10⁴ atoms cm⁻³, in good agreement with gaseous hydrochloric acid (HCl) observations in the area. Such levels of Cl atoms can be of considerable importance for the oxidation capacity of the troposphere on a regional scale.
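The hydrocarbon-pair method referred to above rests on a standard kinetic relation, stated here for clarity rather than taken from the paper (the simplification that OH and Cl are the only relevant oxidants is the usual assumption):

\[
\frac{d}{dt}\ln\frac{[A]}{[B]} \;=\; -\bigl(k_{A}^{\mathrm{OH}}-k_{B}^{\mathrm{OH}}\bigr)\,[\mathrm{OH}] \;-\; \bigl(k_{A}^{\mathrm{Cl}}-k_{B}^{\mathrm{Cl}}\bigr)\,[\mathrm{Cl}]
\]

With at least two hydrocarbon pairs whose relative reactivities toward OH and Cl differ, the observed decay of the concentration ratios yields two independent equations from which both oxidant concentrations can be solved simultaneously.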
Performance Analysis of HF Band FB-MC-SS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussein Moradi; Stephen Andrew Laraway; Behrouz Farhang-Boroujeny
In a recent paper [1] the filter bank multicarrier spread spectrum (FB-MC-SS) waveform was proposed for wideband spread spectrum HF communications. A significant benefit of this waveform is robustness against narrow and partial band interference. Simulation results in [1] demonstrated good performance in a wideband HF channel over a wide range of conditions. In this paper we present a theoretical analysis of the bit error probability for this system. Our analysis tailors the results from [2], where BER performance was analyzed for maximum ratio combining systems that accounted for correlation between subcarriers and channel estimation error. Equations are given for BER that closely match the simulated performance in most situations.
An eye model for uncalibrated eye gaze estimation under variable head pose
NASA Astrophysics Data System (ADS)
Hnatow, Justin; Savakis, Andreas
2007-04-01
Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
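A hedged sketch of the kind of geometric calculation the abstract describes: the horizontal gaze angle follows from the mid-pupil offset relative to the eye-corner midpoint, scaled by an anthropometric eyeball radius. The default radius and corner distance below are illustrative assumptions, not the authors' values or their exact model.

```python
import numpy as np

def gaze_angle(corner_left, corner_right, midpupil,
               eye_radius_mm=12.0, corner_distance_mm=30.0):
    """Horizontal gaze angle (degrees) from 2-D eye-feature locations in pixels.

    The pixel-to-mm scale is set by an assumed anthropometric distance between the
    two eye corners; the eyeball radius is likewise an illustrative anthropometric value.
    """
    corner_left, corner_right, midpupil = map(np.asarray, (corner_left, corner_right, midpupil))
    scale = corner_distance_mm / np.linalg.norm(corner_right - corner_left)
    eye_center = 0.5 * (corner_left + corner_right)          # projected eyeball center (approx.)
    offset_mm = (midpupil - eye_center)[0] * scale           # horizontal pupil displacement
    return np.degrees(np.arcsin(np.clip(offset_mm / eye_radius_mm, -1.0, 1.0)))

# Example: pupil displaced a few pixels toward the right eye corner.
print(gaze_angle((100, 120), (160, 120), (134, 118)))
```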
Information fusion methods based on physical laws.
Rao, Nageswara S V; Reister, David B; Barhen, Jacob
2005-01-01
We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
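A minimal sketch of fusion by least violation of a physical law, using a toy law P = V·I and inverse-variance weights; the fuser classes, convergence analysis, and performance bounds of the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Toy example: three parameters (V, I, P), each observed with noise, and a known
# physical law P = V * I relating them. Fuse by least violation of the law.
observed = np.array([12.3, 2.1, 24.9])           # noisy measurements of V, I, P
weights = 1.0 / np.array([0.2, 0.05, 0.5])**2    # assumed inverse-variance weights

def objective(x, lam=100.0):
    v, i, p = x
    fit = np.sum(weights * (x - observed) ** 2)  # stay close to the measurements
    law_violation = (p - v * i) ** 2             # penalize violation of P = V*I
    return fit + lam * law_violation

fused = minimize(objective, x0=observed, method="Nelder-Mead").x
print("fused estimate:", fused)
```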
Space shuttle propulsion estimation development verification
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The application of extended Kalman filtering to estimating the Space Shuttle Propulsion performance, i.e., specific impulse, from flight data in a post-flight processing computer program is detailed. The flight data used include inertial platform acceleration, SRB head pressure, SSME chamber pressure and flow rates, and ground based radar tracking data. The key feature in this application is the model used for the SRB's, which is a nominal or reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model used. Aerodynamic, plume, wind and main engine uncertainties are also included for an integrated system model. The assumption of uncertainty within the propulsion system model, and the attempt to estimate its deviations, represents a new application of parameter estimation for rocket-powered vehicles. Illustrations from the results of applying this estimation approach to several missions show good quality propulsion estimates.
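For reference, a generic extended Kalman filter predict/update cycle of the kind such post-flight processing builds on; the state vector, process and measurement models, and noise covariances are placeholders rather than the Shuttle-specific filter.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One extended Kalman filter predict/update cycle.

    x, P          : prior state estimate and covariance
    u, z          : control input and measurement vector
    f, h          : nonlinear process and measurement models
    F_jac, H_jac  : their Jacobians evaluated at the current estimate
    Q, R          : process and measurement noise covariances
    """
    # Predict: propagate the state and covariance through the (linearized) process model.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the measurement via the Kalman gain.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```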
Estimation of Cloud Fraction Profile in Shallow Convection Using a Scanning Cloud Radar
Oue, Mariko; Kollias, Pavlos; North, Kirk W.; ...
2016-10-18
Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of Scanning Cloud Radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity on target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. This method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. Lastly, the proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.
Flodin, Mats; Larsson, Anders
2009-06-01
Glomerular filtration rate (GFR) is widely accepted as the best overall measure of kidney function. Cystatin C is a novel endogenous GFR marker that has been shown to be superior to creatinine for estimation of GFR in several studies. There is a need for cystatin C assays adapted to routine chemistry instrument to minimize turnaround times and allowing 24 h/day availability. We have evaluated a new cystatin C assay developed for Architect cSystem (Abbott Laboratories, Abbott Park, IL, USA). The cystatin C assay showed good agreement with the corresponding assay from Dade Behring (Deerfield, IL, USA). The assay has a very low total imprecision and a good linearity. The new cystatin C assay is an interesting alternative to current cystatin C assays. On an Architect cSystem the assay can be performed with the same turnaround times and availability as creatinine.
Thorlund, Kristian; Thabane, Lehana; Mills, Edward J
2013-01-11
Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.
Prediction models for clustered data: comparison of a random intercept and standard regression model
2013-01-01
Background: When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods: Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results: The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion: The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436
Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne
2013-02-15
When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
Wehner, Daniel T.; Ahlfors, Seppo P.; Mody, Maria
2007-01-01
Poor readers perform worse than their normal reading peers on a variety of speech perception tasks, which may be linked to their phonological processing abilities. The purpose of the study was to compare the brain activation patterns of normal and impaired readers on speech perception to better understand the phonological basis in reading disability. Whole-head magnetoencephalography (MEG) was recorded as good and poor readers, 7-13 years of age, performed an auditory word discrimination task. We used an auditory oddball paradigm in which the ‘deviant’ stimuli (/bat/, /kat/, /rat/) differed in the degree of phonological contrast (1 vs. 3 features) from a repeated standard word (/pat/). Both good and poor readers responded more slowly to deviants that were phonologically similar compared to deviants that were phonologically dissimilar to the standard word. Source analysis of the MEG data using Minimum Norm Estimation (MNE) showed that compared to good readers, poor readers had reduced left-hemisphere activation to the most demanding phonological condition reflecting their difficulties with phonological processing. Furthermore, unlike good readers, poor readers did not show differences in activation as a function of the degree of phonological contrast. These results are consistent with a phonological account of reading disability. PMID:17675109
FRP/steel composite damage acoustic emission monitoring and analysis
NASA Astrophysics Data System (ADS)
Li, Dongsheng; Chen, Zhi
2015-04-01
FRP is a relatively new material with good mechanical properties, such as high tensile strength, low density, good corrosion resistance and fatigue resistance. FRP/steel composites have found a wide range of applications in civil engineering because of their good performance, and as they become more widely used, monitoring their damage becomes increasingly important. Acoustic emission (AE) is a good choice for monitoring this composite. In this study, four identical specimens were prepared and tested. During testing, the AE characteristic parameters and mechanical properties were obtained, and the damage behavior of the FRP/steel composite was analyzed through the AE signals. The severity of the damage to the FRP/steel composite was estimated from the growing trend of the accumulated AE energy. The AE sentry function has been successfully used to study damage progression and fracture energy release rate in composite laminates; this technique combines the cumulative AE energy with the strain energy of the material rather than analyzing the AE and mechanical information separately.
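The sentry function mentioned above is commonly defined as the logarithm of the ratio of strain energy to cumulative acoustic emission energy; the sketch below uses that definition with synthetic load and AE data purely for illustration, not the specimens of this study.

```python
import numpy as np

def sentry_function(strain_energy, cumulative_ae_energy, eps=1e-12):
    """Sentry function f = ln(Es / Ea): strain energy over cumulative AE energy.

    A rising f indicates continued strain-energy storage capability; sudden drops
    mark significant internal damage (energy released acoustically).
    """
    return np.log((np.asarray(strain_energy) + eps) /
                  (np.asarray(cumulative_ae_energy) + eps))

# Toy usage: strain energy integrated from a load-displacement curve, AE energy from hits.
displacement = np.linspace(0, 5.0, 200)
load = 8.0 * displacement                                      # idealized linear response
strain_energy = np.cumsum(load * np.gradient(displacement))    # running integral of the curve
ae_energy = np.cumsum(np.random.default_rng(0).exponential(0.05, size=200))
f = sentry_function(strain_energy, ae_energy)
```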
Comparing spatial regression to random forests for large ...
Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records. In this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (MMI) at 1859 stream sites with over 200 landscape covariates. Our primary goal is predicting MMI at over 1.1 million perennial stream reaches across the USA. For spatial regression modeling, we develop two new methods to accommodate large data: (1) a procedure that estimates optimal Box-Cox transformations to linearize covariate relationships; and (2) a computationally efficient covariate selection routine that takes into account spatial autocorrelation. We show that our new methods lead to cross-validated performance similar to random forests, but that there is an advantage for spatial regression when quantifying the uncertainty of the predictions. Simulations are used to clarify advantages for each method. This research investigates different approaches for modeling and mapping national stream condition. We use MMI data from the EPA's National Rivers and Streams Assessment and predictors from StreamCat (Hill et al., 2015). Previous studies have focused on modeling the MMI condition classes (i.e., good, fair, and po
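A hedged sketch of one ingredient mentioned above: estimating a Box-Cox exponent that best linearizes a covariate-response relationship via a simple grid search on the correlation. This is an illustration of the idea only, not the covariate-selection or transformation-estimation routine developed in the study.

```python
import numpy as np
from scipy.stats import pearsonr

def best_boxcox_lambda(x, y, lambdas=np.linspace(-2, 2, 81)):
    """Pick the Box-Cox exponent that best linearizes the x-y relationship.

    The covariate must be positive; the score is the absolute Pearson correlation
    of the transformed covariate with the response.
    """
    def transform(x, lam):
        return np.log(x) if abs(lam) < 1e-8 else (x**lam - 1.0) / lam
    scores = [abs(pearsonr(transform(x, lam), y)[0]) for lam in lambdas]
    return lambdas[int(np.argmax(scores))]

# Toy usage: the response depends on sqrt(x), so the selected lambda should be near 0.5.
rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, 500)
y = 3.0 * np.sqrt(x) + rng.normal(0, 0.2, 500)
print(best_boxcox_lambda(x, y))
```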
NASA Astrophysics Data System (ADS)
Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.
2017-08-01
A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecast and observation. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and Brue. Forecasts in retrospect were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
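A minimal sketch of the instance-based idea: look up the k most similar historical hydrometeorological conditions and take empirical quantiles of their past forecast errors as the uncertainty band around the current deterministic forecast. The predictors, error model, and 90% band below are illustrative choices, not the paper's configuration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_uncertainty(history_X, history_errors, current_x, forecast, k=50,
                    quantiles=(0.05, 0.95)):
    """Prediction interval from the errors of the k most similar past cases."""
    nn = NearestNeighbors(n_neighbors=k).fit(history_X)
    _, idx = nn.kneighbors(np.atleast_2d(current_x))
    neighbour_errors = history_errors[idx[0]]
    lo, hi = np.quantile(neighbour_errors, quantiles)
    return forecast + lo, forecast + hi

# Toy usage with made-up predictors (e.g., rainfall, antecedent flow) and past errors.
rng = np.random.default_rng(0)
history_X = rng.normal(size=(2000, 2))
history_errors = rng.normal(0, 1 + np.abs(history_X[:, 0]))   # heteroscedastic errors
print(knn_uncertainty(history_X, history_errors, [1.5, 0.2], forecast=10.0))
```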
NASA Astrophysics Data System (ADS)
Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.
2017-12-01
In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with fixed forgetting factor when applied to distributed parameter and spectrum estimation. The simulation results also demonstrate a good match with our analytical expressions.
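A hedged, single-node sketch of recursive least squares with an error-driven variable forgetting factor, in the spirit of (but not identical to) the diffusion scheme analyzed above; the adaptation rule below is a simple heuristic placeholder.

```python
import numpy as np

def vff_rls(u, d, order=4, lam_min=0.90, lam_max=0.9999, alpha=0.1):
    """Recursive least squares with a simple error-driven variable forgetting factor."""
    w = np.zeros(order)
    P = 1e3 * np.eye(order)
    lam = lam_max
    d_hat = np.zeros(len(d))
    for n in range(order, len(d)):
        x = u[n - order:n][::-1]                   # regressor (most recent sample first)
        e_prior = d[n] - w @ x                     # a priori error
        k = P @ x / (lam + x @ P @ x)              # gain vector
        w = w + k * e_prior
        P = (P - np.outer(k, x @ P)) / lam
        e_post = d[n] - w @ x                      # a posteriori error
        # Heuristic: a large a posteriori error shortens the memory (smaller lambda).
        lam = np.clip(lam_max - alpha * e_post**2, lam_min, lam_max)
        d_hat[n] = w @ x
    return w, d_hat

# Usage: weights, estimate = vff_rls(input_signal, desired_signal)
```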
Zhou, Wen; Wang, Guifen; Li, Cai; Xu, Zhantang; Cao, Wenxi; Shen, Fang
2017-10-20
Phytoplankton cell size is an important property that affects diverse ecological and biogeochemical processes, and analysis of the absorption and scattering spectra of phytoplankton can provide important information about phytoplankton size. In this study, an inversion method for extracting quantitative phytoplankton cell size data from these spectra was developed. This inversion method requires two inputs: chlorophyll a specific absorption and scattering spectra of phytoplankton. The average equivalent-volume spherical diameter (ESDv) was calculated as the single size approximation for the log-normal particle size distribution (PSD) of the algal suspension. The performance of this method for retrieving cell size was assessed using the datasets from cultures of 12 phytoplankton species. The estimations of a(λ) and b(λ) for the phytoplankton population using ESDv had mean error values of 5.8%-6.9% and 7.0%-10.6%, respectively, compared to the a(λ) and b(λ) for the phytoplankton populations using the log-normal PSD. The estimated values of Ci(ESDv) were in good agreement with the measurements, with r² = 0.88 and relative root mean square error (NRMSE) = 25.3%, and relatively good performances were also found for the retrieval of ESDv, with r² = 0.78 and NRMSE = 23.9%.
Sancheti, J. S.; Shaikh, M. F.; Khatwani, P. F.; Kulkarni, Savita R.; Sathaye, Sadhana
2013-01-01
A new robust, simple and economic high performance thin layer chromatographic method was developed for simultaneous estimation of L-glutamic acid and γ-amino butyric acid in brain homogenate. The high performance thin layer chromatographic separation of these amino acids was achieved using n-butanol:glacial acetic acid:water (22:3:5 v/v/v) as mobile phase and ninhydrin as a derivatising agent. Quantitation of the method was achieved by densitometric method at 550 nm over the concentration range of 10-100 ng/spot. This method showed good separation of amino acids in the brain homogenate with Rf value of L-glutamic acid and γ-amino butyric acid as 21.67±0.58 and 33.67±0.58, respectively. The limit of detection and limit of quantification for L-glutamic acid was found to be 10 and 20 ng and for γ-amino butyric acid it was 4 and 10 ng, respectively. The method was also validated in terms of accuracy, precision and repeatability. The developed method was found to be precise and accurate with good reproducibility and shows promising applicability for studying pathological status of disease and therapeutic significance of drug treatment. PMID:24591747
Sancheti, J S; Shaikh, M F; Khatwani, P F; Kulkarni, Savita R; Sathaye, Sadhana
2013-11-01
A new robust, simple and economic high performance thin layer chromatographic method was developed for simultaneous estimation of L-glutamic acid and γ-amino butyric acid in brain homogenate. The high performance thin layer chromatographic separation of these amino acids was achieved using n-butanol:glacial acetic acid:water (22:3:5 v/v/v) as mobile phase and ninhydrin as a derivatising agent. Quantitation of the method was achieved by densitometric method at 550 nm over the concentration range of 10-100 ng/spot. This method showed good separation of amino acids in the brain homogenate with Rf value of L-glutamic acid and γ-amino butyric acid as 21.67±0.58 and 33.67±0.58, respectively. The limit of detection and limit of quantification for L-glutamic acid was found to be 10 and 20 ng and for γ-amino butyric acid it was 4 and 10 ng, respectively. The method was also validated in terms of accuracy, precision and repeatability. The developed method was found to be precise and accurate with good reproducibility and shows promising applicability for studying pathological status of disease and therapeutic significance of drug treatment.
The performance of the Congruence Among Distance Matrices (CADM) test in phylogenetic analysis
2011-01-01
Background: CADM is a statistical test used to estimate the level of Congruence Among Distance Matrices. It has been shown in previous studies to have a correct rate of type I error and good power when applied to dissimilarity matrices and to ultrametric distance matrices. Contrary to most other tests of incongruence used in phylogenetic analysis, the null hypothesis of the CADM test assumes complete incongruence of the phylogenetic trees instead of congruence. In this study, we performed computer simulations to assess the type I error rate and power of the test. It was applied to additive distance matrices representing phylogenies and to genetic distance matrices obtained from nucleotide sequences of different lengths that were simulated on randomly generated trees of varying sizes, and under different evolutionary conditions. Results: Our results showed that the test has an accurate type I error rate and good power. As expected, power increased with the number of objects (i.e., taxa), the number of partially or completely congruent matrices and the level of congruence among distance matrices. Conclusions: Based on our results, we suggest that CADM is an excellent candidate to test for congruence and, when present, to estimate its level in phylogenomic studies where numerous genes are analysed simultaneously. PMID:21388552
NASA Technical Reports Server (NTRS)
Troudet, T.; Garg, S.; Merrill, W.
1992-01-01
The design of a dynamic neurocontroller with good robustness properties is presented for a multivariable aircraft control problem. The internal dynamics of the neurocontroller are synthesized by a state estimator feedback loop. The neurocontrol is generated by a multilayer feedforward neural network which is trained through backpropagation to minimize an objective function that is a weighted sum of tracking errors, and control input commands and rates. The neurocontroller exhibits good robustness through stability margins in phase and vehicle output gains. By maintaining performance and stability in the presence of sensor failures in the error loops, the structure of the neurocontroller is also consistent with the classical approach of flight control design.
Timing performance of the silicon PET insert probe
Studen, A.; Burdette, D.; Chesi, E.; Cindro, V.; Clinthorne, N. H.; Cochran, E.; Grošičar, B.; Kagan, H.; Lacasta, C.; Linhart, V.; Mikuž, M.; Stankova, V.; Weilhammer, P.; Žontar, D.
2010-01-01
Simulation indicates that PET image could be improved by upgrading a conventional ring with a probe placed close to the imaged object. In this paper, timing issues related to a PET probe using high-resistivity silicon as a detector material are addressed. The final probe will consist of several (four to eight) 1-mm thick layers of silicon detectors, segmented into 1 × 1 mm² pads, each pad equivalent to an independent p⁺nn⁺ diode. A proper matching of events in silicon with events of the external ring can be achieved with a good timing resolution. To estimate the timing performance, measurements were performed on a simplified model probe, consisting of a single 1-mm thick detector with 256 square pads (1.4 mm side), coupled with two VATAGP7s, application-specific integrated circuits. The detector material and electronics are the same that will be used for the final probe. The model was exposed to 511 keV annihilation photons from a ²²Na source, and a scintillator (LYSO)–PMT assembly was used as a timing reference. Results were compared with the simulation, consisting of four parts: (i) GEANT4 implemented realistic tracking of electrons excited by annihilation photon interactions in silicon, (ii) calculation of propagation of secondary ionisation (electron–hole pairs) in the sensor, (iii) estimation of the shape of the current pulse induced on surface electrodes and (iv) simulation of the first electronics stage. A very good agreement between the simulation and the measurements were found. Both indicate reliable performance of the final probe at timing windows down to 20 ns. PMID:20215445
Timing performance of the silicon PET insert probe.
Studen, A; Burdette, D; Chesi, E; Cindro, V; Clinthorne, N H; Cochran, E; Grosicar, B; Kagan, H; Lacasta, C; Linhart, V; Mikuz, M; Stankova, V; Weilhammer, P; Zontar, D
2010-01-01
Simulation indicates that PET image could be improved by upgrading a conventional ring with a probe placed close to the imaged object. In this paper, timing issues related to a PET probe using high-resistivity silicon as a detector material are addressed. The final probe will consist of several (four to eight) 1-mm thick layers of silicon detectors, segmented into 1 x 1 mm(2) pads, each pad equivalent to an independent p + nn+ diode. A proper matching of events in silicon with events of the external ring can be achieved with a good timing resolution. To estimate the timing performance, measurements were performed on a simplified model probe, consisting of a single 1-mm thick detector with 256 square pads (1.4 mm side), coupled with two VATAGP7s, application-specific integrated circuits. The detector material and electronics are the same that will be used for the final probe. The model was exposed to 511 keV annihilation photons from an (22)Na source, and a scintillator (LYSO)-PMT assembly was used as a timing reference. Results were compared with the simulation, consisting of four parts: (i) GEANT4 implemented realistic tracking of electrons excited by annihilation photon interactions in silicon, (ii) calculation of propagation of secondary ionisation (electron-hole pairs) in the sensor, (iii) estimation of the shape of the current pulse induced on surface electrodes and (iv) simulation of the first electronics stage. A very good agreement between the simulation and the measurements were found. Both indicate reliable performance of the final probe at timing windows down to 20 ns.
[Estimation of the quality of life 8-year-old child with bilateral anophthalmia].
Sredzińska-Kita, Dorota; Mrugacz, Małgorzata; Bakunowicz-Łazarczyk, Alina
2009-01-01
The main aim of our work was to assess the physical and psychomotor development and the adaptation to daily life of an 8-year-old girl with congenital bilateral anophthalmia. Basic ophthalmic, pediatric and neurological examinations were performed, together with additional genetic and radiological examinations. The ophthalmic and MRI examinations revealed the absence of the eyeballs, optic nerves, optic chiasm, optic tracts and optic radiation. Anophthalmia greatly limits the child's ability to live independently. However, the girl's current development indicates good intellectual abilities, which is promising for a future independent life and professional work.
Improved optical flow motion estimation for digital image stabilization
NASA Astrophysics Data System (ADS)
Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao
2015-11-01
Optical flow is the instantaneous motion vector at each pixel of the image frame at a given time instant. The gradient-based approach to optical flow computation does not work well when the video motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramidal multi-resolution coarse-to-fine search strategy: a pyramid is used to obtain multi-resolution images; the inter-frame affine parameters are obtained iteratively from the highest level down to the lowest level; and subsequent frames are compensated back to the first frame to obtain a stabilized sequence. The experimental results demonstrate that the proposed method performs well in global motion estimation.
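A hedged sketch of the coarse-to-fine idea using OpenCV's pyramidal Lucas-Kanade tracker and a global affine fit; the function choices and parameters are illustrative, not the authors' implementation.

```python
import cv2
import numpy as np

def estimate_global_motion(prev_gray, curr_gray, max_corners=200):
    """Track features with pyramidal Lucas-Kanade and fit a global affine motion."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=8)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts_prev, None,
        winSize=(21, 21), maxLevel=4)              # 4 pyramid levels: coarse-to-fine search
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    affine, _ = cv2.estimateAffinePartial2D(good_prev, good_curr)
    return affine                                   # 2x3 matrix: rotation, scale, translation

# Stabilization idea: warp each frame back toward the reference frame with the inverse motion.
# stabilized = cv2.warpAffine(frame, cv2.invertAffineTransform(affine), (width, height))
```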
Variations in Carboxyhaemoglobin Levels in Smokers
Castleden, C. M.; Cole, P. V.
1974-01-01
Three experiments on smokers have been performed to determine variations in blood levels of carboxyhaemoglobin (COHb) throughout the day and night and whether a random measurement of COHb gives a true estimation of a smoker's mean COHb level. In the individual smoker the COHb level does not increase gradually during the day but is kept within relatively narrow limits. Moderately heavy smokers rise in the morning with a substantially raised COHb level because the half life of COHb is significantly longer during sleep than during the day. Women excrete their carbon monoxide faster than men. A random COHb estimation gives a good indication of the mean COHb level of an individual. PMID:4441877
Neuro-genetic non-invasive temperature estimation: intensity and spatial prediction.
Teixeira, César A; Ruano, M Graça; Ruano, António E; Pereira, Wagner C A
2008-06-01
The existence of proper non-invasive temperature estimators is an essential aspect when thermal therapy applications are envisaged. These estimators must be good predictors to enable temperature estimation at different operational situations, providing better control of the therapeutic instrumentation. In this work, radial basis functions artificial neural networks were constructed to access temperature evolution on an ultrasound insonated medium. The employed models were radial basis functions neural networks with external dynamics induced by their inputs. Both the most suited set of model inputs and number of neurons in the network were found using the multi-objective genetic algorithm. The neural models were validated in two situations: the operating ones, as used in the construction of the network; and in 11 unseen situations. The new data addressed two new spatial locations and a new intensity level, assessing the intensity and space prediction capacity of the proposed model. Good performance was obtained during the validation process both in terms of the spatial points considered and whenever the new intensity level was within the range of applied intensities. A maximum absolute error of 0.5 degrees C+/-10% (0.5 degrees C is the gold-standard threshold in hyperthermia/diathermia) was attained with low computationally complex models. The results confirm that the proposed neuro-genetic approach enables foreseeing temperature propagation, in connection to intensity and space parameters, thus enabling the assessment of different operating situations with proper temperature resolution.
Regularized minimum I-divergence methods for the inverse blackbody radiation problem
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin
2006-08-01
This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
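A minimal sketch of the multiplicative minimum I-divergence iteration with Green's one-step-late regularization. A simple quadratic smoothness penalty stands in for Good's roughness to keep the example short, and the kernel K would be built from Planck's law in the actual inverse blackbody problem.

```python
import numpy as np

def min_idiv_osl(y, K, n_iter=200, beta=0.01):
    """Multiplicative minimum I-divergence iterations (Richardson-Lucy form)
    with Green's one-step-late (OSL) regularization.

    y : measured non-negative power spectrum
    K : non-negative kernel matrix (from Planck's law in the blackbody problem)
    beta : regularization strength. The quadratic smoothness penalty used here is a
    placeholder for Good's roughness; its derivative, evaluated at the previous
    iterate, enters the denominator (the OSL trick).
    """
    x = np.full(K.shape[1], y.mean() / K.sum(axis=0).mean())
    ones = K.T @ np.ones_like(y)
    for _ in range(n_iter):
        penalty_grad = -np.gradient(np.gradient(x))       # derivative of 0.5*|grad x|^2
        ratio = K.T @ (y / np.clip(K @ x, 1e-12, None))
        x = x * ratio / np.clip(ones + beta * penalty_grad, 1e-12, None)
    return x

# Usage: area_temperature_dist = min_idiv_osl(measured_spectrum, planck_kernel)
```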
NASA Astrophysics Data System (ADS)
Victor, Rodolfo A.; Prodanović, Maša.; Torres-Verdín, Carlos
2017-12-01
We develop a new Monte Carlo-based inversion method for estimating electron density and effective atomic number from 3-D dual-energy computed tomography (CT) core scans. The method accounts for uncertainties in X-ray attenuation coefficients resulting from the polychromatic nature of X-ray beam sources of medical and industrial scanners, in addition to delivering uncertainty estimates of inversion products. Estimation of electron density and effective atomic number from CT core scans enables direct deterministic or statistical correlations with salient rock properties for improved petrophysical evaluation; this condition is specifically important in media such as vuggy carbonates where CT resolution better captures core heterogeneity that dominates fluid flow properties. Verification tests of the inversion method performed on a set of highly heterogeneous carbonate cores yield very good agreement with in situ borehole measurements of density and photoelectric factor.
The value of volume and growth measurements in timber sales management of the National Forests
NASA Technical Reports Server (NTRS)
Lietzke, K. R.
1977-01-01
This paper summarizes work performed in the estimation of gross social value of timber volume and growth rate information used in making regional harvest decisions in the National Forest System. A model was developed to permit parametric analysis. The problem is formulated as one of finding optimal inventory holding patterns. Public timber management differs from other inventory holding problems in that the inventory, itself, generates value over time in providing recreational, aesthetic and environmental goods. 'Nontimber' demand estimates are inferred from past Forest Service harvest and sales levels. The solution requires a description of the harvest rates which maintain the optimum inventory level. Gross benefits of the Landsat systems are estimated by comparison with Forest Service information gathering models. Gross annual benefits are estimated to be $5.9 million for the MSS system and $7.2 million for the TM system.
Mathematics skills in good readers with hydrocephalus.
Barnes, Marcia A; Pengelly, Sarah; Dennis, Maureen; Wilkinson, Margaret; Rogers, Tracey; Faulkner, Heather
2002-01-01
Children with hydrocephalus have poor math skills. We investigated the nature of their arithmetic computation errors by comparing written subtraction errors in good readers with hydrocephalus, typically developing good readers of the same age, and younger children matched for math level to the children with hydrocephalus. Children with hydrocephalus made more procedural errors (although not more fact retrieval or visual-spatial errors) than age-matched controls; they made the same number of procedural errors as younger, math-level matched children. We also investigated a broad range of math abilities, and found that children with hydrocephalus performed more poorly than age-matched controls on tests of geometry and applied math skills such as estimation and problem solving. Computation deficits in children with hydrocephalus reflect delayed development of procedural knowledge. Problems in specific math domains such as geometry and applied math, were associated with deficits in constituent cognitive skills such as visual spatial competence, memory, and general knowledge.
Shriner, Susan A; VanDalen, Kaci K; Root, J Jeffrey; Sullivan, Heather J
2016-02-01
The availability of a validated commercial assay is an asset for any wildlife investigation. However, commercial products are often developed for use in livestock and are not optimized for wildlife. Consequently, it is incumbent upon researchers and managers to apply commercial products appropriately to optimize program outcomes. We tested more than 800 serum samples from mallards for antibodies to influenza A virus with the IDEXX AI MultiS-Screen Ab test to evaluate assay performance. Applying the test per manufacturer's recommendations resulted in good performance with 84% sensitivity and 100% specificity. However, performance was improved to 98% sensitivity and 98% specificity by increasing the recommended cut-off. Using this alternative threshold for identifying positive and negative samples would greatly improve sample classification, especially for field samples collected months after infection when antibody titers have waned from the initial primary immune response. Furthermore, a threshold that balances sensitivity and specificity reduces estimation bias in seroprevalence estimates. Published by Elsevier B.V.
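A hedged sketch of re-deriving such an alternative cut-off from known-status sera by maximizing Youden's J (sensitivity + specificity - 1), assuming a blocking-ELISA readout in which lower sample-to-negative ratios indicate antibody presence; the numbers are simulated, not the kit's values.

```python
import numpy as np
from sklearn.metrics import roc_curve

def balanced_cutoff(truth, sample_to_negative_ratio):
    """Threshold on the assay S/N ratio that maximizes Youden's J = sens + spec - 1.

    For a blocking ELISA, LOWER S/N indicates antibody presence, so the ratio is
    negated before computing the ROC curve and the threshold is negated back.
    """
    fpr, tpr, thresholds = roc_curve(truth, -np.asarray(sample_to_negative_ratio))
    j = tpr - fpr
    best = np.argmax(j)
    return -thresholds[best], tpr[best], 1 - fpr[best]

# Toy usage with simulated S/N values for known-positive and known-negative sera.
rng = np.random.default_rng(0)
ratios = np.concatenate([rng.normal(0.4, 0.15, 300), rng.normal(0.9, 0.1, 500)])
truth = np.concatenate([np.ones(300), np.zeros(500)])
print(balanced_cutoff(truth, ratios))   # cut-off, sensitivity, specificity
```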
NASA Astrophysics Data System (ADS)
Jeziorska, Justyna; Niedzielski, Tomasz
2018-03-01
River basins located in the Central Sudetes (SW Poland) demonstrate a high vulnerability to flooding. Four mountainous basins and the corresponding outlets have been chosen for modeling the streamflow dynamics using TOPMODEL, a physically based semi-distributed topohydrological model. The model has been calibrated using the Monte Carlo approach—with discharge, rainfall, and evapotranspiration data used to estimate the parameters. The overall performance of the model was judged by interpreting the efficiency measures. TOPMODEL was able to reproduce the main pattern of the hydrograph with acceptable accuracy for two of the investigated catchments. However, it failed to simulate the hydrological response in the remaining two catchments. The best performing data set obtained Nash-Sutcliffe efficiency of 0.78. This data set was chosen to conduct a detailed analysis aiming to estimate the optimal timespan of input data for which TOPMODEL performs best. The best fit was attained for the half-year time span. The model was validated and found to reveal good skills.
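A minimal sketch of the calibration loop implied above: draw parameter sets at random, run the hydrological model, and keep the set with the highest Nash-Sutcliffe efficiency. The parameter names and ranges are illustrative, and run_topmodel is a hypothetical wrapper rather than an actual package function.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def monte_carlo_calibrate(run_model, param_ranges, obs, n_samples=10000, seed=0):
    """Sample parameter sets uniformly and keep the one with the highest NSE."""
    rng = np.random.default_rng(seed)
    best = (-np.inf, None)
    for _ in range(n_samples):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
        nse = nash_sutcliffe(obs, run_model(**params))
        if nse > best[0]:
            best = (nse, params)
    return best

# Usage sketch (run_topmodel would wrap a TOPMODEL implementation; ranges are illustrative):
# best_nse, best_params = monte_carlo_calibrate(
#     run_topmodel, {"m": (0.001, 0.1), "ln_te": (-2.0, 5.0), "srmax": (0.005, 0.3)}, observed_q)
```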
Mendez Astudillo, Jorge; Lau, Lawrence; Tang, Yu-Ting; Moore, Terry
2018-02-14
As Global Navigation Satellite System (GNSS) signals travel through the troposphere, a tropospheric delay occurs due to a change in the refractive index of the medium. The Precise Point Positioning (PPP) technique can achieve centimeter/millimeter positioning accuracy with only one GNSS receiver. The Zenith Tropospheric Delay (ZTD) is estimated alongside with the position unknowns in PPP. Estimated ZTD can be very useful for meteorological applications, an example is the estimation of water vapor content in the atmosphere from the estimated ZTD. PPP is implemented with different algorithms and models in online services and software packages. In this study, a performance assessment with analysis of ZTD estimates from three PPP online services and three software packages is presented. The main contribution of this paper is to show the accuracy of ZTD estimation achievable in PPP. The analysis also provides the GNSS users and researchers the insight of the processing algorithm dependence and impact on PPP ZTD estimation. Observation data of eight whole days from a total of nine International GNSS Service (IGS) tracking stations spread in the northern hemisphere, the equatorial region and the southern hemisphere is used in this analysis. The PPP ZTD estimates are compared with the ZTD obtained from the IGS tropospheric product of the same days. The estimates of two of the three online PPP services show good agreement (<1 cm) with the IGS ZTD values at the northern and southern hemisphere stations. The results also show that the online PPP services perform better than the selected PPP software packages at all stations.
Validation of the DRAGON score in 12 stroke centers in anterior and posterior circulation.
Strbian, Daniel; Seiffge, David J; Breuer, Lorenz; Numminen, Heikki; Michel, Patrik; Meretoja, Atte; Coote, Skye; Bordet, Régis; Obach, Victor; Weder, Bruno; Jung, Simon; Caso, Valeria; Curtze, Sami; Ollikainen, Jyrki; Lyrer, Philippe A; Eskandari, Ashraf; Mattle, Heinrich P; Chamorro, Angel; Leys, Didier; Bladin, Christopher; Davis, Stephen M; Köhrmann, Martin; Engelter, Stefan T; Tatlisumak, Turgut
2013-10-01
The DRAGON score predicts functional outcome in the hyperacute phase of intravenous thrombolysis treatment of ischemic stroke patients. We aimed to validate the score in a large multicenter cohort in anterior and posterior circulation. Prospectively collected data of consecutive ischemic stroke patients who received intravenous thrombolysis in 12 stroke centers were merged (n=5471). We excluded patients lacking data necessary to calculate the score and patients with missing 3-month modified Rankin scale scores. The final cohort comprised 4519 eligible patients. We assessed the performance of the DRAGON score with area under the receiver operating characteristic curve in the whole cohort for both good (modified Rankin scale score, 0-2) and miserable (modified Rankin scale score, 5-6) outcomes. Area under the receiver operating characteristic curve was 0.84 (0.82-0.85) for miserable outcome and 0.82 (0.80-0.83) for good outcome. Proportions of patients with good outcome were 96%, 93%, 78%, and 0% for 0 to 1, 2, 3, and 8 to 10 score points, respectively. Proportions of patients with miserable outcome were 0%, 2%, 4%, 89%, and 97% for 0 to 1, 2, 3, 8, and 9 to 10 points, respectively. When tested separately for anterior and posterior circulation, there was no difference in performance (P=0.55); areas under the receiver operating characteristic curve were 0.84 (0.83-0.86) and 0.82 (0.78-0.87), respectively. No sex-related difference in performance was observed (P=0.25). The DRAGON score showed very good performance in the large merged cohort in both anterior and posterior circulation strokes. The DRAGON score provides rapid estimation of patient prognosis and supports clinical decision-making in the hyperacute phase of stroke care (eg, when invasive add-on strategies are considered).
Neural and Neural Gray-Box Modeling for Entry Temperature Prediction in a Hot Strip Mill
NASA Astrophysics Data System (ADS)
Barrios, José Angel; Torres-Alvarado, Miguel; Cavazos, Alberto; Leduc, Luis
2011-10-01
In hot strip mills, initial controller set points have to be calculated before the steel bar enters the mill. Calculations rely on the good knowledge of rolling variables. Measurements are available only after the bar has entered the mill, and therefore they have to be estimated. Estimation of process variables, particularly that of temperature, is of crucial importance for the bar front section to fulfill quality requirements, and the same must be performed in the shortest possible time to preserve heat. Currently, temperature estimation is performed by physical modeling; however, it is highly affected by measurement uncertainties, variations in the incoming bar conditions, and final product changes. In order to overcome these problems, artificial intelligence techniques such as artificial neural networks and fuzzy logic have been proposed. In this article, neural network-based systems, including neural-based Gray-Box models, are applied to estimate scale breaker entry temperature, given its importance, and their performance is compared to that of the physical model used in plant. Several neural systems and several neural-based Gray-Box models are designed and tested with real data. Taking advantage of the flexibility of neural networks for input incorporation, several factors which are believed to have influence on the process are also tested. The systems proposed in this study were proven to have better performance indexes and hence better prediction capabilities than the physical models currently used in plant.
Synchrophasor Data Correction under GPS Spoofing Attack: A State Estimation Based Approach
Fan, Xiaoyuan; Du, Liang; Duan, Dongliang
2017-02-01
GPS spoofing attack (GSA) has been shown to be one of the most imminent threats to almost all cyber-physical systems incorporated with the civilian GPS signal. Specifically, for our current agenda of the modernization of the power grid, this may greatly jeopardize the benefits provided by the pervasively installed phasor measurement units (PMU). In this study, we consider the case where synchrophasor data from PMUs are compromised due to the presence of a single GSA, and show that it can be corrected by signal processing techniques. In particular, we introduce a statistical model for synchrophasor-based power system state estimation (SE), and then derive the spoofing-matched algorithms for synchrophasor data correction against GPS spoofing attack. Different testing scenarios in IEEE 14-, 30-, 57-, 118-bus systems are simulated to show the proposed algorithms’ performance on GSA detection and state estimation. Numerical results demonstrate that our proposed algorithms can consistently locate and correct the spoofed synchrophasor data with good accuracy as long as the system observability is satisfied. Finally, the accuracy of state estimation is significantly improved compared with the traditional weighted least square method and approaches the performance under the Genie-aided method.
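For contrast with the proposed spoofing-matched algorithms, here is a minimal sketch of the conventional weighted least squares state estimator mentioned above, written in a linear (DC-approximation) form for brevity; it is not the authors' algorithm.

```python
import numpy as np

def wls_state_estimate(H, z, R_diag):
    """Weighted least squares state estimation: minimize (z - Hx)^T R^-1 (z - Hx).

    H      : measurement matrix (linear/DC approximation here for brevity)
    z      : measurement vector (e.g., synchrophasor angles and flows)
    R_diag : measurement error variances
    """
    W = np.diag(1.0 / np.asarray(R_diag))
    G = H.T @ W @ H                          # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)
    residuals = z - H @ x_hat                # large residuals flag suspect (e.g., spoofed) data
    return x_hat, residuals
```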
Bayesian Inference for Generalized Linear Models for Spiking Neurons
Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias
2010-01-01
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627
High-performance heat pipes for heat recovery applications
NASA Technical Reports Server (NTRS)
Saaski, E. W.; Hartl, J. H.
1980-01-01
Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.
NASA Astrophysics Data System (ADS)
Lunquist, Johan
PAMELA is a satellite-borne cosmic ray experiment. Its primary scientific objective is to study the antiproton and positron components of the cosmic radiation. This will be done with unprecedented statistics over a wide energy range (~10MeV to ~100GeV). The PAMELA experiment consists of a permanent magnetic spectrometer, an electromagnetic calorimeter, a Time-of-Flight system, a neutron detector and a shower tail catcher. An anticoincidence (AC) system surrounds the spectrometer to detect particles which do not pass cleanly through the acceptance of the spectrometer. PAMELA will be mounted on a Russian Earth-observation satellite, and the launch is scheduled for 2006. The anticoincidence system for PAMELA has been developed by KTH, and consists of plastic scintillator detectors with photomultiplier tube read-out. Extensive testing has been performed during the development phase. Results are presented for environmental tests, tests with cosmic-rays and particle beams. The design of the digital part of the AC electronics has been realised on an FPGA (Field Programmable Gate Array) and a DSP (Digital Signal Processor). It records signals from the 16 AC photomultipliers and from various sensors for over-current and temperature. It also provides functionality for setting the photomultiplier discrimination thresholds, system testing, issuing alarms and communication with the PAMELA main data acquisition system. The design philosophy and functionality need to be reliable and suitable for use in a space environment. To evaluate the performance of the AC detectors, a test utilizing cosmic-rays has been performed. The primary aim of the test was to calibrate the individual channels to gain knowledge of suitable discriminator levels for flight. A secondary aim was to estimate the AC detector efficiency. A lower limit of (99.89±0.04)% was obtained. An in-orbit simulation study was performed using protons to estimate trigger rates and investigate the AC system performance in a second level trigger. The average orbital trigger rate was estimated to be (8.4±0.6)Hz, consisting of (2.0±0.2)Hz good triggers and (6.4±0.5)Hz background. Inclusion of the AC system in the trigger condition to reduce background (for the purpose of data handling capacity) leads to losses of good triggers due to backscattering from the calorimeter (90% loss for 300GeV electrons and 25% for 100GeV protons). A method, using the calorimeter, for identifying backscattering events was investigated and found to reduce the loss of good events to below 1% (300GeV electrons) and 5% (100GeV protons), while maintaining a background reduction of 70%.
NASA Astrophysics Data System (ADS)
Dion, Lukas; Gaboury, Simon; Picard, Frédéric; Kiss, Laszlo I.; Poncsak, Sandor; Morais, Nadia
2018-04-01
Recent investigations on aluminum electrolysis cells demonstrated limitations of the commonly used tier-3 slope methodology to estimate perfluorocarbon (PFC) emissions from high-voltage anode effects (HVAEs). These limitations are greater for smelters with a reduced HVAE frequency. A novel approach is proposed to estimate the specific emissions resulting from individual HVAEs using a tier-2 model, instead of estimating monthly emissions for pot lines with the slope methodology. This approach considers the nonlinear behavior of PFC emissions as a function of the polarized anode effect duration but also integrates the change in behavior attributed to cell productivity. Validation was performed by comparing the new approach and the slope methodology with measurement campaigns from different smelters. The results demonstrate good agreement between measured and estimated emissions and more accurately reflect the individual HVAE dynamics occurring over time. Finally, the possible impact of this approach for the aluminum industry is discussed.
A Height Estimation Approach for Terrain Following Flights from Monocular Vision.
Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz
2016-12-06
In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer-available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information, to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy.
Structuring as an Aid to Performance in Base-Rate Problems.
1988-06-01
Design. All subjects were given two base-rate problems, here called the Lightbulb problem (adapted from Lyon & Slovic, 1976) and the Dyslexia problem; both...are shown in Table 1. Approximately half the subjects received the Lightbulb problem first; the others received the Dyslexia problem first. The two...probability that this bulb is really defective? [the child really has dyslexia ]? You can probably give a good estimate if you think hard and carefully
The use and misuse of aircraft and missile RCS statistics
NASA Astrophysics Data System (ADS)
Bishop, Lee R.
1991-07-01
Both static and dynamic radar cross section (RCS) measurements are used for RCS predictions, but the static data are less complete than the dynamic. Integrated dynamic RCS data also have limitations for predicting radar detection performance. When raw static data are properly used, good first-order detection estimates are possible. The research to develop more-usable RCS statistics is reviewed, and windowing techniques for creating probability density functions from static RCS data are discussed.
Optimization of CW Fiber Lasers With Strong Nonlinear Cavity Dynamics
NASA Astrophysics Data System (ADS)
Shtyrina, O. V.; Efremov, S. A.; Yarutkina, I. A.; Skidin, A. S.; Fedoruk, M. P.
2018-04-01
In the present work, the equation for the saturated gain is derived from one-level gain equations describing the energy evolution inside the laser cavity. It is shown how to derive the parameters of the mathematical model from the experimental results. The numerically estimated energy and spectrum of the signal are in good agreement with the experiment. Also, the optimization of the output energy is performed for a given set of model parameters.
NASA Astrophysics Data System (ADS)
Wong, Pak-kin; Vong, Chi-man; Wong, Hang-cheong; Li, Ke
2010-05-01
Modern automotive spark-ignition (SI) power performance usually refers to output power and torque, and they are significantly affected by the setup of control parameters in the engine management system (EMS). EMS calibration is done empirically through tests on the dynamometer (dyno) because no exact mathematical engine model is yet available. With an emerging nonlinear function estimation technique of least squares support vector machines (LS-SVM), the approximate power performance model of an SI engine can be determined by training the sample data acquired from the dyno. A novel incremental algorithm based on typical LS-SVM is also proposed in this paper, so the power performance models built from the incremental LS-SVM can be updated whenever new training data arrive. By updating the models, their accuracy can be continuously increased. The predicted results using the estimated models from the incremental LS-SVM are in good agreement with the actual test results, with almost the same average accuracy as retraining the models from scratch, but the incremental algorithm can significantly shorten the model construction time when new training data arrive.
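As a rough illustration of the modelling machinery involved, the sketch below solves the standard (batch) LS-SVM regression dual system with an RBF kernel. It is an assumed, generic formulation: it does not reproduce the paper's incremental update rule or its engine data, and the toy inputs are placeholders.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the standard LS-SVM regression dual system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy usage: inputs stand in for illustrative control parameters, not real dyno data.
X = np.random.rand(30, 2)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, alpha, b, X[:5]))
```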
Three-dimensional ultrasound strain imaging of skeletal muscles
NASA Astrophysics Data System (ADS)
Gijsbertse, K.; Sprengers, A. M. J.; Nillesen, M. M.; Hansen, H. H. G.; Lopata, R. G. P.; Verdonschot, N.; de Korte, C. L.
2017-01-01
In this study, a multi-dimensional strain estimation method is presented to assess local relative deformation in three orthogonal directions in 3D space of skeletal muscles during voluntary contractions. A rigid translation and compressive deformation of a block phantom, that mimics muscle contraction, is used as experimental validation of the 3D technique and to compare its performance with respect to a 2D based technique. Axial, lateral and (in case of 3D) elevational displacements are estimated using a cross-correlation based displacement estimation algorithm. After transformation of the displacements to a Cartesian coordinate system, strain is derived using a least-squares strain estimator. The performance of both methods is compared by calculating the root-mean-squared error of the estimated displacements with the calculated theoretical displacements of the phantom experiments. We observe that the 3D technique delivers more accurate displacement estimations compared to the 2D technique, especially in the translation experiment where out-of-plane motion hampers the 2D technique. In vivo application of the 3D technique in the musculus vastus intermedius shows good resemblance between measured strain and the force pattern. Similarity of the strain curves of repetitive measurements indicates the reproducibility of voluntary contractions. These results indicate that 3D ultrasound is a valuable imaging tool to quantify complex tissue motion, especially when there is motion in three directions, which results in out-of-plane errors for 2D techniques.
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.
2006-01-01
The signal processing aspect of a 2-m wavelength coherent Doppler lidar system under development at NASA Langley Research Center in Virginia is investigated in this paper. The lidar system is named VALIDAR (validation lidar) and its signal processing program estimates and displays various wind parameters in real-time as data acquisition occurs. The goal is to improve the quality of the current estimates such as power, Doppler shift, wind speed, and wind direction, especially in the low signal-to-noise-ratio (SNR) regime. A novel Nonlinear Adaptive Doppler Shift Estimation Technique (NADSET) is developed for this purpose and its performance is analyzed using the wind data acquired over a long period of time by VALIDAR. The quality of Doppler shift and power estimations by conventional Fourier-transform-based spectrum estimation methods deteriorates rapidly as SNR decreases. NADSET compensates for such deterioration in the quality of wind parameter estimates by adaptively utilizing the statistics of Doppler shift estimates in a strong SNR range and identifying sporadic range bins where good Doppler shift estimates are found. The effectiveness of NADSET is established by comparing the trend of wind parameters with and without NADSET applied to the long-period lidar return data.
The Use of Neural Network Technology to Model Swimming Performance
Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida
2007-01-01
The aims of the present study were: to identify the factors which are able to explain the performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematic methods through artificial neural networks (multi-layer perceptrons) and to assess the neural network models' precision to predict the performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamics, hydrostatic and bioenergetics characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network was used (multilayer perceptron) with three neurons in a single hidden layer. The prognosis precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach in the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports. Key points: (1) the non-linear analysis resulting from the use of a feed-forward neural network allowed the development of four performance models; (2) the mean difference between the true and estimated results produced by each of the four neural network models was low; (3) the neural network tool can be a good approach to performance modeling as an alternative to standard statistical models that presume well-defined distributions and independence among all inputs; (4) the use of neural networks for sports science applications allowed us to create very realistic models for swimming performance prediction based on previously selected criteria related to the dependent variable (performance). PMID:24149233
Flodin, M; Jonsson, A-S; Hansson, L-O; Danielsson, L-A; Larsson, A
2007-01-01
Estimation of the glomerular filtration rate (GFR) is essential when evaluating patients with kidney disease and treating patients with drugs eliminated from the circulation by the kidneys. Cystatin C has been shown in several studies to be superior to creatinine in the estimation of GFR. At our hospitals, there is an increasing demand for cystatin C and at present we perform approximately 1500 cystatin C analyses a month. We thus need the assay available 24 h/day and to have it on our routine chemistry instrument to minimize handling time per test and time to reported test results. We have evaluated a new cystatin C immunoassay from Gentian (Gentian, Moss, Norway) on Architect ci8200 (Abbott Laboratories, Abbott Park, Ill., USA). A prerequisite at our hospital is that cystatin C results are reported as a calculated GFR in mL/min/1.73 m², so we also made a comparison with iohexol clearance. The Gentian cystatin C assay showed good agreement with the corresponding assay from Dade Behring (Deerfield, Ill., USA) and good inter-laboratory concordance. The assay has very low total imprecision, good linearity and strong correlation with iohexol clearance (R² = 0.956). The equation for the correlation curve is: y = 79.901·x^(−1.4389). There was low inter-laboratory variation between the three laboratories involved in the cystatin C evaluation, and thus all three laboratories can use the same equation for calculating the estimated GFR.
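A one-line sketch of the reported power-law fit is given below. It assumes x is the serum cystatin C concentration in mg/L and y the estimated GFR in mL/min/1.73 m² (the abstract does not state the variable assignment explicitly), so it is illustrative only and not a validated clinical formula.

```python
def egfr_from_cystatin_c(cys_c_mg_per_l: float) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from the power-law fit y = 79.901 * x**-1.4389
    reported against iohexol clearance (assignment of x and y assumed here)."""
    return 79.901 * cys_c_mg_per_l ** (-1.4389)

print(round(egfr_from_cystatin_c(1.0), 1))   # ~79.9 at 1.0 mg/L
```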
Grimm, Annegret; Gruber, Bernd; Henle, Klaus
2014-01-01
Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to evaluate the performance of mark-recapture population size estimators under field conditions, which is essential for selecting an appropriate method and obtaining reliable results in ecology and conservation biology, and thus for sound management. PMID:24896260
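For reference, the classic two-sample Lincoln-Petersen idea that several of the tested estimators build on can be sketched as below, here with Chapman's bias correction. The multiple Lincoln-Petersen and heterogeneity models evaluated in the study are not reproduced, and the numbers are illustrative.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of closed population size.
    n1: animals marked in the first sample
    n2: animals caught in the second sample
    m2: marked animals recaptured in the second sample
    Returns the point estimate and its approximate variance."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, var

n_hat, var = chapman_estimate(n1=60, n2=55, m2=20)
print(f"N ≈ {n_hat:.0f} (SE ≈ {var ** 0.5:.1f})")
```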
Automated extraction and validation of children's gait parameters with the Kinect.
Motiian, Saeid; Pergami, Paola; Guffey, Keegan; Mancinelli, Corrie A; Doretto, Gianfranco
2015-12-02
Gait analysis for therapy regimen prescription and monitoring requires patients to physically access clinics with specialized equipment. The timely availability of such infrastructure at the right frequency is especially important for small children. Besides being very costly, this is a challenge for many children living in rural areas. This is why this work develops a low-cost, portable, and automated approach for in-home gait analysis, based on the Microsoft Kinect. A robust and efficient method for extracting gait parameters is introduced, which copes with the high variability of noisy Kinect skeleton tracking data experienced across the population of young children. This is achieved by temporally segmenting the data with an approach based on coupling a probabilistic matching of stride template models, learned offline, with the estimation of their global and local temporal scaling. A preliminary study conducted on healthy children between 2 and 4 years of age is performed to analyze the accuracy, precision, repeatability, and concurrent validity of the proposed method against the GAITRite when measuring several spatial and temporal children's gait parameters. The method has excellent accuracy and good precision in segmenting temporal sequences of body joint locations into stride and step cycles. Also, the spatial and temporal gait parameters, estimated automatically, exhibit good concurrent validity with those provided by the GAITRite, as well as very good repeatability. In particular, on a range of nine gait parameters, the relative and absolute agreements were found to be good and excellent, and the overall agreements were found to be good and moderate. This work enables and validates the automated use of the Kinect for children's gait analysis in healthy subjects. In particular, the approach makes a step forward towards developing a low-cost, portable, parent-operated in-home tool for clinicians assisting young children.
Selecting good regions to deblur via relative total variation
NASA Astrophysics Data System (ADS)
Li, Lerenhan; Yan, Hao; Fan, Zhihua; Zheng, Hanqing; Gao, Changxin; Sang, Nong
2018-03-01
Image deblurring is to estimate the blur kernel and to restore the latent image. It is usually divided into two stages: kernel estimation and image restoration. In kernel estimation, selecting a good region that contains structure information is helpful to the accuracy of the estimated kernel. A good region to deblur is usually expert-chosen or found in a trial-and-error way. In this paper, we apply a metric named relative total variation (RTV) to discriminate structure regions from smooth and textured ones. Given a blurry image, we first calculate the RTV of each pixel to determine whether it is a pixel in a structure region, after which we sample the image in an overlapping way. At last, the sampled region that contains the most structure pixels is the best region to deblur. Both qualitative and quantitative experiments show that our proposed method can help to estimate the kernel accurately.
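A minimal sketch of this region-selection idea is given below. The windowed RTV measure, the structure threshold, and the patch/stride sizes are assumptions for illustration and are not the paper's exact definitions or parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def relative_total_variation(img, win=9, eps=1e-3):
    """Per-pixel RTV: windowed total variation over windowed inherent variation.
    Texture gives large values (gradients cancel inside the window); coherent
    structure gives values near 1; eps keeps smooth regions finite."""
    gy, gx = np.gradient(img.astype(float))
    D = uniform_filter(np.abs(gx), win) + uniform_filter(np.abs(gy), win)   # total variation
    L = np.abs(uniform_filter(gx, win)) + np.abs(uniform_filter(gy, win))   # inherent variation
    return D / (L + eps), D

def best_region_to_deblur(img, patch=128, stride=64, rtv_thresh=3.0, grad_thresh=0.01):
    """Pick the top-left corner of the overlapping patch with the most 'structure'
    pixels (non-negligible gradient, low RTV). Thresholds are illustrative."""
    rtv, D = relative_total_variation(img)
    structure = (D > grad_thresh) & (rtv < rtv_thresh)
    best_corner, best_count = (0, 0), -1
    for y in range(0, img.shape[0] - patch + 1, stride):
        for x in range(0, img.shape[1] - patch + 1, stride):
            count = int(structure[y:y + patch, x:x + patch].sum())
            if count > best_count:
                best_corner, best_count = (y, x), count
    return best_corner
```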
Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics
Girshick, Ahna R.; Landy, Michael S.; Simoncelli, Eero P.
2011-01-01
Humans are remarkably good at performing visual tasks, but experimental measurements reveal substantial biases in the perception of basic visual attributes. An appealing hypothesis is that these biases arise through a process of statistical inference, in which information from noisy measurements is fused with a probabilistic model of the environment. But such inference is optimal only if the observer’s internal model matches the environment. Here, we provide evidence that this is the case. We measured performance in an orientation-estimation task, demonstrating the well-known fact that orientation judgements are more accurate at cardinal (horizontal and vertical) orientations, along with a new observation that judgements made under conditions of uncertainty are strongly biased toward cardinal orientations. We estimate observers’ internal models for orientation and find that they match the local orientation distribution measured in photographs. We also show how a neural population could embed probabilistic information responsible for such biases. PMID:21642976
NASA Astrophysics Data System (ADS)
Wang, Shifeng; So, Emily; Smith, Pete
2015-04-01
Estimating the number of refugees and internally displaced persons is important for planning and managing an efficient relief operation following disasters and conflicts. Accurate estimates of refugee numbers can be inferred from the number of tents. Extracting tents from high-resolution satellite imagery has recently been suggested. However, it is still a significant challenge to extract tents automatically and reliably from remote sensing imagery. This paper describes a novel automated method, which is based on mathematical morphology, to generate a camp map to estimate the refugee numbers by counting tents on the camp map. The method is especially useful in detecting objects with a clear shape, size, and significant spectral contrast with their surroundings. Results for two study sites with different satellite sensors and different spatial resolutions demonstrate that the method achieves good performance in detecting tents. The overall accuracy can be up to 81% in this study. Further improvements should be possible if over-identified isolated single pixel objects can be filtered. The performance of the method is impacted by spectral characteristics of satellite sensors and image scenes, such as the extent of area of interest and the spatial arrangement of tents. It is expected that the image scene would have a much higher influence on the performance of the method than the sensor characteristics.
A Tracker for Broken and Closely-Spaced Lines
1997-10-01
to combine the current level flow estimate and the previous level flow estimate. However, the result is still not good enough for some reasons. First...geometric attributes are not good enough to discriminate line segments, when they are crowded, parallel and closely-spaced to each other. On the other...level information [10]. Still, it is not good at dealing with closely-spaced line segments. Because it requires a proper size of square neighborhood to
NASA Astrophysics Data System (ADS)
Miyakita, Takeshi; Hatakenaka, Ryuta; Sugita, Hiroyuki; Saitoh, Masanori; Hirai, Tomoyuki
2014-11-01
For conventional Multi-Layer Insulation (MLI) blankets, it is difficult to control the layer density and the thermal insulation performance degrades due to the increase in conductive heat leak through interlayer contacts. At low temperatures, the proportion of conductive heat transfer through MLI blankets is large compared to that of radiative heat transfer, hence the decline in thermal insulation performance is significant. A new type of MLI blanket using new spacers, the Non-Interlayer-Contact Spacer MLI (NICS MLI), has been developed. This new MLI blanket uses small discrete spacers and can exclude uncertain interlayer contact between films. It is made of polyetheretherketone (PEEK), making it suitable for space use. The cross-sectional area to length ratio of the spacer is 1.0 × 10⁻⁵ m with a 10 mm diameter and 4 mm height. The insulation performance is measured with a boil-off calorimeter. Because the NICS MLI blanket can exclude uncertain interlayer contact, the test results showed good agreement with estimations. Furthermore, the NICS MLI blanket shows significantly good insulation performance (effective emissivity is 0.0046 at ordinary temperature), particularly at low temperatures, due to the high thermal resistance of this spacer.
Performance of Several Density Functional Theory Methods on Describing Hydrogen-Bond Interactions.
Rao, Li; Ke, Hongwei; Fu, Gang; Xu, Xin; Yan, Yijing
2009-01-13
We have investigated eleven density functionals, including LDA, PBE, mPWPW91, TPSS, B3LYP, X3LYP, PBE0, O3LYP, B97-1, MPW1K, and TPSSh, for their performance in describing hydrogen bond (HB) interactions. The emphasis has been laid not only on their abilities to calculate the intermolecular hydrogen bonding energies but also on their performance in predicting the relative energies of intermolecular H-bonded complexes and the conformer stabilities due to intramolecular hydrogen bonding. As compared to the best theoretical values, we found that although PBE and PBE0 gave the best estimation of HB strengths, they might fail to predict the correct order of relative HB energies, which might lead to a wrong prediction of the global minimum for different conformers. TPSS and TPSSh did not always improve over PBE and PBE0. B3LYP was found to underestimate the intermolecular HB strengths but was among the best performers in calculating the relative HB energies. We showed here that X3LYP and B97-1 were able to give good values for both absolute HB strengths and relative HB energies, making these functionals good candidates for HB description.
Jung, Kwang-Wook; Yoon, Choon-G; Jang, Jae-Ho; Kong, Dong-Soo
2008-01-01
Effective watershed management often demands qualitative and quantitative predictions of the effect of future management activities as arguments for policy makers and administration. The BASINS geographic information system was developed to compute total maximum daily loads, which is helpful for establishing a hydrological process and water quality modeling system. In this paper, the BASINS toolkit HSPF model is applied to the 20,271 km² watershed of the Han River Basin to assess the applicability of HSPF and of BMP scenarios. For proper evaluation of watershed and stream water quality, comprehensive estimation methods are necessary to assess large amounts of point-source and nonpoint-source (NPS) pollution based on the total watershed area. In this study, the Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate watershed pollutant loads, accounting for dam operation, and BMP scenarios were applied to control NPS pollution. The 8-day monitoring data (about three years) were used in the calibration and verification processes. Model performance was in the range of "very good" and "good" based on percent difference. The water-quality simulation results were encouraging for this large watershed with dam operation practice and mixed land uses; HSPF proved adequate, and its application is recommended to simulate watershed processes and BMP evaluation. IWA Publishing 2008.
Mixed effects modelling for glass category estimation from glass refractive indices.
Lucy, David; Zadora, Grzegorz
2011-10-10
520 glass fragments were taken from 105 glass items. Each item was either a container, a window, or glass from an automobile. Each of these three classes of use is defined as a glass category. Refractive indices were measured both before and after a programme of re-annealing. Because the refractive index of each fragment could not in itself be observed before and after re-annealing, a model-based approach was used to estimate the change in refractive index for each glass category. It was found that less complex estimation methods would be equivalent to the full model, and were subsequently used. The change in refractive index was then used to calculate a measure of the evidential value for each item belonging to each glass category. The distributions of refractive index change were considered for each glass category, and it was found that, possibly due to small samples, members of the normal family would not adequately model the refractive index changes within two of the use types considered here. Two alternative approaches to modelling the change in refractive index were used: one employed the more established kernel density estimates, the other a newer approach called log-concave estimation. Either method, when applied to the change in refractive index, was found to give good estimates of glass category; however, on all performance metrics kernel density estimates were found to be slightly better than log-concave estimates, although the estimates from log-concave estimation possessed properties which had some qualitative appeal not encapsulated in the selected measures of performance. These results and the implications of these two methods of estimating probability densities for glass refractive indices are discussed. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
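As a sketch of the kernel-density route described above, the snippet below fits a KDE to hypothetical refractive-index-change data per glass category and evaluates the density of a new observation under each. The data, bandwidth choice, and category values are invented for illustration and do not reflect the study's casework measurements.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical training data: change in refractive index (dRI) per glass category.
rng = np.random.default_rng(0)
training = {
    "container":  rng.normal(3e-4, 5e-5, 40),
    "window":     rng.normal(1e-4, 4e-5, 40),
    "automobile": rng.normal(2e-4, 6e-5, 25),
}

kdes = {cat: gaussian_kde(vals) for cat, vals in training.items()}

def category_likelihoods(dri_observed):
    """Evaluate the KDE density of an observed dRI under each category model."""
    return {cat: float(kde(dri_observed)) for cat, kde in kdes.items()}

print(category_likelihoods(2.5e-4))
```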
Water surface modeling from a single viewpoint video.
Li, Chuan; Pickup, David; Saunders, Thomas; Cosker, Darren; Marshall, David; Hall, Peter; Willis, Philip
2013-07-01
We introduce a video-based approach for producing water surface models. Recent advances in this field output high-quality results but require dedicated capturing devices and only work in limited conditions. In contrast, our method achieves a good tradeoff between visual quality and production cost: it automatically produces a visually plausible animation using a single viewpoint video as the input. Our approach is based on two discoveries: first, shape from shading (SFS) is adequate to capture the appearance and dynamic behavior of the example water; second, a shallow water model can be used to estimate a velocity field that produces complex surface dynamics. We will provide a qualitative evaluation of our method and demonstrate its good performance across a wide range of scenes.
Recognizing Age-Separated Face Images: Humans and Machines
Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel
2014-01-01
Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components - facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario. PMID:25474200
Model fit evaluation in multilevel structural equation models
Ryu, Ehri
2014-01-01
Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model for which the effective sample size is much smaller. Also when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches are used to assess the model fit in multilevel structural equation model. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882
Tarantilis, Filippos; Athanasakis, Kostas; Zavras, Dimitris; Vozikis, Athanassios; Kyriopoulos, Ioannis
2015-01-01
Objective: During the past decades, smoking prevalence in Greece was estimated to be near or over 40%. Following a sharp fall in cigarette consumption, as shown in current data, our objective is to assess smokers' sensitivity to cigarette price and consumer income changes, as well as to project the health benefits of an additional tax increase. Methods: Cigarette consumption was considered as the dependent variable, with Weighted Average Price as a proxy for cigarette price, gross domestic product as a proxy for consumers' income, and dummy variables reflecting smoking restrictions and antismoking campaigns. Values were converted to natural logarithms and regression was performed. Then, four scenarios of tax increase were distinguished in order to calculate potential health benefits. Results: Short-run price elasticity is estimated at −0.441 and short-run income elasticity is estimated at 1.040. Antismoking campaigns were found to have a statistically significant impact on consumption. Results indicate that, depending on the level of tax increase, annual per capita consumption could fall by at least 209.83 cigarettes; tax revenue could rise by more than €0.74 billion, while smokers could be reduced by up to 530 568 and at least 465 smoking-related deaths could be averted. Conclusions: Price elasticity estimates are similar to previous studies in Greece, while income elasticity estimates are far greater. With cigarettes regarded as a luxury good, a great opportunity is presented for decision-makers to counter smoking. Increased taxation, along with focused antismoking campaigns, law reinforcement (to ensure compliance with smoking bans) and intensive control of smuggling, could deal a massive blow to the tobacco epidemic in Greece. PMID:25564137
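A minimal sketch of this kind of log-log elasticity regression is shown below; the series, the campaign dummy, and all numbers are invented placeholders, so the fitted coefficients will not match the study's estimates. In the log-log specification the coefficients on log price and log income are read directly as the price and income elasticities.

```python
import numpy as np

# Illustrative annual series (not the study's data): consumption, price, income.
log_Q = np.log(np.array([2150, 2100, 2030, 1900, 1750, 1600]))   # cigarettes per capita
log_P = np.log(np.array([3.1, 3.3, 3.6, 4.0, 4.3, 4.6]))         # weighted average price
log_Y = np.log(np.array([20.5, 21.0, 21.2, 20.1, 19.0, 18.2]))   # income proxy (e.g. GDP per capita)
campaign = np.array([0, 0, 1, 1, 1, 1])                           # antismoking campaign dummy

# Ordinary least squares on log Q = b0 + b1*log P + b2*log Y + b3*campaign.
X = np.column_stack([np.ones_like(log_Q), log_P, log_Y, campaign])
beta, *_ = np.linalg.lstsq(X, log_Q, rcond=None)
print(f"price elasticity ≈ {beta[1]:.3f}, income elasticity ≈ {beta[2]:.3f}")
```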
NASA Astrophysics Data System (ADS)
Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
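A rough sketch of the automatic-threshold idea follows: scan candidate thresholds, fit a generalized Pareto distribution (GPD) to the exceedances, and accept the lowest threshold whose Anderson-Darling statistic is not rejected under a parametric bootstrap. The significance level, minimum exceedance count, and bootstrap size below are assumptions, and the procedure is a simplification of the paper's methodology.

```python
import numpy as np
from scipy import stats

def ad_statistic(exceedances, c, scale):
    """Anderson-Darling statistic of exceedances against a fitted GPD(c, scale)."""
    x = np.sort(exceedances)
    n = len(x)
    F = np.clip(stats.genpareto.cdf(x, c, loc=0, scale=scale), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.sum((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1]))) / n

def auto_threshold(data, candidates, alpha=0.05, n_boot=200, rng=None):
    """Lowest candidate threshold whose GPD fit is not rejected by a bootstrap AD test.
    data: 1-D numpy array of observations (e.g. daily precipitation or flow)."""
    rng = rng or np.random.default_rng(0)
    for u in np.sort(candidates):
        exc = data[data > u] - u
        if len(exc) < 30:                       # too few exceedances to fit reliably
            continue
        c, _, scale = stats.genpareto.fit(exc, floc=0)
        a2_obs = ad_statistic(exc, c, scale)
        a2_boot = []
        for _ in range(n_boot):                 # parametric bootstrap of the AD statistic
            sim = stats.genpareto.rvs(c, loc=0, scale=scale, size=len(exc), random_state=rng)
            cb, _, sb = stats.genpareto.fit(sim, floc=0)
            a2_boot.append(ad_statistic(sim, cb, sb))
        p_value = np.mean(np.array(a2_boot) >= a2_obs)
        if p_value > alpha:
            return u
    return None
```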
High Pressure Electrolyzer System Evaluation
NASA Technical Reports Server (NTRS)
Prokopius, Kevin; Coloza, Anthony
2010-01-01
This report documents the continuing efforts to evaluate the operational state of a high pressure PEM-based electrolyzer located at the NASA Glenn Research Center. This electrolyzer is a prototype system built by General Electric and refurbished by Hamilton Standard (now named Hamilton Sunstrand). It is capable of producing hydrogen and oxygen at an output pressure of 3000 psi. The electrolyzer has been in storage for a number of years. Evaluation and testing were performed to determine the state of the electrolyzer and provide an estimate of the cost for refurbishment. Pressure testing was performed using nitrogen gas through the oxygen ports to ascertain the status of the internal membranes and seals. It was determined that the integrity of the electrolyzer stack was good, as there were no appreciable leaks in the membranes or seals within the stack. In addition to the integrity testing, an itemized list and part cost estimate were produced for the components of the electrolyzer system. An evaluation of the system's present state and an estimate of the cost to bring it back to operational status were also produced.
An inexpensive light-scattering particle monitor: field validation
Edwards, Rufus D.; Johnson, Michael; Shields, Kyra Naumoff; Allen, Tracy; Canuz, Eduardo; Smith, Kirk R.
2014-01-01
We have developed a small, light, passive, inexpensive, datalogging particle monitor called the “UCB” (University of California Berkeley particle monitor). Following previously published laboratory assessments, we present here results of tests of its performance in field settings at high particle concentrations. We demonstrate the mass sensitivity of the UCB in relation to gravimetric filter-based PM2.5 mass estimates as well as commercial light-scattering instruments co-located in field chamber tests and in kitchens of wood-burning households. The coefficient of variation of the unadjusted UCB mass response in relation to gravimetric estimates was 15%. Although requiring adjustment for differences in sensitivity, inter-monitor performance was consistently high (r2 > 0.99). Moreover, the UCB can consistently estimate PM2.5 mass concentrations in wood-burning kitchens (Pearson r2 = 0.89; N = 99), with good agreement between duplicate measures (Pearson r2 = 0.94; N = 88). In addition, with appropriate cleaning of the sensing chamber, UCB mass sensitivity does not decrease with time when used intensively in open woodfire kitchens, demonstrating the significant potential of this monitor. PMID:17909644
The shape of ion tracks in natural apatite
NASA Astrophysics Data System (ADS)
Schauries, D.; Afra, B.; Bierschenk, T.; Lang, M.; Rodriguez, M. D.; Trautmann, C.; Li, W.; Ewing, R. C.; Kluth, P.
2014-05-01
Small angle X-ray scattering measurements were performed on natural apatite of different thickness irradiated with 2.2 GeV Au swift heavy ions. The evolution of the track radius along the full ion track length was estimated by considering the electronic energy loss and the velocity of the ions. The shape of the track is nearly cylindrical, slightly widening to a maximum diameter approximately 30 μm before the ions come to rest, followed by a rapid narrowing towards the end within a cigar-like contour. Measurements of average ion track radii in samples of different thicknesses, i.e., containing different sections of the tracks, are in good agreement with the shape estimate.
Preliminary Experiments for the Assessment of V/W-band Links for Space-Earth Communications
NASA Technical Reports Server (NTRS)
Nessel, James A.; Acosta, Roberto J.; Miranda, Felix A.
2013-01-01
Since September 2012, NASA Glenn Research Center has deployed a microwave profiling radiometer at White Sands, NM, to estimate atmospheric propagation effects on communications links in the V and W bands (71-86GHz). Estimates of attenuation statistics in the millimeter wave due to gaseous and cloud components of the atmosphere show good agreement with current ITU-R models, but fail to predict link performance in the presence of moderate to heavy rain rates, due to the inherent limitations of passive radiometry. Herein, we discuss the preliminary results of these measurements and describe a design for a terrestrial link experiment to validate/refine existing rain attenuation models in the V/Wbands.
Evaluating rainfall kinetic energy - intensity relationships with observed disdrometric data
NASA Astrophysics Data System (ADS)
Angulo-Martinez, Marta; Begueria, Santiago; Latorre, Borja
2016-04-01
Rainfall kinetic energy is required for determining erosivity, the ability of rainfall to detach soil particles and initiate erosion. Its determination relies on the use of disdrometers, i.e., devices capable of measuring the drop size distribution and velocity of falling raindrops. In the absence of such devices, rainfall kinetic energy is usually estimated with empirical expressions relating rainfall energy and intensity. We evaluated the performance of 14 rainfall energy equations in estimating one-minute rainfall energy and event total energy, in comparison with observed data from 821 rainfall episodes (more than 100 thousand one-minute observations) recorded by an optical disdrometer. In addition, two sources of bias when using such relationships were evaluated: i) the influence of using theoretical terminal raindrop fall velocities instead of measured values; and ii) the influence of time aggregation (rainfall intensity data every 5, 10, 15, 30, and 60 minutes). Empirical relationships did a relatively good job when complete events were considered (R² > 0.82), but offered poorer results for within-event (one-minute resolution) variation. Also, systematic biases were large for many equations. When the raindrop size distribution was known, estimating the terminal fall velocities by empirical laws produced good results even at fine time resolution. The influence of time aggregation on the estimated kinetic energy was very high, although linear scaling may allow empirical correction. These results stress the importance of considering all these effects when rainfall energy needs to be estimated from more standard precipitation records, and recommend the use of disdrometer data to locally determine rainfall kinetic energy.
Kim, Kyungmok; Lee, Jaewook
2016-01-01
This paper describes a sliding friction model for an electro-deposited coating. Reciprocating sliding tests using ball-on-flat plate test apparatus are performed to determine an evolution of the kinetic friction coefficient. The evolution of the friction coefficient is classified into the initial running-in period, steady-state sliding, and transition to higher friction. The friction coefficient during the initial running-in period and steady-state sliding is expressed as a simple linear function. The friction coefficient in the transition to higher friction is described with a mathematical model derived from Kachanov-type damage law. The model parameters are then estimated using the Markov Chain Monte Carlo (MCMC) approach. It is identified that estimated friction coefficients obtained by the MCMC approach are in good agreement with measured ones. PMID:28773359
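As a generic illustration of the parameter-estimation step, the sketch below runs a random-walk Metropolis sampler on a simple linear friction-evolution model with Gaussian noise. The model, the implicitly flat priors, the synthetic data, and the step sizes are all assumptions; it is not the paper's Kachanov-type damage formulation.

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=5000, step=0.01, rng=None):
    """Random-walk Metropolis sampler; returns the chain of parameter samples."""
    rng = rng or np.random.default_rng(1)
    chain = [np.asarray(theta0, dtype=float)]
    lp = log_post(chain[-1])
    for _ in range(n_iter):
        prop = chain[-1] + rng.normal(0, step, size=len(theta0))
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # accept with Metropolis probability
            chain.append(prop); lp = lp_prop
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)

# Synthetic data: sliding cycles vs measured friction coefficient in the linear regime.
cycles = np.linspace(0, 1000, 50)
mu_obs = 0.12 + 5e-5 * cycles + np.random.default_rng(2).normal(0, 0.005, 50)

def log_post(theta):
    """Gaussian log-likelihood of a linear friction model mu = mu0 + slope * cycles."""
    mu0, slope, sigma = theta
    if sigma <= 0:
        return -np.inf
    resid = mu_obs - (mu0 + slope * cycles)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2 - len(resid) * np.log(sigma)

chain = metropolis(log_post, theta0=[0.1, 1e-5, 0.01], step=np.array([0.005, 2e-6, 0.001]))
print(chain[-1000:].mean(axis=0))   # posterior-mean estimates of (mu0, slope, sigma)
```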
A Comparative Study of Co-Channel Interference Suppression Techniques
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Satorius, Ed; Paparisto, Gent; Polydoros, Andreas
1997-01-01
We describe three methods of combatting co-channel interference (CCI): a cross-coupled phase-locked loop (CCPLL), a phase-tracking circuit (PTC), and joint Viterbi estimation based on the maximum likelihood principle. In the case of co-channel FM-modulated voice signals, the CCPLL and PTC methods typically outperform the maximum likelihood estimators when the modulation parameters are dissimilar. However, as the modulation parameters become identical, joint Viterbi estimation provides a more robust estimate of the co-channel signals and does not suffer as much from "signal switching", which especially plagues the CCPLL approach. Good performance for the PTC requires both dissimilar modulation parameters and a priori knowledge of the co-channel signal amplitudes. The CCPLL and joint Viterbi estimators, on the other hand, incorporate accurate amplitude estimates. In addition, application of the joint Viterbi algorithm to demodulating co-channel digital (BPSK) signals in a multipath environment is also discussed. It is shown in this case that if the interference is sufficiently small, a single trellis model is most effective in demodulating the co-channel signals.
CFD Predictions for Transonic Performance of the ERA Hybrid Wing-Body Configuration
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Luckring, James M.; McMillin, S. Naomi; Flamm, Jeffrey D.; Roman, Dino
2016-01-01
A computational study was performed for a Hybrid Wing Body configuration that was focused at transonic cruise performance conditions. In the absence of experimental data, two fully independent computational fluid dynamics analyses were conducted to add confidence to the estimated transonic performance predictions. The primary analysis was performed by Boeing with the structured overset-mesh code OVERFLOW. The secondary analysis was performed by NASA Langley Research Center with the unstructured-mesh code USM3D. Both analyses were performed at full-scale flight conditions and included three configurations customary to drag buildup and interference analysis: a powered complete configuration, the configuration with the nacelle/pylon removed, and the powered nacelle in isolation. The results in this paper are focused primarily on transonic performance up to cruise and through drag rise. Comparisons between the CFD results were very good despite some minor geometric differences in the two analyses.
Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce
2009-05-20
The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction, both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function, called QMEANclust, achieves a correlation coefficient between predicted quality score and GDT_TS of 0.9 averaged over the 98 CASP7 targets and performs significantly better in selecting good models from the ensemble of server models than any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set consisting of 20 target proteins, each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations. We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher quality models, which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust highly depends on the composition and quality of the model ensemble to be analysed. Therefore, performance estimates for consensus methods based on large meta-datasets (e.g. CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
ERIC Educational Resources Information Center
Occupational Outlook Quarterly, 2012
2012-01-01
The economy's need for workers originates in the demand for the goods and services that these workers provide. So, to project employment, BLS starts by estimating the components of gross domestic product (GDP) for 2020. GDP is the value of the final goods produced and services provided in the United States. Then, BLS estimates the size--in…
NASA Astrophysics Data System (ADS)
Beck, Hylke E.; Vergopolan, Noemi; Pan, Ming; Levizzani, Vincenzo; van Dijk, Albert I. J. M.; Weedon, Graham P.; Brocca, Luca; Pappenberger, Florian; Huffman, George J.; Wood, Eric F.
2017-12-01
We undertook a comprehensive evaluation of 22 gridded (quasi-)global (sub-)daily precipitation (P) datasets for the period 2000-2016. Thirteen non-gauge-corrected P datasets were evaluated using daily P gauge observations from 76 086 gauges worldwide. Another nine gauge-corrected datasets were evaluated using hydrological modeling, by calibrating the HBV conceptual model against streamflow records for each of 9053 small to medium-sized ( < 50 000 km2) catchments worldwide, and comparing the resulting performance. Marked differences in spatio-temporal patterns and accuracy were found among the datasets. Among the uncorrected P datasets, the satellite- and reanalysis-based MSWEP-ng V1.2 and V2.0 datasets generally showed the best temporal correlations with the gauge observations, followed by the reanalyses (ERA-Interim, JRA-55, and NCEP-CFSR) and the satellite- and reanalysis-based CHIRP V2.0 dataset, the estimates based primarily on passive microwave remote sensing of rainfall (CMORPH V1.0, GSMaP V5/6, and TMPA 3B42RT V7) or near-surface soil moisture (SM2RAIN-ASCAT), and finally, estimates based primarily on thermal infrared imagery (GridSat V1.0, PERSIANN, and PERSIANN-CCS). Two of the three reanalyses (ERA-Interim and JRA-55) unexpectedly obtained lower trend errors than the satellite datasets. Among the corrected P datasets, the ones directly incorporating daily gauge data (CPC Unified, and MSWEP V1.2 and V2.0) generally provided the best calibration scores, although the good performance of the fully gauge-based CPC Unified is unlikely to translate to sparsely or ungauged regions. Next best results were obtained with P estimates directly incorporating temporally coarser gauge data (CHIRPS V2.0, GPCP-1DD V1.2, TMPA 3B42 V7, and WFDEI-CRU), which in turn outperformed the one indirectly incorporating gauge data through another multi-source dataset (PERSIANN-CDR V1R1). Our results highlight large differences in estimation accuracy, and hence the importance of P dataset selection in both research and operational applications. The good performance of MSWEP emphasizes that careful data merging can exploit the complementary strengths of gauge-, satellite-, and reanalysis-based P estimates.
Localization Algorithm with On-line Path Loss Estimation and Node Selection
Bel, Albert; Vicario, José López; Seco-Granados, Gonzalo
2011-01-01
RSS-based localization is considered a low-complexity algorithm with respect to other range techniques such as TOA or AOA. The accuracy of RSS methods depends on the suitability of the propagation models used for the actual propagation conditions. In indoor environments, in particular, it is very difficult to obtain a good propagation model. For that reason, we present a cooperative localization algorithm that dynamically estimates the path loss exponent by using RSS measurements. Since the energy consumption is a key point in sensor networks, we propose a node selection mechanism to limit the number of neighbours of a given node that are used for positioning purposes. Moreover, the selection mechanism is also useful to discard bad links that could negatively affect the performance accuracy. As a result, we derive a practical solution tailored to the strict requirements of sensor networks in terms of complexity, size and cost. We present results based on both computer simulations and real experiments with the Crossbow MICA2 motes showing that the proposed scheme offers a good trade-off in terms of position accuracy and energy efficiency. PMID:22163992
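The core step of dynamically estimating the path loss exponent from RSS measurements can be sketched as a least-squares fit of the log-distance model. The sketch below is a simplified, non-cooperative illustration with an assumed reference power and synthetic distances, not the paper's full algorithm.

```python
import numpy as np

def estimate_path_loss_exponent(rss_dbm, dist_m, p0_dbm, d0=1.0):
    """Least-squares fit of the log-distance model
    RSS(d) = P0 - 10 * n * log10(d / d0) + noise, solved for the exponent n."""
    x = -10.0 * np.log10(np.asarray(dist_m) / d0)
    y = np.asarray(rss_dbm) - p0_dbm
    return np.dot(x, y) / np.dot(x, x)   # closed-form slope through the origin

# synthetic check: true exponent 3.0, reference power -40 dBm at d0 = 1 m
rng = np.random.default_rng(1)
d = rng.uniform(2, 30, 200)
rss = -40 - 10 * 3.0 * np.log10(d) + rng.normal(0, 2, d.size)
print(estimate_path_loss_exponent(rss, d, p0_dbm=-40))   # close to 3.0
```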
Goodness-Of-Fit Test for Nonparametric Regression Models: Smoothing Spline ANOVA Models as Example.
Teran Hidalgo, Sebastian J; Wu, Michael C; Engel, Stephanie M; Kosorok, Michael R
2018-06-01
Nonparametric regression models do not require the specification of the functional form between the outcome and the covariates. Despite their popularity, the number of diagnostic statistics available for them, in comparison with their parametric counterparts, is small. We propose a goodness-of-fit test for nonparametric regression models with linear smoother form. In particular, we apply this testing framework to smoothing spline ANOVA models. The test can consider two sources of lack-of-fit: whether covariates that are not currently in the model need to be included, and whether the current model fits the data well. The proposed method derives estimated residuals from the model. Then, statistical dependence is assessed between the estimated residuals and the covariates using the Hilbert-Schmidt independence criterion (HSIC). If dependence exists, the model does not capture all the variability in the outcome associated with the covariates; otherwise, the model fits the data well. The bootstrap is used to obtain p-values. Application of the method is demonstrated with a neonatal mental development data analysis. We demonstrate correct type I error as well as power performance through simulations.
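A minimal sketch of the residual-dependence check is given below: a biased empirical HSIC with Gaussian kernels and a permutation p-value. The paper uses a bootstrap and applies the test within smoothing spline ANOVA models; the kernel bandwidth and sample sizes here are arbitrary assumptions.

```python
import numpy as np

def _gram(x, sigma):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC between two 1-D samples with Gaussian kernels."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = _gram(x, sigma), _gram(y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def hsic_pvalue(residuals, covariate, n_perm=500, seed=0):
    """Permutation p-value for dependence between residuals and a covariate."""
    rng = np.random.default_rng(seed)
    obs = hsic(residuals, covariate)
    null = [hsic(rng.permutation(residuals), covariate) for _ in range(n_perm)]
    return (np.sum(np.array(null) >= obs) + 1) / (n_perm + 1)

# toy example: residuals still depend quadratically on x -> small p-value
rng = np.random.default_rng(2)
x = rng.normal(size=100)
res = 0.5 * x ** 2 + rng.normal(scale=0.3, size=100)
print(hsic_pvalue(res, x))
```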
Tropospheric OH and Cl levels deduced from non-methane hydrocarbon measurements in a marine site
NASA Astrophysics Data System (ADS)
Arsene, C.; Bougiatioti, A.; Kanakidou, M.; Bonsang, B.; Mihalopoulos, N.
2007-09-01
In situ continuous hourly measurements of C2-C8 non-methane hydrocarbons (NMHCs) have been performed from March to October 2006 at two coastal locations (natural and rural) on the island of Crete, in the Eastern Mediterranean. Well-defined diel variations were observed for several short-lived NMHCs (including ethene, propene, n-butane, n-pentane, n-hexane, 2-methyl-pentane). The daytime concentration of hydroxyl (OH) radicals estimated from these experimental data varied from 1.3×10⁶ to ~4.0×10⁶ radicals cm⁻³, in good agreement with box-model simulations. In addition, the relative variability of various hydrocarbon pairs (at least 7) was used to derive the tropospheric levels of Cl atoms. The Cl atom concentration has been estimated to range between 0.6×10⁴ and 4.7×10⁴ atoms cm⁻³, in good agreement with gaseous hydrochloric acid (HCl) observations in the area. Such levels of Cl atoms can be of considerable importance for the oxidation capacity of the troposphere on a regional scale.
NASA Astrophysics Data System (ADS)
Hardesty, R. Michael; Brewer, W. Alan; Sandberg, Scott P.; Weickmann, Ann M.; Shepson, Paul B.; Cambaliza, Maria; Heimburger, Alexie; Davis, Kenneth J.; Lauvaux, Thomas; Miles, Natasha L.; Sarmiento, Daniel P.; Deng, A. J.; Gaudet, Brian; Karion, Anna; Sweeney, Colm; Whetstone, James
2016-06-01
A compact commercial Doppler lidar has been deployed in Indianapolis for two years to measure wind profiles and mixing layer properties as part of a project to improve greenhouse gas measurements from large area sources. The lidar uses vertical velocity variance and aerosol structure to measure mixing layer depth. Comparisons with aircraft and the NOAA HRDL lidar generally indicate good performance, although sensitivity might be an issue under low aerosol conditions.
Gómez-Ordóñez, Eva; Jiménez-Escrig, Antonio; Rupérez, Pilar
2012-05-15
Biological properties of polysaccharides from seaweeds are related to their composition and structure. Many factors such as the kind of sugar, type of linkage or sulfate content of algal biopolymers exert an influence in the relationship between structure and function. Besides, the molecular weight (MW) also plays an important role. Thus, a simple, reliable and fast HPSEC method with refractive index detection was developed and optimized for the MW estimation of soluble algal polysaccharides. Chromatogram shape and repeatability of retention time was considerably improved when sodium nitrate was used instead of ultrapure water as mobile phase. Pullulan and dextran standards of different MW were used for method calibration and validation. Also, main polysaccharide standards from brown (alginate, fucoidan, laminaran) and red seaweeds (kappa- and iota-carrageenan) were used for quantification and method precision and accuracy. Relative standard deviation (RSD) of repeatability for retention time, peak areas and inter-day precision was below 0.7%, 2.5% and 2.6%, respectively, which indicated good repeatability and precision. Recoveries (96.3-109.8%) also showed its fairly good accuracy. Regarding linearity, main polysaccharide standards from brown or red seaweeds showed a highly satisfactory correlation coefficient (r>0.999). Moreover, a good sensitivity was shown, with corresponding limits of detection and quantitation in mg/mL of 0.05-0.21 and 0.16-0.31, respectively. The method was applied to the MW estimation of standard algal polysaccharides, as well as to the soluble polysaccharide fractions from the brown seaweed Saccharina latissima and the red Mastocarpus stellatus, respectively. Although distribution of molecular weight was broad, the good repeatability for retention time provided a good precision in MW estimation of polysaccharides. Water- and alkali-soluble fractions from S. latissima ranged from very high (>2400 kDa) to low MW compounds (<6 kDa); this high heterogeneity could be attributable to the complex polysaccharide composition of brown algae. Regarding M. stellatus, sulfated galactans followed a descending order of MW (>1400 kDa to <10 kDa), related to the different solubility of carrageenans in red seaweeds. In summary, the method developed allows for the molecular weight analysis of seaweed polysaccharides with very good precision, accuracy, linearity and sensitivity within a short time. Copyright © 2012 Elsevier B.V. All rights reserved.
Near real-time estimation of burned area using VIIRS 375 m active fire product
NASA Astrophysics Data System (ADS)
Oliva, P.; Schroeder, W.
2016-12-01
Every year, more than 300 million hectares of land burn globally, causing significant ecological and economic consequences, and associated climatological effects as a result of fire emissions. In recent decades, burned area estimates generated from satellite data have provided systematic global information for ecological analysis of fire impacts, climate and carbon cycle models, and fire regime studies, among many others. However, there is still a need for near real-time burned area estimation in order to assess the impacts of fire and estimate smoke and emissions. The enhanced characteristics of the Visible Infrared Imaging Radiometer Suite (VIIRS) 375 m channels on board the Suomi National Polar-orbiting Partnership (S-NPP) make possible the use of near real-time active fire detection data for burned area estimation. In this study, consecutive VIIRS 375 m active fire detections were aggregated to produce the VIIRS 375 m burned area (BA) estimation over ten ecologically diverse study areas. The accuracy of the BA estimations was assessed by comparison with Landsat-8 supervised burned area classification. The performance of the VIIRS 375 m BA estimates was dependent on the ecosystem characteristics and fire behavior. Higher accuracy was observed in forested areas characterized by large long-duration fires, while grasslands, savannas and agricultural areas showed the highest omission and commission errors. Complementing those analyses, we performed the burned area estimation of the largest fires in Oregon and Washington states during 2015 and the Fort McMurray fire in Canada in 2016. The results showed good agreement with NIROPs airborne fire perimeters, indicating that the VIIRS 375 m BA estimations can be used for near real-time assessments of fire effects.
Regression estimators for generic health-related quality of life and quality-adjusted life years.
Basu, Anirban; Manca, Andrea
2012-01-01
To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and account for features typical of such data such as a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One and 2-part Beta regression models provide flexible approaches to regress the outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcomes distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
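A minimal sketch of a single-equation beta regression fitted by maximum likelihood is shown below, assuming a logit mean link and a constant precision parameter; the two-part structure, quasi-likelihood, and Bayesian estimators evaluated in the paper are not reproduced, and the HRQoL-like data are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def beta_regression_mle(X, y):
    """Single-equation beta regression: y in (0, 1), mean mu = logit^-1(X beta),
    constant precision phi; beta shape parameters are (mu*phi, (1-mu)*phi)."""
    X = np.column_stack([np.ones(len(y)), X])        # add intercept column

    def negloglik(params):
        beta, log_phi = params[:-1], params[-1]
        mu = expit(X @ beta)
        phi = np.exp(log_phi)
        a, b = mu * phi, (1 - mu) * phi
        ll = (gammaln(a + b) - gammaln(a) - gammaln(b)
              + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))
        return -ll.sum()

    start = np.zeros(X.shape[1] + 1)
    fit = minimize(negloglik, start, method="BFGS")
    return fit.x[:-1], np.exp(fit.x[-1])              # coefficients, precision

# toy usage with simulated outcomes strictly inside (0, 1)
rng = np.random.default_rng(3)
x = rng.normal(size=300)
mu = expit(0.5 + 0.8 * x)
y = rng.beta(mu * 20, (1 - mu) * 20)
print(beta_regression_mle(x, y))
```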
Assessment of different gridded weather data for soybean yield simulations in Brazil
NASA Astrophysics Data System (ADS)
Battisti, R.; Bender, F. D.; Sentelhas, P. C.
2018-01-01
A high-density, well-distributed, and consistent historical weather data series is of major importance for agricultural planning and climatic risk evaluation. A possible option for regions where the weather station network is irregular is the use of gridded weather data (GWD), which can be downloaded online from different sources. Based on that, the aim of this study was to assess the suitability of two GWD, AgMERRA and XAVIER, by comparing them with measured weather data (MWD) for estimating soybean yield in Brazil. The GWD and MWD were obtained for 24 locations across Brazil, considering the period between 1980 and 2010. These data were used to estimate soybean yield with the DSSAT-CROPGRO-Soybean model. The comparison of MWD with GWD resulted in a good agreement between climate variables, except for solar radiation. The crop simulations with GWD and MWD resulted in a good agreement for vegetative and reproductive phases. Soybean potential yield (Yp) simulated with AgMERRA and XAVIER had a high correlation (r > 0.88) when compared to the estimates with MWD, with an RMSE of about 400 kg ha-1. For attainable yield (Ya), estimates with XAVIER resulted in an RMSE of 700 kg ha-1 against 864 kg ha-1 from AgMERRA, both compared to the simulations using MWD. Even with these differences in Ya simulations, both GWD can be considered suitable for simulating soybean growth, development, and yield in Brazil, although the XAVIER GWD presented better performance for the weather and crop variables assessed.
2013-01-01
Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
Tree-based flood damage modeling of companies: Damage processes and model performance
NASA Astrophysics Data System (ADS)
Sieg, Tobias; Vogel, Kristin; Merz, Bruno; Kreibich, Heidi
2017-07-01
Reliable flood risk analyses, including the estimation of damage, are an important prerequisite for efficient risk management. However, not much is known about flood damage processes affecting companies. Thus, we conduct a flood damage assessment of companies in Germany with regard to two aspects. First, we identify relevant damage-influencing variables. Second, we assess the prediction performance of the developed damage models with respect to the gain by using an increasing amount of training data and a sector-specific evaluation of the data. Random forests are trained with data from two postevent surveys after flood events occurring in the years 2002 and 2013. For a sector-specific consideration, the data set is split into four subsets corresponding to the manufacturing, commercial, financial, and service sectors. Further, separate models are derived for three different company assets: buildings, equipment, and goods and stock. Calculated variable importance values reveal different variable sets relevant for the damage estimation, indicating significant differences in the damage process for various company sectors and assets. With an increasing number of data used to build the models, prediction errors decrease. Yet the effect is rather small and seems to saturate for a data set size of several hundred observations. In contrast, the prediction improvement achieved by a sector-specific consideration is more distinct, especially for damage to equipment and goods and stock. Consequently, sector-specific data acquisition and a consideration of sector-specific company characteristics in future flood damage assessments is expected to improve the model performance more than a mere increase in data.
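A minimal sketch of the general approach, training a random forest on survey-style predictors and reading off impurity-based variable importances, is given below; the variable names and simulated relationships are hypothetical and do not come from the study's survey data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# hypothetical survey data: water depth (m), inundation duration (h),
# precautionary-measures score, sector code -> relative damage to equipment
rng = np.random.default_rng(4)
n = 400
X = np.column_stack([
    rng.uniform(0, 3, n),        # water depth
    rng.uniform(1, 72, n),       # inundation duration
    rng.integers(0, 5, n),       # precautionary measures score
    rng.integers(0, 4, n),       # company sector (categorical code)
])
damage = np.clip(0.25 * X[:, 0] + 0.002 * X[:, 1]
                 - 0.03 * X[:, 2] + rng.normal(0, 0.05, n), 0, 1)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, damage)

for name, imp in zip(["depth", "duration", "precaution", "sector"],
                     model.feature_importances_):
    print(f"{name:11s} importance: {imp:.3f}")
```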
de Castro-Filho, Antonio; Lamas, Edgar Stroppa; Meneguz-Moreno, Rafael A; Staico, Rodolfo; Siqueira, Dimytri; Costa, Ricardo A; Braga, Sergio N; Costa, J Ribamar; Chamié, Daniel; Abizaid, Alexandre
2017-06-01
The present study examined the association between Multicenter CTO Registry in Japan (J-CTO) score in predicting failure of percutaneous coronary intervention (PCI) correlating with the estimated duration of chronic total occlusion (CTO). The J-CTO score does not incorporate estimated duration of the occlusion. This was an observational retrospective study that involved all consecutive procedures performed at a single tertiary-care cardiology center between January 2009 and December 2014. A total of 174 patients, median age 59.5 years (interquartile range [IQR], 53-65 years), undergoing CTO-PCI were included. The median estimated occlusion duration was 7.5 months (IQR, 4.0-12.0 months). The lesions were classified as easy (score = 0), intermediate (score = 1), difficult (score = 2), and very difficult (score ≥3) in 51.1%, 33.9%, 9.2%, and 5.7% of the patients, respectively. Failure rate significantly increased with higher J-CTO score (7.9%, 20.3%, 50.0%, and 70.0% in groups with J-CTO scores of 0, 1, 2, and ≥3, respectively; P<.001). There was no significant difference in success rate according to estimated duration of occlusion (P=.63). Indeed, J-CTO score predicted failure of CTO-PCI independently of the estimated occlusion duration (P=.24). Areas under receiver-operating characteristic curves were computed and it was observed that for each occlusion time period, the discriminatory capacity of the J-CTO score in predicting CTO-PCI failure was good, with a C-statistic >0.70. The estimated duration of occlusion had no influence on the J-CTO score performance in predicting failure of PCI in CTO lesions. The probability of failure was mainly determined by grade of lesion complexity.
Improving the quality of parameter estimates obtained from slug tests
Butler, J.J.; McElwee, C.D.; Liu, W.
1996-01-01
The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high-quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.
A new hydrological model for estimating extreme floods in the Alps
NASA Astrophysics Data System (ADS)
Receanu, R. G.; Hertig, J.-A.; Fallot, J.-M.
2012-04-01
Protection against flooding is very important for a country like Switzerland with a varied topography and many rivers and lakes. Because of the potential danger caused by extreme precipitation, structural and functional safety of large dams must be guaranteed to withstand the passage of an extreme flood. We introduce a new distributed hydrological model to calculate the probable maximum flood (PMF) from a probable maximum precipitation (PMP) which is spatially and temporally distributed using clouds. This model permits the estimation of extreme floods based on the distributed PMP while taking into account the specifics of alpine catchments, in particular the small size of the basins, the complex topography, the large lakes, snowmelt and glaciers. This is an important evolution compared to other models described in the literature, as they mainly use a uniform distribution of extreme precipitation all over the watershed. This paper presents the results of calculation with the developed rainfall-runoff model, taking into account measured rainfall and comparing results to observed flood events. This model includes three parts: surface runoff, underground flow and melting snow. Two Swiss watersheds are studied, for which rainfall data and flow rates are available for a considerably long period, including several episodes of heavy rainfall with high flow events. From these events, several simulations are performed to estimate the input model parameters, such as soil roughness and average river width, for the surface runoff component. Following the same procedure, the parameters used in the underground flow simulation are also estimated indirectly, since direct underground flow and exfiltration measurements are difficult to obtain. A sensitivity analysis of the parameters is performed as a first step to define more precisely the boundary and initial conditions. The results for the two alpine basins, validated with the Nash equation, show a good correlation between the simulated and observed flows. This good correlation shows that the model is valid and gives us confidence that the results can be extrapolated to phenomena of extreme rainfall of PMP type.
ICU scoring systems allow prediction of patient outcomes and comparison of ICU performance.
Becker, R B; Zimmerman, J E
1996-07-01
Too much time and effort are wasted in attempts to pass final judgment on whether systems for ICU prognostication are "good or bad" and whether they "do or do not" provide a simple answer to the complex and often unpredictable question of individual mortality in the ICU. A substantial amount of data supports the usefulness of general ICU prognostic systems in comparing ICU performance with respect to a wide variety of endpoints, including ICU and hospital mortality, duration of stay, and efficiency of resource use. Work in progress is analyzing both general resource use and specific therapeutic interventions. It also is time to fully acknowledge that statistics can never predict with 100% accuracy whether a patient will die. There always will be exceptions to the rule, and physicians frequently will have information that is not included in prognostic models. In addition, the values of both physicians and patients frequently lead to differences in how a probability is interpreted; for some, a 95% probability estimate means that death is near and, for others, this estimate represents a tangible 5% chance for survival. This means that physicians must learn how to integrate such estimates into their medical decisions. In doing so, it is our hope that prognostic systems are not viewed as oversimplifying or automating clinical decisions. Rather, such systems provide objective data on which physicians may ground a spectrum of decisions regarding either escalation or withdrawal of therapy in critically ill patients. These systems do not dehumanize our decision-making process but, rather, help eliminate physician reliance on emotional, heuristic, poorly calibrated, or overly pessimistic subjective estimates. No decision regarding patient care can be considered best if the facts upon which it is based are imprecise or biased. Future research will improve the accuracy of individual patient predictions but, even with the highest degree of precision, such predictions are useful only in support of, and not as a substitute for, good clinical judgment.
Characterisation of an anthropomorphic chest phantom for dose measurements in radiology beams
NASA Astrophysics Data System (ADS)
Henriques, L. M. S.; Cerqueira, R. A. D.; Santos, W. S.; Pereira, A. J. S.; Rodrigues, T. M. A.; Carvalho Júnior, A. B.; Maia, A. F.
2014-02-01
The objective of this study was to characterise an anthropomorphic chest phantom for dosimetric measurements of conventional radiology beams. This phantom was developed by a previous research project at the Federal University of Sergipe for image quality control tests. As the phantom consists of tissue-equivalent material, it is possible to characterise it for dosimetric studies. For comparison, a geometric chest phantom, consisting of PMMA (polymethylmethacrylate) with dimensions of 30×30×15 cm³ was used. Measurements of incident air kerma (Ki) and entrance surface dose (ESD) were performed using ionisation chambers. From the results, backscatter factors (BSFs) of the two phantoms were determined and compared with values estimated by CALDose_X software, based on a Monte Carlo simulation. For the technical parameters evaluated in this study, the ESD and BSF values obtained experimentally showed a good similarity between the two phantoms, with minimum and maximum difference of 0.2% and 7.0%, respectively, and showed good agreement with the results published in the literature. Organ doses and effective doses for the anthropomorphic phantom were also estimated by the determination of conversion coefficients (CCs) using the visual Monte Carlo (VMC) code. Therefore, the results of this study prove that the anthropomorphic thorax phantom proposed is a good tool to use in dosimetry and can be used for risk evaluation of X-ray diagnostic procedures.
Good Practices in Free-energy Calculations
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher
2013-01-01
As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices are followed. For the most part, the theory upon which these good practices rely has been known for many years, but often overlooked, or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations will be reviewed demonstrating that, at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. In energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, could markedly improve efficiency and accuracy of free energy calculations without incurring any additional computational expense.
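As a small illustration of the bidirectional-monitoring precept, the sketch below evaluates the exponential-averaging (Zwanzig) free energy perturbation estimator in the forward and reverse directions on synthetic energy differences; a large gap between the two estimates would signal poor phase-space overlap. Units and samples are arbitrary assumptions, not results from any simulation package.

```python
import numpy as np

def fep_delta_f(delta_u, kT=1.0):
    """Zwanzig / exponential-averaging estimator:
    dF = -kT * ln< exp(-dU / kT) >, evaluated with a stable log-sum-exp."""
    x = -np.asarray(delta_u) / kT
    m = x.max()
    return -kT * (m + np.log(np.mean(np.exp(x - m))))

# synthetic Gaussian energy differences, chosen so that the forward (0 -> 1)
# and reverse (1 -> 0) directions are thermodynamically consistent (dF = 1.5)
rng = np.random.default_rng(5)
dU_fwd = rng.normal(2.0, 1.0, 5000)     # dU = U1 - U0 sampled in state 0
dU_rev = rng.normal(-1.0, 1.0, 5000)    # dU = U0 - U1 sampled in state 1

dF_fwd = fep_delta_f(dU_fwd)
dF_rev = -fep_delta_f(dU_rev)           # reverse estimate of the same dF
print(f"forward {dF_fwd:.3f}  reverse {dF_rev:.3f}  gap {dF_fwd - dF_rev:.3f}")
# a large forward/reverse gap would indicate poor overlap between the states
```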
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey William; Devaud, Cecile
2017-05-01
A Reynolds-Averaged Navier-Stokes (RANS) simulation of the semi-industrial International Flame Research Foundation (IFRF) furnace is performed using a non-adiabatic Conditional Source-term Estimation (CSE) formulation. This represents the first time that a CSE formulation, which accounts for the effect of radiation on the conditional reaction rates, has been applied to a large scale semi-industrial furnace. The objective of the current study is to assess the capabilities of CSE to accurately reproduce the velocity field, temperature, species concentration and nitrogen oxides (NOx) emission for the IFRF furnace. The flow field is solved using the standard k-ε turbulence model and detailed chemistry is included. NOx emissions are calculated using two different methods. Predicted velocity profiles are in good agreement with the experimental data. The predicted peak temperature occurs closer to the centreline, as compared to the experimental observations, suggesting that the mixing between the fuel jet and vitiated air jet may be overestimated. Good agreement between the species concentrations, including NOx, and the experimental data is observed near the burner exit. Farther downstream, the centreline oxygen concentration is found to be underpredicted. Predicted NOx concentrations are in good agreement with experimental data when calculated using the method of Peters and Weber. The current study indicates that RANS-CSE can accurately predict the main characteristics seen in a semi-industrial IFRF furnace.
The New York Sepsis Severity Score: Development of a Risk-Adjusted Severity Model for Sepsis.
Phillips, Gary S; Osborn, Tiffany M; Terry, Kathleen M; Gesten, Foster; Levy, Mitchell M; Lemeshow, Stanley
2018-05-01
In accordance with Rory's Regulations, hospitals across New York State developed and implemented protocols for sepsis recognition and treatment to reduce variations in evidence informed care and preventable mortality. The New York Department of Health sought to develop a risk assessment model for accurate and standardized hospital mortality comparisons of adult septic patients across institutions using case-mix adjustment. Retrospective evaluation of prospectively collected data. Data from 43,204 severe sepsis and septic shock patients from 179 hospitals across New York State were evaluated. Prospective data were submitted to a database from January 1, 2015, to December 31, 2015. None. Maximum likelihood logistic regression was used to estimate model coefficients used in the New York State risk model. The mortality probability was estimated using a logistic regression model. Variables to be included in the model were determined as part of the model-building process. Interactions between variables were included if they made clinical sense and if their p values were less than 0.05. Model development used a random sample of 90% of available patients and was validated using the remaining 10%. Hosmer-Lemeshow goodness of fit p values were considerably greater than 0.05, suggesting good calibration. Areas under the receiver operator curve in the developmental and validation subsets were 0.770 (95% CI, 0.765-0.775) and 0.773 (95% CI, 0.758-0.787), respectively, indicating good discrimination. Development and validation datasets had similar distributions of estimated mortality probabilities. Mortality increased with rising age, comorbidities, and lactate. The New York Sepsis Severity Score accurately estimated the probability of hospital mortality in severe sepsis and septic shock patients. It performed well with respect to calibration and discrimination. This sepsis-specific model provides an accurate, comprehensive method for standardized mortality comparison of adult patients with severe sepsis and septic shock.
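A minimal sketch of the discrimination and calibration checks mentioned above, applied to a simulated logistic model rather than the New York State data, is shown below: area under the ROC curve plus a decile-based Hosmer-Lemeshow-type statistic. All covariates and coefficients are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 5000
X = np.column_stack([rng.normal(65, 15, n),        # age (years)
                     rng.lognormal(0.5, 0.6, n)])  # lactate (mmol/L)
logit = -6 + 0.05 * X[:, 0] + 0.6 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# 90% development / 10% validation split, as in the paper's general scheme
train, test = np.arange(n) < int(0.9 * n), np.arange(n) >= int(0.9 * n)
model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
p = model.predict_proba(X[test])[:, 1]

print("AUC:", round(roc_auc_score(y[test], p), 3))   # discrimination

# Hosmer-Lemeshow-type statistic over deciles of predicted risk (calibration)
deciles = np.quantile(p, np.linspace(0, 1, 11))
groups = np.digitize(p, deciles[1:-1])
hl = 0.0
for g in range(10):
    idx = groups == g
    obs, exp = y[test][idx].sum(), p[idx].sum()
    hl += (obs - exp) ** 2 / (exp * (1 - p[idx].mean()) + 1e-9)
print("HL statistic:", round(hl, 2),
      "p-value:", round(1 - chi2.cdf(hl, df=8), 3))
```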
On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.
López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J
2015-04-01
Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
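A minimal sketch of the fitting procedure for one of the listed functions is given below: a Gompertz curve fitted to synthetic cumulative milk yield with nonlinear least squares, scored with the least-squares form of AIC, and differentiated to recover daily yield. The parameter values and data are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Cumulative yield: asymptote a, displacement b, rate c."""
    return a * np.exp(-b * np.exp(-c * t))

# synthetic 305-day cumulative lactation curve (kg)
rng = np.random.default_rng(7)
dim = np.arange(1, 306)                      # days in milk
y = gompertz(dim, 9500, 4.0, 0.015) + rng.normal(0, 50, dim.size)

popt, _ = curve_fit(gompertz, dim, y, p0=[9000, 3.0, 0.01], maxfev=10000)
rss = np.sum((y - gompertz(dim, *popt)) ** 2)
n, k = dim.size, len(popt)
aic = n * np.log(rss / n) + 2 * k                        # least-squares AIC
daily_peak = np.max(np.gradient(gompertz(dim, *popt)))   # peak daily yield

print("fitted (a, b, c):", np.round(popt, 4))
print("AIC:", round(aic, 1), " peak daily yield (kg/d):", round(daily_peak, 2))
```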
du Bois, Roland M; Weycker, Derek; Albera, Carlo; Bradford, Williamson Z; Costabel, Ulrich; Kartashov, Alex; Lancaster, Lisa; Noble, Paul W; Sahn, Steven A; Szwarcberg, Javier; Thomeer, Michiel; Valeyre, Dominique; King, Talmadge E
2011-05-01
The 6-minute-walk test (6MWT) is a practical and clinically meaningful measure of exercise tolerance with favorable performance characteristics in various cardiac and pulmonary diseases. Performance characteristics in patients with idiopathic pulmonary fibrosis (IPF) have not been systematically evaluated. To assess the reliability, validity, and responsiveness of the 6MWT and estimate the minimal clinically important difference (MCID) in patients with IPF. The study population included all subjects completing a 6MWT in a clinical trial evaluating interferon gamma-1b (n = 822). Six-minute walk distance (6MWD) and other parameters were measured at baseline and at 24-week intervals using a standardized protocol. Parametric and distribution-independent correlation coefficients were used to assess the strength of the relationships between 6MWD and measures of pulmonary function, dyspnea, and health-related quality of life. Both distribution-based and anchor-based methods were used to estimate the MCID. Comparison of two proximal measures of 6MWD (mean interval, 24 d) demonstrated good reliability (coefficient = 0.83; P < 0.001). 6MWD was weakly correlated with measures of physiologic function and health-related quality of life; however, values were consistently and significantly lower for patients with the poorest functional status, suggesting good construct validity. Importantly, change in 6MWD was highly predictive of mortality; a 24-week decline of greater than 50 m was associated with a fourfold increase in risk of death at 1 year (hazard ratio, 4.27; 95% confidence interval, 2.57- 7.10; P < 0.001). The estimated MCID was 24-45 m. The 6MWT is a reliable, valid, and responsive measure of disease status and a valid endpoint for clinical trials in IPF.
2008-01-01
Background The determination of genetic variation in sperm competitive ability is fundamental to distinguish between post-copulatory sexual selection models based on good-genes vs compatible genes. The sexy-sperm and the good-sperm hypotheses for the evolution of polyandry require additive (intrinsic) effects of genes influencing sperm competitiveness, whereas the genetic incompatibility hypothesis invokes non-additive genetic effects. A male's sperm competitive ability is typically estimated from his fertilization success, a measure that is dependent on the ability of rival sperm competitors to fertilize the ova. It is well known that fertilization success may be conditional to genotypic interactions among males as well as between males and females. However, the consequences of effects arising from the random sampling of sperm competitors upon the estimation of genetic variance in sperm competitiveness have been overlooked. Here I perform simulations of mating trials performed in the context of sibling analysis to investigate whether the ability to detect additive genetic variance underlying the sperm competitiveness phenotype is hindered by the relative nature of fertilization success measurements. Results Fertilization success values render biased sperm competitive ability values. Furthermore, asymmetries among males in the errors committed when estimating sperm competitive abilities are likely to exist as long as males exhibit variation in sperm competitiveness. Critically, random effects arising from the relative nature of fertilization success lead to an underestimation of underlying additive genetic variance in sperm competitive ability. Conclusion The results show that, regardless of the existence of genotypic interactions affecting the output of sperm competition, fertilization success is not a perfect predictor of sperm competitive ability because of the stochasticity of the background used to obtain fertilization success measures. Random effects need to be considered in the debate over the maintenance of genetic variation in sperm competitiveness, and when testing good-genes and compatible-genes processes as explanations of polyandrous behaviour using repeatability/heritability data in sperm competitive ability. These findings support the notion that the genetic incompatibility hypothesis needs to be treated as an alternative hypothesis, rather than a null hypothesis, in studies that fail to detect intrinsic sire effects on the sperm competitiveness phenotype. PMID:18474087
Integrative missing value estimation for microarray data.
Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine
2006-10-12
Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in Stanford Microarray Database contain less than eight samples. We present the integrative Missing Value Estimation method (iMISS) by incorporating information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference data sets into consideration. To determine whether the given reference data sets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Square (LLS) imputation algorithm by up to 15% improvement in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over the state-of-the-art missing value estimation approaches such as LLS and is especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
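For contrast with the multi-dataset iMISS strategy, the sketch below shows a plain single-dataset k-nearest-neighbour imputation baseline using scikit-learn; the pooling of neighbour genes across reference datasets, which is the core of iMISS, is not reproduced, and the toy expression matrix is synthetic.

```python
import numpy as np
from sklearn.impute import KNNImputer

# toy expression matrix: 50 genes x 8 samples, generated from 3 latent factors
# so that gene profiles are correlated, then ~10% of entries are masked out
rng = np.random.default_rng(8)
expr = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 8)) \
       + rng.normal(0, 0.3, size=(50, 8))
mask = rng.random(expr.shape) < 0.10
expr_missing = expr.copy()
expr_missing[mask] = np.nan

# a missing value for gene i in sample j is filled in from the values, in
# sample j, of the k genes whose profiles are closest to gene i
imputer = KNNImputer(n_neighbors=5, weights="distance")
expr_imputed = imputer.fit_transform(expr_missing)

rmse = np.sqrt(np.mean((expr_imputed[mask] - expr[mask]) ** 2))
print("imputation RMSE on masked entries:", round(rmse, 3))
```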
Soybean Crop Area Estimation and Mapping in Mato Grosso State, Brazil
NASA Astrophysics Data System (ADS)
Gusso, A.; Ducati, J. R.
2012-07-01
Evaluation of the MODIS Crop Detection Algorithm (MCDA) procedure for estimating historical planted soybean crop areas was done on fields in Mato Grosso State, Brazil. MCDA is based on temporal profiles of EVI (Enhanced Vegetation Index) derived from satellite data of the MODIS (Moderate Resolution Imaging Spectroradiometer) imager, and was previously developed for soybean area estimation in Rio Grande do Sul State, Brazil. According to the MCDA approach, in Mato Grosso soybean area estimates can be provided in December (1st forecast), using images from the sowing period, and in February (2nd forecast), using images from sowing and maximum crop development period. The results obtained by the MCDA were compared with Brazilian Institute of Geography and Statistics (IBGE) official estimates of soybean area at municipal level. Coefficients of determination were between 0.93 and 0.98, indicating a good agreement, and also the suitability of MCDA to estimations performed in Mato Grosso State. On average, the MCDA results explained 96% of the variation of the data estimated by the IBGE. In this way, MCDA calibration was able to provide annual thematic soybean maps, forecasting the planted area in the State, with results which are comparable to the official agricultural statistics.
Determining wave direction using curvature parameters.
de Queiroz, Eduardo Vitarelli; de Carvalho, João Luiz Baptista
2016-01-01
The curvature of the sea wave was tested as a parameter for estimating wave direction, in the search for better estimates of wave direction in shallow waters, where waves of different sizes, frequencies and directions intersect and the wave field is difficult to characterize. We used numerical simulations of the sea surface to determine wave direction calculated from the curvature of the waves. Using 1000 numerical simulations, the statistical variability of the wave direction was determined. The results showed good performance by the curvature parameter for estimating wave direction. Accuracy in the estimates was improved by including wave slope parameters in addition to curvature. The results indicate that curvature is a promising technique to estimate wave directions.
•In this study, the accuracy and precision of curvature parameters for measuring wave direction are analyzed using a model simulation that generates 1000 wave records with directional resolution.
•The model allows the simultaneous simulation of time-series wave properties such as sea surface elevation, slope and curvature, which were used to analyze the variability of estimated directions.
•The simultaneous acquisition of slope and curvature parameters can contribute to estimates of wave direction, thus increasing the accuracy and precision of results.
Park, Walter D; Larson, Timothy S; Griffin, Matthew D; Stegall, Mark D
2012-11-15
After the first year after kidney transplantation, 3% to 5% of grafts fail each year, but detailed studies of how grafts progress to failure are lacking. This study aimed to analyze the functional stability of kidney transplants between 1 and 5 years after transplantation and to identify initially well-functioning grafts with progressive decline in allograft function. The study included 788 adult conventional kidney transplants performed at the Mayo Clinic Rochester between January 2000 and December 2005 with a minimum graft survival and follow-up of 2.6 years. The Modification of Diet in Renal Disease equation for estimating glomerular filtration rate (eGFR(MDRD)) was used to calculate the slope of renal function over time using all available serum creatinine values between 1 and 5 years after transplantation. Most transplants demonstrated good function (eGFR(MDRD) ≥40 mL/min) at 1 year with a positive eGFR(MDRD) slope between 1 and 5 years after transplantation. However, a subset of grafts with 1-year eGFR(MDRD) ≥40 mL/min exhibited a strongly negative eGFR(MDRD) slope between 1 and 5 years, suggestive of progressive loss of graft function. Forty-one percent of this subset reached graft failure during follow-up, accounting for 69% of allograft failures occurring more than 2.5 years after transplantation. This pattern of progressive decline in estimated glomerular filtration rate despite good early function was associated with but not fully attributable to factors suggestive of enhanced antidonor immunity. Longitudinal analysis of serial estimated glomerular filtration rate measurements identifies initially well-functioning kidney transplants at high risk for subsequent graft loss. For this subset, further studies are needed to identify modifiable causes of functional decline.
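A minimal sketch of the quantities used above, the 4-variable MDRD eGFR and its ordinary-least-squares slope over time, is given below; the creatinine trajectory and patient characteristics are hypothetical.

```python
import numpy as np

def egfr_mdrd(scr_mg_dl, age_years, female, black):
    """4-variable MDRD study equation (mL/min/1.73 m^2)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def egfr_slope(times_years, creatinines, age, female=False, black=False):
    """OLS slope of eGFR versus time (mL/min/1.73 m^2 per year)."""
    egfr = np.array([egfr_mdrd(c, age + t, female, black)
                     for t, c in zip(times_years, creatinines)])
    slope, _intercept = np.polyfit(times_years, egfr, deg=1)
    return slope, egfr

# hypothetical recipient: good 1-year function but rising creatinine afterwards
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scr = np.array([1.3, 1.5, 1.8, 2.2, 2.8])
slope, egfr = egfr_slope(t, scr, age=50, female=False)
print("eGFR by year:", np.round(egfr, 1))
print("slope:", round(slope, 1), "mL/min/1.73m^2 per year")
```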
Dowling, N Maritza; Bolt, Daniel M; Deng, Sien
2016-12-01
When assessments are primarily used to measure change over time, it is important to evaluate items according to their sensitivity to change, specifically. Items that demonstrate good sensitivity to between-person differences at baseline may not show good sensitivity to change over time, and vice versa. In this study, we applied a longitudinal factor model of change to a widely used cognitive test designed to assess global cognitive status in dementia, and contrasted the relative sensitivity of items to change. Statistically nested models were estimated introducing distinct latent factors related to initial status differences between test-takers and within-person latent change across successive time points of measurement. Models were estimated using all available longitudinal item-level data from the Alzheimer's Disease Assessment Scale-Cognitive subscale, including participants representing the full-spectrum of disease status who were enrolled in the multisite Alzheimer's Disease Neuroimaging Initiative. Five of the 13 Alzheimer's Disease Assessment Scale-Cognitive items demonstrated noticeably higher loadings with respect to sensitivity to change. Attending to performance change on only these 5 items yielded a clearer picture of cognitive decline more consistent with theoretical expectations in comparison to the full 13-item scale. Items that show good psychometric properties in cross-sectional studies are not necessarily the best items at measuring change over time, such as cognitive decline. Applications of the methodological approach described and illustrated in this study can advance our understanding regarding the types of items that best detect fine-grained early pathological changes in cognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Porosity estimation by semi-supervised learning with sparsely available labeled samples
NASA Astrophysics Data System (ADS)
Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi
2017-09-01
This paper addresses the problem of estimating porosity from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To optimally make use of the valuable porosity data, a semi-supervised machine learning method was proposed, Transductive Conditional Random Field Regression (TCRFR), showing good performance (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than is usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR for extremely weakly supervised scenarios. Our new method outperforms the previous automatic estimation methods on synthetic data and provides a result comparable to the manual, labor-intensive, and time-consuming geostatistics approach on real data, demonstrating its potential as a practical industrial tool.
INFERRING THE ECCENTRICITY DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogg, David W.; Bovy, Jo; Myers, Adam D., E-mail: david.hogg@nyu.ed
2010-12-20
Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision (other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts) so long as the measurements have been communicated as a likelihood function or a posterior sampling.
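A minimal sketch of the hierarchical idea is shown below: the likelihood of population-level Beta shape parameters is approximated, for each star, by averaging the Beta density over posterior samples drawn under a flat interim prior. The per-star "posterior samples" here are a crude Gaussian stand-in for real posterior samplings, and the code is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

def hyper_negloglik(log_ab, ecc_samples):
    """Negative log-likelihood of Beta(a, b) hyperparameters given, for each
    star, posterior samples of eccentricity drawn under a flat prior on [0, 1]."""
    a, b = np.exp(log_ab)
    nll = 0.0
    for samples in ecc_samples:
        # importance-sample the marginal likelihood: mean of f(e|a,b) / uniform
        nll -= np.log(np.mean(beta.pdf(samples, a, b)) + 1e-300)
    return nll

# simulate: true eccentricities ~ Beta(0.9, 2.5); each star yields noisy
# "posterior samples" (clipped to (0, 1)) mimicking measurement uncertainty
rng = np.random.default_rng(9)
true_e = rng.beta(0.9, 2.5, size=60)
ecc_samples = [np.clip(e + rng.normal(0, 0.08, 300), 1e-4, 1 - 1e-4)
               for e in true_e]

fit = minimize(hyper_negloglik, x0=np.log([1.0, 1.0]), args=(ecc_samples,),
               method="Nelder-Mead")
print("estimated (a, b):", np.round(np.exp(fit.x), 2))
```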
Acoustic source localization in mixed field using spherical microphone arrays
NASA Astrophysics Data System (ADS)
Huang, Qinghua; Wang, Tong
2014-12-01
Spherical microphone arrays have been used for source localization in three-dimensional space recently. In this paper, a two-stage algorithm is developed to localize mixed far-field and near-field acoustic sources in free-field environment. In the first stage, an array signal model is constructed in the spherical harmonics domain. The recurrent relation of spherical harmonics is independent of far-field and near-field mode strengths. Therefore, it is used to develop spherical estimating signal parameter via rotational invariance technique (ESPRIT)-like approach to estimate directions of arrival (DOAs) for both far-field and near-field sources. In the second stage, based on the estimated DOAs, simple one-dimensional MUSIC spectrum is exploited to distinguish far-field and near-field sources and estimate the ranges of near-field sources. The proposed algorithm can avoid multidimensional search and parameter pairing. Simulation results demonstrate the good performance for localizing far-field sources, or near-field ones, or mixed field sources.
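For readers unfamiliar with the second-stage idea, the sketch below shows a generic one-dimensional MUSIC pseudo-spectrum for far-field direction finding with a uniform linear array; the spherical-harmonics formulation and the near-field range estimation of the paper are not reproduced, and the array geometry and SNR are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(snapshots, n_sources, scan_deg, d_over_lambda=0.5):
    """1-D MUSIC pseudo-spectrum for a uniform linear array.
    snapshots: (n_sensors, n_snapshots) complex array output."""
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]     # covariance
    eigval, eigvec = np.linalg.eigh(R)                          # ascending order
    En = eigvec[:, : m - n_sources]                             # noise subspace
    k = np.arange(m)[:, None]
    angles = np.deg2rad(scan_deg)[None, :]
    A = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(angles))  # steering
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return 1.0 / denom

# two far-field sources at -20 and 35 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(10)
m, snaps = 8, 400
doas = np.deg2rad([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5 * np.arange(m)[:, None] * np.sin(doas)[None, :])
S = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
X = A @ S + 0.1 * (rng.normal(size=(m, snaps)) + 1j * rng.normal(size=(m, snaps)))

scan = np.linspace(-90, 90, 721)
P = music_spectrum(X, n_sources=2, scan_deg=scan)
peaks, _ = find_peaks(P)
top = peaks[np.argsort(P[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(scan[top]))
```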
Non-invasive pressure difference estimation from PC-MRI using the work-energy equation
Donati, Fabrizio; Figueroa, C. Alberto; Smith, Nicolas P.; Lamata, Pablo; Nordsletten, David A.
2015-01-01
Pressure difference is an accepted clinical biomarker for cardiovascular disease conditions such as aortic coarctation. Currently, measurements of pressure differences in the clinic rely on invasive techniques (catheterization), prompting development of non-invasive estimates based on blood flow. In this work, we propose a non-invasive estimation procedure deriving pressure difference from the work-energy equation for a Newtonian fluid. Spatial and temporal convergence is demonstrated on in silico Phase Contrast Magnetic Resonance Image (PC-MRI) phantoms with steady and transient flow fields. The method is also tested on an image dataset generated in silico from a 3D patient-specific Computational Fluid Dynamics (CFD) simulation and finally evaluated on a cohort of 9 subjects. The performance is compared to existing approaches based on steady and unsteady Bernoulli formulations as well as the pressure Poisson equation. The new technique shows good accuracy, robustness to noise, and robustness to the image segmentation process, illustrating the potential of this approach for non-invasive pressure difference estimation. PMID:26409245
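For reference, the simplified Bernoulli relation often used as a clinical baseline (and as one of the comparison methods above) can be written in a couple of lines; the work-energy formulation proposed in the paper is considerably more involved and is not reproduced here.

```python
def simplified_bernoulli(peak_velocity_m_s):
    """Simplified Bernoulli estimate of the pressure drop across a narrowing:
    delta_p [mmHg] ~= 4 * v^2, with v the peak jet velocity in m/s."""
    return 4.0 * peak_velocity_m_s ** 2

# e.g. a 3.2 m/s jet through a coarctation -> ~41 mmHg estimated gradient
print(simplified_bernoulli(3.2))
```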
NASA Astrophysics Data System (ADS)
Liu, Xiaomang; Liu, Changming; Brutsaert, Wilfried
2016-12-01
The performance of a nonlinear formulation of the complementary principle for evaporation estimation was investigated in 241 catchments with different climate conditions in the eastern monsoon region of China. Evaporation (Ea) calculated by the water balance equation was used as the reference. Ea estimated by the calibrated nonlinear formulation was generally in good agreement with the water balance results, especially in relatively dry catchments. The single parameter in the nonlinear formulation, namely αe as a weak analog of the alpha parameter of Priestley and Taylor, tended to exhibit larger values in warmer and humid near-coastal areas, but smaller values in colder, drier environments inland, with a significant dependency on the aridity index (AI). The nonlinear formulation combined with the equation relating the one parameter and AI provides a promising method to estimate regional Ea with standard and routinely measured meteorological data.
Estimation of pyrethroid pesticide intake using regression ...
Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression modeling performed on measurements of selected pesticides in composited duplicate diet samples allowed (1) estimation of pesticide intakes for a defined demographic community, and (2) comparison of dietary pesticide intakes between the composite and individual samples. Extant databases were useful for assigning individual samples to composites, but they could not provide the breadth of information needed to facilitate measurable levels in every composite. Composite sample measurements were found to be good predictors of pyrethroid pesticide levels in their individual sample constituents where sufficient measurements above the method detection limit are available. Statistical inference shows little evidence of differences between individual and composite measurements and suggests that regression modeling of food groups based on composite dietary samples may provide an effective tool for estimating dietary pesticide intake for a defined population. The research presented in the journal article will improve the community's ability to determine exposures through the dietary route with a less burdensome and less costly method.
The Beta-Geometric Model Applied to Fecundability in a Sample of Married Women
NASA Astrophysics Data System (ADS)
Adekanmbi, D. B.; Bamiduro, T. A.
2006-10-01
The time required to achieve pregnancy among married couples, termed fecundability, has been proposed to follow a beta-geometric distribution. The accuracy of the method used in estimating the parameters of the model has implications for the goodness of fit of the model. In this study, the parameters of the model are estimated using the method of moments and a Newton-Raphson estimation procedure. The goodness of fit of the model was considered, using estimates from the two methods of estimation, as well as the asymptotic relative efficiency of the estimates. A noticeable improvement in the fit of the model to the data on time to conception was observed when the parameters were estimated by the Newton-Raphson procedure, thereby providing reasonable estimates of fecundability for the married female population in the country.
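A minimal sketch of maximum-likelihood estimation for the beta-geometric model is given below, using a generic optimizer in place of a hand-coded Newton-Raphson scheme; the data are simulated, not the survey data analysed in the study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

def betageom_logpmf(x, a, b):
    """log P(X = x) for the beta-geometric model, x = 1, 2, ...
    (per-cycle conception probability p ~ Beta(a, b), X cycles to conception)."""
    return betaln(a + 1, b + x - 1) - betaln(a, b)

def fit_betageometric(cycles):
    cycles = np.asarray(cycles)
    def nll(log_ab):
        a, b = np.exp(log_ab)
        return -np.sum(betageom_logpmf(cycles, a, b))
    fit = minimize(nll, x0=np.log([2.0, 2.0]), method="Nelder-Mead")
    return np.exp(fit.x)

# simulate couples with heterogeneous per-cycle conception probability
rng = np.random.default_rng(11)
p = rng.beta(3, 5, size=500)
cycles = rng.geometric(p)                 # cycles needed to conceive
a_hat, b_hat = fit_betageometric(cycles)
print("a, b:", round(a_hat, 2), round(b_hat, 2),
      " mean fecundability:", round(a_hat / (a_hat + b_hat), 3))
```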
Normalization of metabolomics data with applications to correlation maps.
Jauhiainen, Alexandra; Madhu, Basetti; Narita, Masako; Narita, Masashi; Griffiths, John; Tavaré, Simon
2014-08-01
In metabolomics, the goal is to identify and measure the concentrations of different metabolites (small molecules) in a cell or a biological system. The metabolites form an important layer in the complex metabolic network, and the interactions between different metabolites are often of interest. It is crucial to perform proper normalization of metabolomics data, but current methods may not be applicable when estimating interactions in the form of correlations between metabolites. We propose a normalization approach based on a mixed model, with simultaneous estimation of a correlation matrix. We also investigate how the common use of a calibration standard in nuclear magnetic resonance (NMR) experiments affects the estimation of correlations. We show with both real and simulated data that our proposed normalization method is robust and has good performance when discovering true correlations between metabolites. The standardization of NMR data is shown in simulation studies to affect our ability to discover true correlations to a small extent. However, comparing standardized and non-standardized real data does not result in any large differences in correlation estimates. Source code is freely available at https://sourceforge.net/projects/metabnorm/. Contact: alexandra.jauhiainen@ki.se. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Duarte, Adam; Adams, Michael J.; Peterson, James T.
2018-01-01
Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely are largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.
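The Poisson N-mixture likelihood at the heart of these simulations can be written compactly; the sketch below evaluates it for a single site under assumed abundance rate and detection probability (the QCV diagnostic itself is not specified in the abstract and is not reproduced here).

```python
import numpy as np
from scipy.stats import poisson, binom

# Royle-type Poisson N-mixture likelihood for one site with repeated counts;
# lam, p, K and the counts below are illustrative, not the authors' settings.
def site_loglik(counts, lam, p, K=200):
    N = np.arange(counts.max(), K + 1)                     # feasible latent abundances
    log_prior = poisson.logpmf(N, lam)                     # Poisson abundance prior
    log_obs = sum(binom.logpmf(y, N, p) for y in counts)   # repeated-count likelihood
    return np.logaddexp.reduce(log_prior + log_obs)        # marginalise over N

counts = np.array([3, 5, 4])                               # counts from 3 survey occasions
print(site_loglik(counts, lam=6.0, p=0.4))
```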
Clement, Matthew; O'Keefe, Joy M; Walters, Brianne
2015-01-01
While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.
A hierarchical estimator development for estimation of tire-road friction coefficient
Zhang, Xudong; Göhlich, Dietmar
2017-01-01
The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified “magic formula” tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method. PMID:28178332
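As a rough illustration of the GRNN component described above, the sketch below implements a generic general regression neural network that maps excitation features to a friction coefficient; the features, training pairs and kernel bandwidth are illustrative assumptions rather than the authors' CarSim setup.

```python
import numpy as np

# Generic GRNN prediction: a kernel-weighted average of training targets.
def grnn_predict(X_train, y_train, x, sigma=0.05):
    d2 = np.sum((X_train - x) ** 2, axis=1)         # squared distances to prototypes
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # Gaussian kernel weights
    return np.dot(w, y_train) / (np.sum(w) + 1e-12)

# synthetic training pairs: (slip ratio, normalised longitudinal force) -> mu
X_train = np.array([[0.02, 0.15], [0.05, 0.35], [0.10, 0.60], [0.15, 0.75]])
y_train = np.array([0.2, 0.4, 0.7, 0.85])

print(grnn_predict(X_train, y_train, np.array([0.08, 0.5])))
```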
Engineering Design Handbook. Helicopter Engineering. Part Two. Detail Design
1976-01-01
… rates are specified for a given amount of power available … involved in both symmetrical and turning maneuvers … Normally, the high-speed performance problem … safe main rotor speeds … excessive oiling should be avoided. Good estimations of gear windage losses … The power losses of a typical high-speed twin-rotor gearbox … consisted of two hydraulic pumps and a high-speed generator … gearbox is pitting or spalling of the gears and bearings (par. 4-2.1)
NASA Technical Reports Server (NTRS)
1977-01-01
A slotted waveguide planar array was established as the baseline design for the spaceborne transmitter antenna. Key aspects of efficient energy conversion at both ends of the power transfer link were analyzed and optimized, and alternate approaches in the areas of antenna and tube design are discussed. An integrated design concept was developed which meets design requirements, observes structural and thermal constraints, exhibits good performance, and was developed in adequate depth to permit cost estimating at the subsystem/component level.
A nonparametric smoothing method for assessing GEE models with longitudinal binary data.
Lin, Kuo-Chin; Chen, Yi-Ju; Shyr, Yu
2008-09-30
Studies involving longitudinal binary responses are widely applied in health and biomedical sciences research and are frequently analyzed by the generalized estimating equations (GEE) method. This article proposes an alternative goodness-of-fit test based on the nonparametric smoothing approach for assessing the adequacy of GEE fitted models, which can be regarded as an extension of the goodness-of-fit test of le Cessie and van Houwelingen (Biometrics 1991; 47:1267-1282). The expectation and approximate variance of the proposed test statistic are derived. The asymptotic distribution of the proposed test statistic in terms of a scaled chi-squared distribution and the power performance of the proposed test are discussed by simulation studies. The testing procedure is demonstrated with two real data sets. Copyright (c) 2008 John Wiley & Sons, Ltd.
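A baseline GEE fit of the kind being assessed can be set up in a few lines; the sketch below uses statsmodels with an exchangeable working correlation on simulated longitudinal binary data (the proposed smoothing-based goodness-of-fit test itself is not implemented here).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated longitudinal binary responses: n subjects, t repeated measurements.
rng = np.random.default_rng(1)
n, t = 100, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), t),
    "time": np.tile(np.arange(t), n),
    "x": rng.normal(size=n * t),
})
df["y"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 * df["x"] - 0.2 * df["time"]))))

# GEE with a logit link and exchangeable working correlation.
model = sm.GEE.from_formula("y ~ x + time", groups="subject", data=df,
                            family=sm.families.Binomial(),
                            cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```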
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
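The estimation idea, minimising an EDF statistic over the three Weibull parameters with Powell's method, can be sketched directly with scipy; the failure data and starting values below are synthetic, and an Anderson-Darling variant would follow the same pattern.

```python
import numpy as np
from scipy import stats, optimize

# Synthetic failure data from a three-parameter Weibull (shape c, location, scale).
data = stats.weibull_min.rvs(c=2.0, loc=5.0, scale=10.0, size=60, random_state=0)

def ks_stat(params):
    c, loc, scale = params
    if c <= 0 or scale <= 0 or loc >= data.min():
        return 1.0                      # infeasible parameters -> worst KS value
    return stats.kstest(data, "weibull_min", args=(c, loc, scale)).statistic

# Minimise the Kolmogorov-Smirnov discrepancy with Powell's method.
res = optimize.minimize(ks_stat, x0=[1.5, 0.0, 8.0], method="Powell")
print("shape, location, scale:", res.x, "  min KS statistic:", res.fun)
```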
Estimating physiological skin parameters from hyperspectral signatures
NASA Astrophysics Data System (ADS)
Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe
2013-05-01
We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward maps the skin parameters to a corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally-invasive prescreening of malignant skin cancers.
Analysis of detection performance of multi band laser beam analyzer
NASA Astrophysics Data System (ADS)
Du, Baolin; Chen, Xiaomei; Hu, Leili
2017-10-01
Compared with microwave radar, laser radar offers high resolution, strong anti-interference ability, and good concealment, so it has become a focus of laser technology engineering applications. A large-scale laser radar cross section (LRCS) measurement system is designed and experimentally tested. First, the boundary conditions are measured and the long range laser echo power is estimated according to the actual requirements. The estimation results show that the echo power is greater than the detector's response power. Second, a large-scale LRCS measurement system is designed according to the demonstration and estimation. The system mainly consists of laser shaping, a beam emitting device, a laser echo receiving device and an integrated control device. Finally, using the designed lidar cross section measurement system, the scattering cross section of the target is simulated and tested. The simulation results are basically the same as the test results, and the correctness of the system is confirmed.
Accuracy of visual estimates of joint angle and angular velocity using criterion movements.
Morrison, Craig S; Knudson, Duane; Clayburn, Colby; Haywood, Philip
2005-06-01
A descriptive study was performed to document undergraduate physical education majors' (22.8 +/- 2.4 yr old) visual estimates of sagittal plane elbow angle and angular velocity of elbow flexion. Forty-two subjects rated videotape replays of 30 movements organized into three speeds of movement and two criterion elbow angles. Video images of the movements were analyzed with Peak Motus to measure actual values of elbow angles and peak angular velocity. Of the subjects, 85.7% had speed ratings significantly correlated with true peak elbow angular velocity in all three angular velocity conditions. Few (16.7%) subjects' ratings of elbow angle correlated significantly with actual angles. Analysis of the subjects with good ratings showed the accuracy of visual ratings was significantly related to speed, with decreasing accuracy for slower speeds of movement. The use of criterion movements did not improve the small percentage of novice observers who could accurately estimate body angles during movement.
Fracture mechanics approach to estimate fatigue lives of welded lap-shear specimens
Lam, Poh -Sang; Pan, Jwo
2015-06-29
A full range of stress intensity factor solutions for a kinked crack with finite length is developed as a function of weld width and the sheet thickness. When used with the main crack solutions (global stress intensity factors) in terms of the applied load and the specimen geometric parameters, the fatigue lives of the kinked crack can be estimated for the laser-welded lap-shear specimens. The predicted curve for the load range-fatigue life passes through the cluster of experimental data and is in good agreement. A classical solution associated with an infinitesimal kink is also employed. Furthermore, its life prediction tends to overestimate the actual fatigue life. In addition, the traditional fatigue life estimation based on structural stress is performed for completeness. As a result, this non-fracture mechanics approach only agrees well with the experimental data under high cyclic load conditions.
Diagnostic performance of a Lattice Boltzmann-based method for CT-based fractional flow reserve.
Giannopoulos, Andreas A; Tang, Anji; Ge, Yin; Cheezum, Michael K; Steigner, Michael L; Fujimoto, Shinichiro; Kumamaru, Kanako K; Chiappino, Dante; Della Latta, Daniele; Berti, Sergio; Chiappino, Sara; Rybicki, Frank J; Melchionna, Simone; Mitsouras, Dimitrios
2018-02-20
Fractional flow reserve (FFR) estimated from coronary computed tomography angiography (CT-FFR) offers non-invasive detection of lesion-specific ischaemia. We aimed to develop and validate a fast CT-FFR algorithm utilising the Lattice Boltzmann method for blood flow simulation (LBM CT-FFR). Sixty-four patients with clinically indicated CTA and invasive FFR measurement from three institutions were retrospectively analysed. CT-FFR was performed using an onsite tool interfacing with a commercial Lattice Boltzmann fluid dynamics cloud-based platform. Diagnostic accuracy of LBM CT-FFR ≤0.8 and percent diameter stenosis >50% by CTA to detect invasive FFR ≤0.8 were compared using area under the receiver operating characteristic curve (AUC). Sixty patients successfully underwent LBM CT-FFR analysis; 29 of 73 lesions in 69 vessels had invasive FFR ≤0.8. Total time to perform LBM CT-FFR was 40±10 min. Compared to invasive FFR, LBM CT-FFR had good correlation (r=0.64), small bias (0.009) and good limits of agreement (-0.223 to 0.206). The AUC of LBM CT-FFR (AUC=0.894, 95% confidence interval [CI]: 0.792-0.996) was significantly higher than CTA (AUC=0.685, 95% CI: 0.576-0.794) to detect FFR ≤0.8 (p=0.0021). Per-lesion specificity, sensitivity, and accuracy of LBM CT-FFR were 97.7%, 79.3%, and 90.4%, respectively. LBM CT-FFR has very good diagnostic accuracy to detect lesion-specific ischaemia (FFR ≤0.8) and can be performed in less than one hour.
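The per-lesion accuracy figures quoted above come from standard binary-classification metrics against the invasive FFR ≤0.8 reference; a minimal sketch with invented values is shown below (not the study data).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic per-lesion values: invasive reference and a CT-derived estimate.
invasive_ffr = np.array([0.92, 0.75, 0.83, 0.68, 0.79, 0.88, 0.71, 0.95])
ct_ffr       = np.array([0.90, 0.78, 0.80, 0.70, 0.82, 0.85, 0.69, 0.93])

truth = (invasive_ffr <= 0.8).astype(int)    # ischaemia if invasive FFR <= 0.8
pred  = (ct_ffr <= 0.8).astype(int)

auc = roc_auc_score(truth, -ct_ffr)          # lower CT-FFR -> more likely ischaemic
tn, fp, fn, tp = confusion_matrix(truth, pred).ravel()
print(f"AUC={auc:.2f}  sensitivity={tp/(tp+fn):.2f}  specificity={tn/(tn+fp):.2f}")
```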
Lanctot, Richard B.; Hatch, Shyla A.; Gill, Verena A.; Eens, Marcel
2003-01-01
We evaluated the use of corticosterone to gauge forage availability and predict reproductive performance in black-legged kittiwakes (Rissa tridactyla) breeding in Alaska during 1999 and 2000. We modeled the relationship between baseline levels of corticosterone and a suite of individual and temporal characteristics of the sampled birds. We also provided supplemental food to a sample of pairs and compared their corticosterone levels with that of pairs that were not fed. Corticosterone levels were a good predictor of forage availability in some situations, although inconsistencies between corticosterone levels and reproductive performance of fed and unfed kittiwakes suggested that this was not always the case. In general, higher corticosterone levels were found in birds that lacked breeding experience and in birds sampled shortly after arriving from their wintering grounds. All parameters investigated, however, explained only a small proportion of the variance in corticosterone levels. We also investigated whether corticosterone, supplemental feeding, year of the study, breeding experience, body weight, and sex of a bird were able to predict laying, hatching, and fledging success in kittiwakes. Here, breeding experience, year of the study, and body weight were the best predictors of a bird’s performance. Corticosterone level and supplemental feeding were good predictors of kittiwake reproductive performance in some cases. For example, corticosterone levels of birds sampled during the arrival stage reliably predicted laying success, but were less reliable at predicting hatching and fledging success. Counts of active nests with eggs or chicks may be more reliable estimates of the actual productivity of the colony. Supplemental feeding had strong effects on kittiwake productivity when natural forage was poor, but had little effect when natural forage was plentiful.
Good social skills despite poor theory of mind: exploring compensation in autism spectrum disorder.
Livingston, Lucy Anne; Colvert, Emma; Bolton, Patrick; Happé, Francesca
2018-03-26
It is proposed that some individuals with Autism Spectrum Disorder (ASD) can 'compensate' for their underlying difficulties (e.g. in theory of mind; ToM), thus demonstrating relatively few behavioural symptoms, despite continued core cognitive deficits. The mechanisms underpinning compensation are largely unexplored, as is its potential impact on mental health. This study aimed to estimate compensation patterns in ASD, by contrasting overt social behaviour with ToM task performance, in order to compare the characteristics of 'Low' and 'High' Compensators. A total of 136 autistic adolescents, from the ongoing Social Relationships Study, completed a range of cognitive tasks, the Autistic Diagnostic Observation Schedule (ADOS) and a self-report anxiety questionnaire. Participants were assigned compensation group status; High Compensators demonstrated good ADOS scores despite poor ToM performance, while Low Compensators demonstrated similarly poor ToM, accompanied by poor ADOS scores. High Compensators demonstrated better IQ and executive function (EF), but greater self-reported anxiety, compared with Low Compensators. Such differences were not found when comparing individuals who had good versus poor ADOS scores, when ToM performance was good. Other core autistic characteristics (weak central coherence, nonsocial symptoms) did not differentiate the High and Low Compensators. IQ, EF and anxiety appear to be implicated in the processes by which certain autistic young people can compensate for their underlying ToM difficulties. This tendency to compensate does not appear to reflect the severity of 'hit' for ASD per se, suggesting that well-compensated individuals are not experiencing a milder form of ASD. The construct of compensation in ASD has implications for research and clinical practice. © 2018 The Authors. Journal of Child Psychology and Psychiatry published by John Wiley & Sons Ltd on behalf of Association for Child and Adolescent Mental Health.
Rainfall estimation for real time flood monitoring using geostationary meteorological satellite data
NASA Astrophysics Data System (ADS)
Veerakachen, Watcharee; Raksapatcharawong, Mongkol
2015-09-01
Rainfall estimation by geostationary meteorological satellite data provides good spatial and temporal resolutions. This is advantageous for real time flood monitoring and warning systems. However, a rainfall estimation algorithm developed in one region needs to be adjusted for another climatic region. This work proposes computationally-efficient rainfall estimation algorithms based on an Infrared Threshold Rainfall (ITR) method calibrated with regional ground truth. Hourly rain gauge data collected from 70 stations around the Chao-Phraya river basin were used for calibration and validation of the algorithms. The algorithm inputs were derived from FY-2E satellite observations consisting of infrared and water vapor imagery. The results were compared with the Global Satellite Mapping of Precipitation (GSMaP) near real time product (GSMaP_NRT) using the probability of detection (POD), root mean square error (RMSE) and linear correlation coefficient (CC) as performance indices. Comparison with the GSMaP_NRT product for real time monitoring purpose shows that hourly rain estimates from the proposed algorithm with the error adjustment technique (ITR_EA) offers higher POD and approximately the same RMSE and CC with less data latency.
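The three validation indices used in the comparison (POD, RMSE and CC) are straightforward to compute; the sketch below uses invented hourly gauge and satellite estimates and an assumed 0.1 mm/h rain/no-rain threshold.

```python
import numpy as np

# Synthetic hourly rainfall [mm/h]: gauge reference vs. satellite estimate.
gauge = np.array([0.0, 0.0, 1.2, 3.5, 0.4, 0.0, 2.1, 0.0])
sat   = np.array([0.1, 0.0, 0.9, 2.8, 0.0, 0.2, 1.7, 0.0])
thr = 0.1                                  # assumed rain/no-rain threshold

hits   = np.sum((gauge >= thr) & (sat >= thr))
misses = np.sum((gauge >= thr) & (sat <  thr))

pod  = hits / (hits + misses)              # probability of detection
rmse = np.sqrt(np.mean((sat - gauge) ** 2))
cc   = np.corrcoef(sat, gauge)[0, 1]       # linear correlation coefficient
print(f"POD={pod:.2f}  RMSE={rmse:.2f} mm/h  CC={cc:.2f}")
```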
A Height Estimation Approach for Terrain Following Flights from Monocular Vision
Campos, Igor S. G.; Nascimento, Erickson R.; Freitas, Gustavo M.; Chaimowicz, Luiz
2016-01-01
In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy. PMID:27929424
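The two main ingredients, Lucas-Kanade feature tracking and a decision tree that flags trustworthy estimates, can be prototyped with OpenCV and scikit-learn; the frames, flow summaries and labels below are synthetic placeholders, not the authors' UAV data or feature set.

```python
import cv2
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
prev = (rng.random((240, 320)) * 255).astype(np.uint8)   # synthetic frame pair
curr = np.roll(prev, shift=3, axis=0)                     # simulated camera motion

# Track features between consecutive frames with pyramidal Lucas-Kanade flow.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
flow = (p1 - p0).reshape(-1, 2)[status.ravel() == 1]

features = [[np.mean(np.abs(flow)), np.std(flow), len(flow)]]   # simple flow summary

# Hypothetical training set of flow summaries labelled reliable (1) / unreliable (0).
X_train = [[0.8, 0.3, 150], [4.0, 2.5, 40], [1.1, 0.4, 180], [5.2, 3.0, 25]]
y_train = [1, 0, 1, 0]
clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("height estimate trustworthy:", bool(clf.predict(features)[0]))
```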
Langdon, Jonathan H; Elegbe, Etana; McAleavey, Stephen A
2015-01-01
Single Tracking Location (STL) Shear wave Elasticity Imaging (SWEI) is a method for detecting elastic differences between tissues. It has the advantage of intrinsic speckle bias suppression compared to Multiple Tracking Location (MTL) variants of SWEI. However, the assumption of a linear model leads to an overestimation of the shear modulus in viscoelastic media. A new reconstruction technique denoted Single Tracking Location Viscosity Estimation (STL-VE) is introduced to correct for this overestimation. This technique utilizes the same raw data generated in STL-SWEI imaging. Here, the STL-VE technique is developed by way of a Maximum Likelihood Estimation (MLE) for general viscoelastic materials. The method is then implemented for the particular case of the Kelvin-Voigt Model. Using simulation data, the STL-VE technique is demonstrated and the performance of the estimator is characterized. Finally, the STL-VE method is used to estimate the viscoelastic parameters of ex-vivo bovine liver. We find good agreement between the STL-VE results and the simulation parameters as well as between the liver shear wave data and the modeled data fit. PMID:26168170
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
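As a point of reference for NHPP-based SRMs, the sketch below fits the classical Goel-Okumoto model to synthetic fault-detection times by maximum likelihood; the equilibrium-distribution modelling proposed in the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Goel-Okumoto NHPP: mean value function m(t) = a * (1 - exp(-b t)),
# intensity lambda(t) = a * b * exp(-b t). Fault times and horizon are synthetic.
t = np.sort(np.array([2., 5., 9., 14., 20., 27., 35., 44., 60., 80.]))
T = 100.0

def neg_loglik(params):
    a, b = np.exp(params)                  # keep a, b positive
    # NHPP log-likelihood: sum_i log lambda(t_i) - m(T)
    return -(np.sum(np.log(a * b) - b * t) - a * (1.0 - np.exp(-b * T)))

fit = minimize(neg_loglik, x0=np.log([20.0, 0.05]), method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(f"expected total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
```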
2018-01-01
We propose a novel approach to modelling rater effects in scoring-based assessment. The approach is based on a Bayesian hierarchical model and simulations from the posterior distribution. We apply it to large-scale essay assessment data over a period of 5 years. Empirical results suggest that the model provides a good fit for both the total scores and when applied to individual rubrics. We estimate the median impact of rater effects on the final grade to be ± 2 points on a 50 point scale, while 10% of essays would receive a score at least ± 5 different from their actual quality. Most of the impact is due to rater unreliability, not rater bias. PMID:29614129
Heavy baryons in the large Nc limit
Albertus, C.; Ruiz Arriola, Enrique; Fernando, Ishara P.; ...
2015-09-16
It is shown that in the large Nc limit heavy baryon masses can be estimated quantitatively in a 1/Nc expansion using the Hartree approximation. The results are compared with available lattice calculations for different values of the ratio between the square root of the string tension and the heavy quark mass, with the string tension taken to be independent of Nc. Using a potential adjusted to agree with the one obtained in lattice QCD, a variational analysis of the ground-state spin-averaged baryon mass is performed using Gaussian Hartree wave functions. Relativistic corrections through the quark kinetic energy are included. Lastly, the results provide good estimates for the first sub-leading 1/Nc corrections.
Kinetic modeling of the photocatalytic degradation of clofibric acid in a slurry reactor.
Manassero, Agustina; Satuf, María Lucila; Alfano, Orlando Mario
2015-01-01
A kinetic study of the photocatalytic degradation of the pharmaceutical clofibric acid is presented. Experiments were carried out under UV radiation employing titanium dioxide in water suspension. The main reaction intermediates were identified and quantified. Intrinsic expressions to represent the kinetics of clofibric acid and the main intermediates were derived. The modeling of the radiation field in the reactor was carried out by Monte Carlo simulation. Experimental runs were performed by varying the catalyst concentration and the incident radiation. Kinetic parameters were estimated from the experiments by applying a non-linear regression procedure. Good agreement was obtained between model predictions and experimental data, with an error of 5.9 % in the estimations of the primary pollutant concentration.
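The non-linear regression step can be illustrated with a much simpler stand-in kinetic form; the sketch below fits a pseudo-first-order decay to invented concentration data with scipy's curve_fit (the paper's intrinsic kinetic expressions and Monte Carlo radiation model are not reproduced).

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic clofibric acid concentrations versus irradiation time.
t = np.array([0., 30., 60., 120., 180., 240.])     # time [min]
C = np.array([20.0, 15.1, 11.6, 6.9, 4.1, 2.5])    # concentration [mg/L]

# Pseudo-first-order decay C(t) = C0 * exp(-k t) as a simplified kinetic model.
model = lambda t, C0, k: C0 * np.exp(-k * t)
(C0_hat, k_hat), _ = curve_fit(model, t, C, p0=[20.0, 0.01])
print(f"C0 = {C0_hat:.1f} mg/L, k = {k_hat:.4f} 1/min")
```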
On the scaling of the distribution of daily price fluctuations in the Mexican financial market index
NASA Astrophysics Data System (ADS)
Alfonso, Léster; Mansilla, Ricardo; Terrero-Escalante, César A.
2012-05-01
In this paper, a statistical analysis of log-return fluctuations of the IPC, the Mexican Stock Market Index, is presented. A sample of daily data covering the period 04/09/2000-04/09/2010 was analyzed and fitted to different distributions. Tests of the goodness of fit were performed in order to quantitatively assess the quality of the estimation. Special attention was paid to the impact of the size of the sample on the estimated decay of the distribution's tail. In this study a forceful rejection of normality was obtained. On the other hand, the null hypothesis that the log-fluctuations are fitted to an α-stable Lévy distribution cannot be rejected at the 5% significance level.
A control-theory model for human decision-making
NASA Technical Reports Server (NTRS)
Levison, W. H.; Tanner, R. B.
1971-01-01
A model for human decision making is an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision making model was intended for situations in which the human bases his decision on his estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is less good for the simultaneous decision and control situation.
Couillard, Annabelle; Tremey, Emilie; Prefaut, Christian; Varray, Alain; Heraud, Nelly
2016-12-01
To determine and/or adjust exercise training intensity for patients when the cardiopulmonary exercise test is not accessible, the determination of dyspnoea threshold (defined as the onset of self-perceived breathing discomfort) during the 6-min walk test (6MWT) could be a good alternative. The aim of this study was to evaluate the feasibility and reproducibility of self-perceived dyspnoea threshold and to determine whether a useful equation to estimate ventilatory threshold from self-perceived dyspnoea threshold could be derived. A total of 82 patients were included and performed two 6MWTs, during which they raised a hand to signal self-perceived dyspnoea threshold. The reproducibility in terms of heart rate (HR) was analysed. On a subsample of patients (n=27), a stepwise regression analysis was carried out to obtain a predictive equation for HR at the ventilatory threshold (measured during a cardiopulmonary exercise test), estimated from HR at self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s. Overall, 80% of patients could identify self-perceived dyspnoea threshold during the 6MWT. Self-perceived dyspnoea threshold was reproducibly expressed in HR (coefficient of variation=2.8%). A stepwise regression analysis enabled estimation of HR at ventilatory threshold from HR at self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s (adjusted r=0.79, r=0.63, and relative standard deviation=9.8 bpm). This study shows that a majority of patients with chronic obstructive pulmonary disease can identify a self-perceived dyspnoea threshold during the 6MWT. This HR at the dyspnoea threshold is highly reproducible and enables estimation of the HR at the ventilatory threshold.
Rehan, Waqas; Fischer, Stefan; Rehan, Maaz
2016-09-12
Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may result in inducing additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in case of multichannel WSNs where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we will focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation. In the end, simulations are made using MATLAB, and the results show that the Extended version of NEAMCBTC algorithm (Ext-NEAMCBTC) outperforms the compared techniques in terms of channel quality and stability assessment. It also minimizes channel switching overheads (in terms of switching delays and energy consumption) for accommodating stream-based communication in multichannel WSNs.
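The CRM metric and the normal-equation regression behind the NEC family of algorithms can be sketched as follows; the RSSI/LQI statistics and the way they are combined into CRM are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Synthetic per-channel packet statistics: 50 channels, 100 received packets each.
rng = np.random.default_rng(0)
rssi = rng.normal(-70, 3, size=(50, 100))     # RSSI [dBm]
lqi = rng.uniform(60, 110, size=(50, 100))    # LQI

rssi_std = rssi.std(axis=1)                   # instability of the received signal
lqi_mean = lqi.mean(axis=1)                   # average link quality
crm = lqi_mean / lqi_mean.max() - rssi_std / rssi_std.max()   # assumed CRM combination

# Normal-equation linear regression predicting CRM from the two features.
X = np.column_stack([np.ones(50), rssi_std, lqi_mean])        # design matrix
theta = np.linalg.solve(X.T @ X, X.T @ crm)                   # theta = (X'X)^-1 X'y
print("regression coefficients:", theta)
```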
Effect of the precipitation interpolation method on the performance of a snowmelt runoff model
NASA Astrophysics Data System (ADS)
Jacquin, Alexandra
2014-05-01
Uncertainties on the spatial distribution of precipitation seriously affect the reliability of the discharge estimates produced by watershed models. Although there is abundant research evaluating the goodness of fit of precipitation estimates obtained with different gauge interpolation methods, few studies have focused on the influence of the interpolation strategy on the response of watershed models. The relevance of this choice may be even greater in the case of mountain catchments, because of the influence of orography on precipitation. This study evaluates the effect of the precipitation interpolation method on the performance of conceptual type snowmelt runoff models. The HBV Light model version 4.0.0.2, operating at daily time steps, is used as a case study. The model is applied in Aconcagua at Chacabuquito catchment, located in the Andes Mountains of Central Chile. The catchment's area is 2110[Km2] and elevation ranges from 950[m.a.s.l.] to 5930[m.a.s.l.] The local meteorological network is sparse, with all precipitation gauges located below 3000[m.a.s.l.] Precipitation amounts corresponding to different elevation zones are estimated through areal averaging of precipitation fields interpolated from gauge data. Interpolation methods applied include kriging with external drift (KED), optimal interpolation method (OIM), Thiessen polygons (TP), multiquadratic functions fitting (MFF) and inverse distance weighting (IDW). Both KED and OIM are able to account for the existence of a spatial trend in the expectation of precipitation. By contrast, TP, MFF and IDW, traditional methods widely used in engineering hydrology, cannot explicitly incorporate this information. Preliminary analysis confirmed that these methods notably underestimate precipitation in the study catchment, while KED and OIM are able to reduce the bias; this analysis also revealed that OIM provides more reliable estimations than KED in this region. Using input precipitation obtained by each method, HBV parameters are calibrated with respect to Nash-Sutcliffe efficiency. The performance of HBV in the study catchment is not satisfactory. Although volumetric errors are modest, efficiency values are lower than 70%. Discharge estimates resulting from the application of TP, MFF and IDW obtain similar model efficiencies and volumetric errors. These error statistics moderately improve if KED or OIM are used instead. Even though the quality of precipitation estimates of distinct interpolation methods is dissimilar, the results of this study show that these differences do not necessarily produce noticeable changes in HBV's model performance statistics. This situation arises because the calibration of the model parameters allows some degree of compensation of deficient areal precipitation estimates, mainly through the adjustment of model simulated evaporation and glacier melt, as revealed by the analysis of water balances. In general, even if there is a good agreement between model estimated and observed discharge, this information is not sufficient to assert that the internal hydrological processes of the catchment are properly simulated by a watershed model. Other calibration criteria should be incorporated if a more reliable representation of these processes is desired. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279. The HBV Light software used in this study was kindly provided by J. Seibert, Department of Geography, University of Zürich.
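Of the interpolation methods compared, inverse distance weighting is the simplest to illustrate; the sketch below uses invented gauge locations and daily totals with an assumed power parameter of 2.

```python
import numpy as np

# Synthetic gauge network: coordinates [km] and daily precipitation [mm/day].
gauges_xy = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 8.0], [7.0, 5.0]])
precip = np.array([12.0, 5.0, 20.0, 9.0])

def idw(target_xy, power=2.0):
    d = np.linalg.norm(gauges_xy - target_xy, axis=1)
    if np.any(d < 1e-9):
        return precip[np.argmin(d)]        # target coincides with a gauge
    w = 1.0 / d ** power                   # inverse distance weights
    return np.sum(w * precip) / np.sum(w)

print(idw(np.array([5.0, 5.0])))           # interpolated value at an ungauged point
```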
Variations in pulmonary artery occlusion pressure to estimate changes in pleural pressure.
Bellemare, Patrick; Goldberg, Peter; Magder, Sheldon A
2007-11-01
A readily available assessment of changes in pleural pressure would be useful for ventilator and fluid management in critically ill patients. We examined whether changes in pulmonary artery occlusion pressure (Ppao) adequately reflect respiratory changes in pleural pressure as assessed by changes in intraesophageal balloon pressure (Peso). We studied patients who had a pulmonary catheter and esophageal balloon surrounding a nasogastric tube as part of their care (n=24). We compared changes in Ppao (dPpao) to changes in Peso (dPeso) by Bland-Altman and regression analysis. Adequacy of balloon placement was assessed by performing Mueller maneuvers and adjusting the position to achieve a ratio of dPeso to change in tracheal pressure (dPtr) of 0.85 or higher. This was achieved in only 14 of the 24 subjects. We also compared dCVP to dPeso. The dPpao during spontaneous breaths and positive pressure breaths gave a good estimate of Peso but generally underestimated dPeso (bias=2.2 +8.2 and -3.9 cmH2O for the whole group). The dCVP was not as good a predictor (bias=2.9 +10.3 and -4.6). In patients who have a pulmonary artery catheter in place dPpao gives a lower estimate of changes in pleural pressure and may be more reliable than dPeso. The dCVP is a less reliable predictor than changes in pleural pressure.
Aspartic acid racemisation in purified elastin from arteries as basis for age estimation.
Dobberstein, R C; Tung, S-M; Ritz-Timme, S
2010-07-01
Aspartic acid racemisation (AAR) results in an age-dependent accumulation of D-aspartic acid in durable human proteins and can be used as a basis for age estimation. Routinely, age estimation based on AAR is performed by analysis of dentine. However, in forensic practise, teeth are not always available. Non-dental tissues may be suitable for age estimation based on AAR if they contain durable proteins that can be purified and analysed. Elastin is such a durable protein. To clarify if purified elastin from arteries is a suitable sample for biochemical age estimation, AAR was determined in purified elastin from arteries from individuals of known age (n = 68 individuals, including n = 15 putrefied corpses), considering the influence of different stages of atherosclerosis and putrefaction on the AAR values. AAR was found to increase with age. The relationship between AAR and age was good enough to serve as a basis for age estimation, but worse than known from dentinal proteins. Intravital and post-mortem degradation of elastin may have a moderate effect on the AAR values. Age estimation based on AAR in purified elastin from arteries may be a valuable additional tool in the identification of unidentified cadavers, especially in cases where other methods cannot be applied (e.g., no available teeth, body parts).
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state update with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequentially smoothing of particle weights for state and parameter resampling within a time window as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
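The sequential importance resampling idea underlying the proposed particle methods can be sketched for a toy scalar state with one augmented parameter; the model, noise levels and observations below are illustrative, not the lysimeter configuration, and the smoothing over a time window of weights is not implemented.

```python
import numpy as np

# SIR particle filter with dual state-parameter estimation on a toy model.
rng = np.random.default_rng(0)
n_particles, n_steps = 500, 20
obs = 0.30 + 0.02 * rng.standard_normal(n_steps)   # synthetic soil moisture observations

state = rng.uniform(0.2, 0.4, n_particles)          # soil moisture particles
param = rng.uniform(0.5, 1.5, n_particles)          # augmented model parameter

for k in range(n_steps):
    # propagate: toy model where the parameter scales a relaxation towards 0.3
    state += 0.1 * param * (0.3 - state) + 0.01 * rng.standard_normal(n_particles)
    # weight particles by the observation likelihood (Gaussian error, sd = 0.02)
    w = np.exp(-0.5 * ((obs[k] - state) / 0.02) ** 2)
    w /= w.sum()
    # resample states and parameters jointly (dual estimation)
    idx = rng.choice(n_particles, n_particles, p=w)
    state, param = state[idx], param[idx]

print(f"posterior parameter mean = {param.mean():.2f}")
```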
Lee, Junkyo; Lee, Min Woo; Choi, Dongil; Cha, Dong Ik; Lee, Sunyoung; Kang, Tae Wook; Yang, Jehoon; Jo, Jaemoon; Bang, Won-Chul; Kim, Jongsik; Shin, Dongkuk
2017-12-21
The purpose of this study was to evaluate the accuracy of an active contour model for estimating the posterior ablative margin in images obtained by the fusion of real-time ultrasonography (US) and 3-dimensional (3D) US or magnetic resonance (MR) images of an experimental tumor model for radiofrequency ablation. Chickpeas (n=12) and bovine rump meat (n=12) were used as an experimental tumor model. Grayscale 3D US and T1-weighted MR images were pre-acquired for use as reference datasets. US and MR/3D US fusion was performed for one group (n=4), and US and 3D US fusion only (n=8) was performed for the other group. Half of the models in each group were completely ablated, while the other half were incompletely ablated. Hyperechoic ablation areas were extracted using an active contour model from real-time US images, and the posterior margin of the ablation zone was estimated from the anterior margin. After the experiments, the ablated pieces of bovine rump meat were cut along the electrode path and the cut planes were photographed. The US images with the estimated posterior margin were compared with the photographs and post-ablation MR images. The extracted contours of the ablation zones from 12 US fusion videos and post-ablation MR images were also matched. In the four models fused under real-time US with MR/3D US, compression from the transducer and the insertion of an electrode resulted in misregistration between the real-time US and MR images, making the estimation of the ablation zones less accurate than was achieved through fusion between real-time US and 3D US. Eight of the 12 post-ablation 3D US images were graded as good when compared with the sectioned specimens, and 10 of the 12 were graded as good in a comparison with nicotinamide adenine dinucleotide staining and histopathologic results. Estimating the posterior ablative margin using an active contour model is a feasible way of predicting the ablation area, and US/3D US fusion was more accurate than US/MR fusion.
Estimating SPT-N Value Based on Soil Resistivity using Hybrid ANN-PSO Algorithm
NASA Astrophysics Data System (ADS)
Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd
2018-04-01
Standard Penetration Resistance (N value) is used in many empirical geotechnical engineering formulas. Meanwhile, soil resistivity is a measure of soil's resistance to electrical flow. For a particular site, usually only limited N value data are available. In contrast, resistivity data can be obtained extensively. Moreover, previous studies showed evidence of a correlation between N value and resistivity value. Yet, no existing method is able to interpret resistivity data for estimation of N value. Thus, the aim is to develop a method for estimating N value using resistivity data. This study proposes a hybrid Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) method to estimate N value using resistivity data. Five different ANN-PSO models based on five boreholes were developed and analyzed. The performance metrics used were the coefficient of determination, R2, and the mean absolute error, MAE. Analysis of the results found that this method can estimate N value (best R2 = 0.85 and best MAE = 0.54) given that the constraint Δl̄ref is satisfied. The results suggest that the ANN-PSO method can be used to estimate N value with good accuracy.
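A minimal version of the hybrid idea, PSO searching the weights of a small one-hidden-layer network that maps resistivity to N value, is sketched below; the resistivity/N-value pairs, network size and PSO settings are invented placeholders, not the borehole data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
res = rng.uniform(10, 500, 40)                        # resistivity [ohm·m] (synthetic)
n_val = 50 - 8 * np.log(res) + rng.normal(0, 2, 40)   # synthetic SPT N values

x = (res - res.mean()) / res.std()                    # normalised input
hidden = 4
dim = 3 * hidden + 1          # input weights, hidden biases, output weights, output bias

def predict(w, x):
    w1, b1 = w[:hidden], w[hidden:2 * hidden]
    w2, b2 = w[2 * hidden:3 * hidden], w[-1]
    h = np.tanh(np.outer(x, w1) + b1)                 # 1-hidden-layer tanh network
    return h @ w2 + b2

def mse(w):
    return np.mean((predict(w, x) - n_val) ** 2)

# Basic global-best PSO over the network weights.
n_part = 30
pos = rng.normal(0, 1, (n_part, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random((2, n_part, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("training MSE:", mse(gbest))
```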
Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study
NASA Technical Reports Server (NTRS)
Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.
2010-01-01
This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turn around times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.
Ai, Qingsong; Zhu, Chengxiang; Zuo, Jie; Meng, Wei; Liu, Quan; Xie, Sheng Q; Yang, Ming
2017-12-28
A rehabilitation robot plays an important role in relieving the therapists' burden and helping patients with ankle injuries to perform more accurate and effective rehabilitation training. However, a majority of current ankle rehabilitation robots are rigid and have drawbacks in terms of complex structure, poor flexibility and lack of safety. Taking advantages of pneumatic muscles' good flexibility and light weight, we developed a novel two degrees of freedom (2-DOF) parallel compliant ankle rehabilitation robot actuated by pneumatic muscles (PMs). To solve the PM's nonlinear characteristics during operation and to tackle the human-robot uncertainties in rehabilitation, an adaptive backstepping sliding mode control (ABS-SMC) method is proposed in this paper. The human-robot external disturbance can be estimated by an observer, who is then used to adjust the robot output to accommodate external changes. The system stability is guaranteed by the Lyapunov stability theorem. Experimental results on the compliant ankle rehabilitation robot show that the proposed ABS-SMC is able to estimate the external disturbance online and adjust the control output in real time during operation, resulting in a higher trajectory tracking accuracy and better response performance especially in dynamic conditions.
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account, is based on the actual positions of the elements, and can be used with arbitrary planar-geometry arrays. Second, a subspace model-errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-errors estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy; its performance improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is used to locate sound sources. Together, these two algorithms form the robust sound source localization approach. The more accurate steering vectors they provide can also be used for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
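To make the subspace idea behind MUSIC-type localization concrete, the sketch below implements a standard narrowband MUSIC pseudo-spectrum for a uniform linear array. It is only an illustration under those simplifying assumptions and does not reproduce the broadband, near-field W2D-MUSIC or the model-error estimation stage described above; all array and signal parameters are hypothetical.

```python
import numpy as np

def music_spectrum(snapshots, n_sources, element_pos, wavelength, angles_deg):
    """Return the MUSIC pseudo-spectrum over candidate directions of arrival."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance (M x M)
    eigvals, eigvecs = np.linalg.eigh(R)                      # eigenvalues in ascending order
    En = eigvecs[:, :-n_sources]                              # noise-subspace eigenvectors
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * element_pos * np.sin(theta) / wavelength)[:, None]
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# Hypothetical usage: 8-element, half-wavelength uniform linear array, one source at 20 degrees.
rng = np.random.default_rng(0)
wavelength, M, N = 1.0, 8, 200
pos = 0.5 * wavelength * np.arange(M)
a_true = np.exp(-2j * np.pi * pos * np.sin(np.deg2rad(20.0)) / wavelength)[:, None]
s = rng.standard_normal((1, N)) + 1j * rng.standard_normal((1, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = a_true @ s + noise
angles = np.arange(-90.0, 90.0, 0.5)
P = music_spectrum(X, 1, pos, wavelength, angles)
print("estimated direction of arrival:", angles[np.argmax(P)], "degrees")
```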
Which response format reveals the truth about donations to a public good?
Thomas C. Brown; Patricia A. Champ; Richard C. Bishop; Daniel W. McCollum
1996-01-01
Several contingent valuation studies have found that the open-ended format yields lower estimates of willingness to pay (WTP) than does the closed-ended, or dichotomous choice, format. In this study, WTP for a public environmental good was estimated under four conditions: actual payment in response to open-ended and closed-ended requests, and hypothetical payment in...
Van Ael, Evy; De Cooman, Ward; Blust, Ronny; Bervoets, Lieven
2015-01-01
Large datasets of total and dissolved metal concentrations in Flemish (Belgium) fresh water systems and the associated macroinvertebrate-based biotic index MMIF (Multimetric Macroinvertebrate Index Flanders) were used to estimate critical metal concentrations for good ecological water quality, as imposed by the European Water Framework Directive (2000). The contribution of different stressors (metals and water characteristics) to the MMIF was studied by constructing generalized linear mixed effect models. Comparison between the estimated critical concentrations and the European and Flemish environmental quality standards (EQS) shows that the EQS for As, Cd, Cu and Zn seem to be sufficient to reach a good ecological quality status as expressed by the invertebrate-based biotic index. In contrast, the EQS for Cr, Hg and Pb are higher than the estimated critical concentrations, which suggests that when environmental concentrations are at the same level as the EQS a good quality status might not be reached. Mixed models that included metal concentrations in their structure did not yield a significant outcome. However, the mixed models showed the primary importance of water characteristics (oxygen level, temperature, ammonium concentration and conductivity) for the MMIF. Copyright © 2014 Elsevier Ltd. All rights reserved.
Yajima, Airi; Uesawa, Yoshihiro; Ogawa, Chiaki; Yatabe, Megumi; Kondo, Naoki; Saito, Shinichiro; Suzuki, Yoshihiko; Atsuda, Kouichiro; Kagaya, Hajime
2015-05-01
There exist various useful predictive models, such as the Cockcroft-Gault model, for estimating creatinine clearance (CLcr). However, the prediction of renal function is difficult in patients with cancer treated with cisplatin. Therefore, we attempted to construct a new model for predicting CLcr in such patients. Japanese patients with head and neck cancer who had received cisplatin-based chemotherapy were used as subjects. A multiple regression equation was constructed as a model for predicting CLcr values based on background and laboratory data. A model for predicting CLcr, which included body surface area, serum creatinine and albumin, was constructed. The model exhibited good performance prior to cisplatin therapy. In addition, it performed better than previously reported models after cisplatin therapy. The predictive model constructed in the present study displayed excellent potential and was useful for estimating the renal function of patients treated with cisplatin therapy. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
Pak, S I; Chang, K S
2006-12-01
A Venturi scrubber involves a dispersed three-phase flow of gas, dust, and liquid. Atomization of the liquid jet and interaction between the phases have a large effect on the performance of Venturi scrubbers. In this study, a computational model for the interacting three-phase flow in a Venturi scrubber has been developed to estimate pressure drop and collection efficiency. The Eulerian-Lagrangian method is used to solve the model numerically. The gas flow is solved with an Eulerian approach using the Navier-Stokes equations, and the motion of dust and liquid droplets, described by the Basset-Boussinesq-Oseen (B-B-O) equation, is solved with a Lagrangian approach. The model includes interaction between gas and droplets, atomization of the liquid jet, droplet deformation, breakup and collision of droplets, and capture of dust by droplets. A circular Pease-Anthony Venturi scrubber was simulated numerically with this new model. The numerical results were compared with earlier experimental data for pressure drop and collection efficiency, and showed good agreement.
The Future of Drought in the Southeastern U.S.: Projections from downscaled CMIP5 models
NASA Astrophysics Data System (ADS)
Keellings, D.; Engstrom, J.
2017-12-01
The Southeastern U.S. has been repeatedly impacted by severe droughts that have affected the environment and economy of the region. In this study, the ability of 32 downscaled CMIP5 models, bias corrected using localized constructed analogs (LOCA), to simulate historical observations of dry spells from 1950-2005 is assessed using Perkins skill scores and significance tests. The models generally simulate the distribution of dry days well, but there are significant differences between the ability of the best and worst performing models, particularly in the upper tail of the distribution. The best and worst performing models are then projected through 2099, using RCP 4.5 and 8.5, and estimates of 20-year return periods are compared. Only the higher-skill models provide a good estimate of extreme dry spell lengths, with simulations of 20-year return values within ±5 days of observed values across the region. Projected return values differ by model grouping, but all models exhibit significant increases.
Estimation of daily flow rate of photovoltaic water pumping systems using solar radiation data
NASA Astrophysics Data System (ADS)
Benghanem, M.; Daffallah, K. O.; Almohammedi, A.
2018-03-01
This paper presents a simple model that contributes to studies of photovoltaic (PV) water pumping system sizing. The nonlinear relation between water flow rate and solar power was first obtained experimentally and then used for performance prediction. The proposed model enables us to simulate the water flow rate from solar radiation data for different heads (50 m, 60 m, 70 m and 80 m) and for an 8S × 3P PV array configuration. The experimental data are obtained with our pumping test facility located at the Madinah site (Saudi Arabia). The performances are calculated using measured solar radiation data from different locations in Saudi Arabia. Knowing the solar radiation data, we estimated the water flow rate Q with good precision at five locations (Al-Jouf, Solar Village, AL-Ahsa, Madinah and Gizan) in Saudi Arabia. The flow rate Q increases with pump power for the different heads, following the proposed nonlinear model.
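As a rough, self-contained illustration of how such a nonlinear flow-rate/power relation could be fitted and then combined with solar radiation data to estimate daily pumped volume, here is a hedged Python sketch; the saturating model form, the coefficients and the hourly power profile are hypothetical stand-ins, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def flow_model(power_w, a, b):
    """Hypothetical saturating flow-rate model Q = a * (1 - exp(-b * P)) [m^3/h]."""
    return a * (1.0 - np.exp(-b * power_w))

# Synthetic stand-in for measured (pump power [W], flow rate [m^3/h]) pairs.
rng = np.random.default_rng(1)
power = np.linspace(0.0, 1200.0, 50)
flow = flow_model(power, 8.0, 0.003) + rng.normal(0.0, 0.2, power.size)

params, _ = curve_fit(flow_model, power, flow, p0=[5.0, 0.01])
print("fitted a, b:", np.round(params, 4))

# Daily volume estimate: apply the fitted curve to an (assumed) hourly pump power profile.
hourly_power = np.array([0, 0, 50, 300, 700, 950, 1100, 1050, 850, 500, 150, 0], float)
daily_volume = flow_model(hourly_power, *params).sum()   # m^3, assuming 1-hour steps
print("estimated daily pumped volume:", round(daily_volume, 1), "m^3")
```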
Using the power balance model to simulate cross-country skiing on varying terrain.
Moxnes, John F; Sandbakk, Oyvind; Hausken, Kjell
2014-01-01
The current study adapts the power balance model to simulate cross-country skiing on varying terrain. We assumed that the skier's locomotive power at a self-chosen pace is a function of speed, which is influenced by friction, incline, air drag, and mass. An elite male skier's position along the track during ski skating was simulated and compared with his experimental data. As input values in the model, air drag and friction were estimated from the literature based on the skier's mass, snow conditions, and speed. We regard the fit as good, since the difference in racing time between simulation and measurement was 2 seconds out of the 815-second racing time, with acceptable fit in both uphill and downhill terrain. Using this model, we estimated the influence of changes in various factors such as air drag, friction, and body mass on performance. In conclusion, the power balance model with locomotive power as a function of speed was found to be a valid tool for analyzing performance in cross-country skiing.
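A minimal sketch of the power-balance idea follows: the skier's locomotive power is balanced against friction, gravity and air drag, and any surplus changes kinetic energy. Locomotive power is held constant here, and all parameter values and the three-segment course are assumed for illustration; the study itself uses a speed-dependent locomotive power and a measured track profile.

```python
import numpy as np

def simulate_ski(track_incline_deg, segment_len, mass=80.0, P_loco=300.0,
                 mu=0.04, CdA=0.55, rho=1.2, g=9.81, dt=0.1):
    """Integrate speed along consecutive track segments from a simple power balance.
    Constant locomotive power and placeholder parameter values are assumed."""
    v, t = 2.0, 0.0                     # initial speed [m/s] and elapsed time [s]
    for incline, length in zip(track_incline_deg, segment_len):
        s = 0.0
        theta = np.deg2rad(incline)
        while s < length:
            P_fric = mu * mass * g * np.cos(theta) * v
            P_grav = mass * g * np.sin(theta) * v
            P_drag = 0.5 * rho * CdA * v ** 3
            # surplus power changes kinetic energy: m * v * dv/dt = P_loco - losses
            dv = (P_loco - P_fric - P_grav - P_drag) / (mass * v) * dt
            v = max(v + dv, 0.5)        # keep speed positive
            s += v * dt
            t += dt
    return t

# Hypothetical 3-segment course: flat, uphill, downhill (inclines in degrees, lengths in metres).
print("simulated time [s]:", round(simulate_ski([0, 6, -6], [500, 300, 300]), 1))
```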
Gear crack propagation investigations
NASA Technical Reports Server (NTRS)
Lewicki, David G.; Ballarini, Roberto
1996-01-01
Analytical and experimental studies were performed to investigate the effect of gear rim thickness on crack propagation life. The FRANC (FRacture ANalysis Code) computer program was used to simulate crack propagation. The FRANC program used principles of linear elastic fracture mechanics, finite element modeling, and a unique re-meshing scheme to determine crack tip stress distributions, estimate stress intensity factors, and model crack propagation. Various fatigue crack growth models were used to estimate crack propagation life based on the calculated stress intensity factors. Experimental tests were performed in a gear fatigue rig to validate predicted crack propagation results. Test gears were installed with special crack propagation gages in the tooth fillet region to measure bending fatigue crack growth. Good correlation between predicted and measured crack growth was achieved when the fatigue crack closure concept was introduced into the analysis. As the gear rim thickness decreased, the compressive cyclic stress in the gear tooth fillet region increased. This retarded crack growth and increased the number of crack propagation cycles to failure.
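For context, the sketch below integrates the Paris law, one of the classic fatigue crack growth models of the kind referenced above, to estimate propagation life from the stress intensity factor range; the material constants, geometry factor and stress range are placeholder values, not results from the gear study.

```python
import numpy as np

def paris_life(a0, af, C, m, delta_sigma, Y=1.12, n_steps=20000):
    """Integrate da/dN = C * (dK)^m from initial crack length a0 to final length af [m]."""
    a = np.linspace(a0, af, n_steps)
    dK = Y * delta_sigma * np.sqrt(np.pi * a)        # stress intensity range [MPa*sqrt(m)]
    dN_da = 1.0 / (C * dK ** m)                      # cycles per metre of crack growth
    return np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a))   # trapezoidal integration

# Placeholder inputs: a 0.2 mm crack grown to 2 mm under a 300 MPa stress range.
cycles = paris_life(a0=0.2e-3, af=2.0e-3, C=1.0e-11, m=3.0, delta_sigma=300.0)
print(f"estimated propagation life: {cycles:.2e} cycles")
```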
Wang, Yang; Li, Mingxing; Tu, Z C; Hernández, A Calvo; Roco, J M M
2012-07-01
The figure of merit for refrigerators performing finite-time Carnot-like cycles between two reservoirs at temperatures T_h and T_c (
Recent Progress Towards Predicting Aircraft Ground Handling Performance
NASA Technical Reports Server (NTRS)
Yager, T. J.; White, E. J.
1981-01-01
The significant progress which has been achieved in the development of aircraft ground handling simulation capability is reviewed, and additional improvements in software modeling are identified. The problem of providing the simulator input data necessary for adequate modeling of aircraft tire/runway friction behavior is discussed, and efforts to improve this complex model, and hence simulator fidelity, are described. Aircraft braking performance data obtained on several wet runway surfaces are compared to ground vehicle friction measurements and, by use of empirically derived methods, good agreement is shown between actual aircraft braking friction and that estimated from ground vehicle data. A relatively new friction measuring device, the friction tester, showed great promise in providing data applicable to aircraft friction performance. Additional research efforts to improve methods of predicting tire friction performance are discussed, including use of an instrumented tire test vehicle to expand the tire friction data bank and a study of surface texture measurement techniques.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-03
... requirements of FERC-716 ("Good Faith Request for Transmission Service and Response by Transmitting Utility..., provide standards by which the Commission determines if and when a valid good faith request for... 12 components of a good faith estimate and 5 components of a reply to a good faith request. Action...
Rediscovery of Good-Turing estimators via Bayesian nonparametrics.
Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye
2016-03-01
The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library. © 2015, The International Biometric Society.
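As a concrete reference point, the classical (unsmoothed) Good-Turing estimate of the discovery probability, i.e. the probability that the next observation is a previously unseen species, is simply the proportion of singletons in the sample; the sketch below computes it on a toy sample. The Bayesian nonparametric smoothing and the credible intervals discussed in the article are not shown.

```python
from collections import Counter

def good_turing_new_species_prob(sample):
    """Turing estimator: P(next observation is new) ~ n1 / n,
    where n1 is the number of species observed exactly once and n is the sample size."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)

# Toy sample: 7 distinct species, 5 of them seen exactly once -> estimate 5/10 = 0.5.
sample = ["a", "b", "a", "c", "d", "d", "e", "a", "f", "g"]
print("P(new species on next draw) ~", good_turing_new_species_prob(sample))
```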
Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data
Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei
2015-01-01
Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitude and direction of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using Akaike information criterion (AIC) or Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. Besides that, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differs, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data.
Xu, Lizhen; Paterson, Andrew D; Turpin, Williams; Xu, Wei
2015-01-01
Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitude and direction of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using Akaike information criterion (AIC) or Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. Besides that, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differs, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects.
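To illustrate the kind of comparison described above, the following self-contained sketch fits a Poisson model and a zero-inflated Poisson (ZIP) model to simulated OTU-like counts by maximum likelihood and compares them by AIC. Covariate effects, hurdle models, dispersion and the Vuong test used in the paper are omitted, and the simulated data are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# Simulated OTU-like counts with 40% structural (excess) zeros.
rng = np.random.default_rng(42)
n = 500
structural_zero = rng.random(n) < 0.4
y = np.where(structural_zero, 0, rng.poisson(3.0, n))

def negloglik_poisson(params):
    lam = np.exp(params[0])
    return -np.sum(poisson.logpmf(y, lam))

def negloglik_zip(params):
    pi = 1.0 / (1.0 + np.exp(-params[0]))            # zero-inflation probability
    lam = np.exp(params[1])
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))   # likelihood of an observed zero
    ll_pos = np.log(1 - pi) + poisson.logpmf(y, lam) # likelihood of a count from the Poisson part
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

fit_p = minimize(negloglik_poisson, x0=[0.0])
fit_zip = minimize(negloglik_zip, x0=[0.0, 0.0])
aic_p = 2 * 1 + 2 * fit_p.fun
aic_zip = 2 * 2 + 2 * fit_zip.fun
print(f"AIC Poisson: {aic_p:.1f}  AIC ZIP: {aic_zip:.1f}")   # ZIP should be clearly lower
```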
NASA Technical Reports Server (NTRS)
Moisan, John R.; Moisan, Tiffany A. H.; Linkswiler, Matthew A.
2011-01-01
Phytoplankton absorption spectra and High-Performance Liquid Chromatography (HPLC) pigment observations from the Eastern U.S. and global observations from NASA's SeaBASS archive are used in a linear inverse calculation to extract pigment-specific absorption spectra. Using these pigment-specific absorption spectra to reconstruct the phytoplankton absorption spectra results in high correlations at all visible wavelengths (r² from 0.83 to 0.98), and linear regressions (slopes ranging from 0.8 to 1.1). Higher correlations (r² from 0.75 to 1.00) are obtained in the visible portion of the spectra when the total phytoplankton absorption spectra are unpackaged by multiplying the entire spectra by a factor that sets the total absorption at 675 nm to that expected from absorption spectra reconstruction using measured pigment concentrations and laboratory-derived pigment-specific absorption spectra. The derived pigment-specific absorption spectra were further used with the total phytoplankton absorption spectra in a second linear inverse calculation to estimate the various phytoplankton HPLC pigments. A comparison between the estimated and measured pigment concentrations for the 18 pigment fields showed good correlations (r² greater than 0.5) for 7 pigments and very good correlations (r² greater than 0.7) for chlorophyll a and fucoxanthin. Higher correlations result when the analysis is carried out at more local geographic scales. The ability to estimate phytoplankton pigments using pigment-specific absorption spectra is critical for using hyperspectral inverse models to retrieve phytoplankton pigment concentrations and other Inherent Optical Properties (IOPs) from passive remote sensing observations.
Sun, Chao; Feng, Wenquan; Du, Songlin
2018-01-01
As multipath is one of the dominant error sources for high-accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier (BOC) modulation, as a modernized signal structure, is adopted to achieve significant enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for binary phase shift keying (BPSK) signals are not optimal. Currently, non-parametric and parametric approaches have been studied specifically for multipath mitigation in BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability across different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters; their drawbacks are high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transferred to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we apply segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
Gu, Yingxin; Wylie, Bruce K.; Howard, Daniel M.; Phuyal, Khem P.; Ji, Lei
2013-01-01
In this study, we developed a new approach that adjusted normalized difference vegetation index (NDVI) pixel values that were near saturation to better characterize the cropland performance (CP) in the Greater Platte River Basin (GPRB), USA. The relationship between NDVI and the ratio vegetation index (RVI) at high NDVI values was investigated, and an empirical equation for estimating saturation-adjusted NDVI (NDVIsat_adjust) based on RVI was developed. A 10-year (2000–2009) NDVIsat_adjust data set was developed using 250-m 7-day composite historical eMODIS (expedited Moderate Resolution Imaging Spectroradiometer) NDVI data. The growing season averaged NDVI (GSN), which is a proxy for ecosystem performance, was estimated and long-term NDVI non-saturation- and saturation-adjusted cropland performance (CPnon_sat_adjust, CPsat_adjust) maps were produced over the GPRB. The final CP maps were validated using National Agricultural Statistics Service (NASS) crop yield data. The relationship between CPsat_adjust and the NASS average corn yield data (r = 0.78, 113 samples) is stronger than the relationship between CPnon_sat_adjust and the NASS average corn yield data (r = 0.67, 113 samples), indicating that the new CPsat_adjust map reduces the NDVI saturation effects and is in good agreement with the corn yield ground observations. Results demonstrate that the NDVI saturation adjustment approach improves the quality of the original GSN map and better depicts the actual vegetation conditions of the GPRB cropland systems.
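The study's empirical saturation-adjustment equation is not reproduced here; the sketch below only shows the exact algebraic NDVI/RVI relationship it builds on, and why RVI retains sensitivity where NDVI saturates.

```python
import numpy as np

def ndvi_to_rvi(ndvi):
    """RVI = NIR/Red and NDVI = (NIR - Red)/(NIR + Red) imply RVI = (1 + NDVI)/(1 - NDVI)."""
    return (1.0 + np.asarray(ndvi)) / (1.0 - np.asarray(ndvi))

def rvi_to_ndvi(rvi):
    """Inverse mapping: NDVI = (RVI - 1)/(RVI + 1)."""
    return (np.asarray(rvi) - 1.0) / (np.asarray(rvi) + 1.0)

# Near the saturation range, small NDVI differences map to large RVI differences,
# which is what makes an RVI-based adjustment of near-saturated pixels possible.
ndvi = np.array([0.80, 0.85, 0.90, 0.95])
rvi = ndvi_to_rvi(ndvi)
print(np.round(rvi, 1))               # [ 9.  12.3 19.  39. ]
print(np.round(rvi_to_ndvi(rvi), 2))  # recovers the original NDVI values
```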
Stochastic rainfall synthesis for urban applications using different regionalization methods
NASA Astrophysics Data System (ADS)
Callau Poduje, A. C.; Leimbach, S.; Haberlandt, U.
2017-12-01
The proper design and efficient operation of urban drainage systems require long and continuous rainfall series at high temporal resolution. Unfortunately, such time series are usually available at only a few locations, so it is desirable to develop a stochastic precipitation model to generate rainfall at locations without observations. The model presented is based on an alternating renewal process and involves an external and an internal structure. The members of these structures are described by probability distributions which are site specific. Different regionalization methods based on site descriptors are presented for estimating the distributions at locations without observations. Regional frequency analysis, multiple linear regression and a vine-copula method are applied for this purpose. An area located in the north-west of Germany, with a total of 81 stations with 5-min rainfall records, is used to compare the different methods. The site descriptors include information available for the whole region: position, topography and hydrometeorological characteristics estimated from long-term observations. The methods are compared directly by cross validation of different rainfall statistics. Given that the model is stochastic, the evaluation is performed on ensembles of many long synthetic time series which are compared with observed ones. The performance is also evaluated indirectly by setting up a fictional urban hydrological system to test the capability of the different methods regarding flooding and overflow characteristics. The results show a good representation of the seasonal variability and good performance in reproducing the sample statistics of the rainfall characteristics. The copula-based method proves to be the most robust of the three. Advantages and disadvantages of the different methods are presented and discussed.
Combining Neural Networks with Existing Methods to Estimate 1 in 100-Year Flood Event Magnitudes
NASA Astrophysics Data System (ADS)
Newson, A.; See, L.
2005-12-01
Over the last fifteen years artificial neural networks (ANNs) have been shown to be advantageous for the solution of many hydrological modelling problems. The use of ANNs for flood magnitude estimation in ungauged catchments, however, is a relatively new and under-researched area. In this paper ANNs are used to estimate the magnitude of the 100-year flood event (Q100) for a number of ungauged catchments. The data used in this study were provided by the Centre for Ecology and Hydrology's Flood Estimation Handbook (FEH), which contains information on catchments across the UK. Sixteen catchment descriptors for 719 catchments were used to train an ANN, with the data split into training, validation and test sets. The goodness-of-fit statistics on the test data set indicated good model performance, with an r-squared value of 0.8 and a coefficient of efficiency of 79 percent. Data for twelve ungauged catchments were then put through the trained ANN to produce estimates of Q100. Two other accepted methodologies were also employed: the FEH statistical method and the FSR (Flood Studies Report) design storm technique, both of which are used to produce flood frequency estimates. The advantage of developing an ANN model is that it provides a third figure to aid a hydrologist in making an accurate estimate. For six of the twelve catchments, there was a relatively low spread between estimates; in these instances, an estimate of Q100 could be made with a fair degree of certainty. Of the remaining six catchments, three had areas greater than 1000 km², which means the FSR design storm estimate cannot be used. Armed with the ANN model and the FEH statistical method, the hydrologist still has two possible estimates to consider; for these three catchments, the estimates were also fairly similar, providing additional confidence in the estimation. In summary, the findings of this study show that an accurate estimate of Q100 can be made using the catchment descriptors of an ungauged catchment as inputs to an ANN. The study also demonstrates how the ANN Q100 estimates can be used in conjunction with a number of other estimates to provide a more accurate and confident estimate of Q100 at an ungauged catchment, exploiting the strengths of existing methods in combination with the latest soft computing tools.
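A minimal sketch of this kind of descriptor-driven ANN regression is given below, using scikit-learn on synthetic stand-ins for the FEH catchment descriptors; the actual sixteen descriptors, the 719-catchment dataset and the network architecture used in the study are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Synthetic stand-ins for catchment descriptors and Q100 (hypothetical relation).
rng = np.random.default_rng(0)
n = 719
area = rng.lognormal(4, 1, n)                      # catchment area [km^2]
saar = rng.normal(900, 200, n)                     # average annual rainfall [mm]
bfi = rng.uniform(0.2, 0.9, n)                     # baseflow index
X = np.column_stack([area, saar, bfi])
q100 = 0.05 * area ** 0.8 * (saar / 1000) ** 1.5 * (1 - bfi) * rng.lognormal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, np.log(q100), test_size=0.2, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=1))
model.fit(X_tr, y_tr)
print("test r^2 on log(Q100):", round(r2_score(y_te, model.predict(X_te)), 2))
```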
Chapinal, Núria; Schumaker, Brant A; Joly, Damien O; Elkin, Brett T; Stephen, Craig
2015-07-01
We estimated the sensitivity and specificity of the caudal-fold skin test (CFT), the fluorescent polarization assay (FPA), and the rapid lateral-flow test (RT) for the detection of Mycobacterium bovis in free-ranging wild wood bison (Bison bison athabascae), in the absence of a gold standard, by using Bayesian analysis, and then used those estimates to forecast the performance of a pairwise combination of tests in parallel. In 1998-99, 212 wood bison from Wood Buffalo National Park (Canada) were tested for M. bovis infection using CFT and two serologic tests (FPA and RT). The sensitivity and specificity of each test were estimated using a three-test, one-population, Bayesian model allowing for conditional dependence between FPA and RT. The sensitivity and specificity of the combination of CFT and each serologic test in parallel were calculated assuming conditional independence. The test performance estimates were influenced by the prior values chosen. However, the rank of tests and combinations of tests based on those estimates remained constant. The CFT was the most sensitive test and the FPA was the least sensitive, whereas RT was the most specific test and CFT was the least specific. In conclusion, given the fact that gold standards for the detection of M. bovis are imperfect and difficult to obtain in the field, Bayesian analysis holds promise as a tool to rank tests and combinations of tests based on their performance. Combining a skin test with an animal-side serologic test, such as RT, increases sensitivity in the detection of M. bovis and is a good approach to enhance disease eradication or control in wild bison.
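For reference, under the conditional independence assumption mentioned above, the sensitivity and specificity of two tests interpreted in parallel (an animal is called positive if either test is positive) combine as in the small sketch below; the numeric inputs are placeholders, not the Bayesian estimates from the study.

```python
def parallel_combination(se1, sp1, se2, sp2):
    """Combined accuracy of two conditionally independent tests read in parallel."""
    se_parallel = 1.0 - (1.0 - se1) * (1.0 - se2)   # a case is missed only if both tests miss it
    sp_parallel = sp1 * sp2                          # a negative call requires both tests negative
    return se_parallel, sp_parallel

# Placeholder values for a skin test plus an animal-side serologic test.
se, sp = parallel_combination(se1=0.80, sp1=0.70, se2=0.65, sp2=0.95)
print(f"parallel sensitivity: {se:.2f}, parallel specificity: {sp:.2f}")
```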
Improved Saturated Hydraulic Conductivity Pedotransfer Functions Using Machine Learning Methods
NASA Astrophysics Data System (ADS)
Araya, S. N.; Ghezzehei, T. A.
2017-12-01
Saturated hydraulic conductivity (Ks) is one of the fundamental hydraulic properties of soils. Its measurement, however, is cumbersome, so pedotransfer functions (PTFs) are often used to estimate it. Despite considerable progress over the years, generic PTFs for hydraulic conductivity generally do not perform well. We develop significantly improved PTFs by applying state-of-the-art machine learning techniques, coupled with high-performance computing, to a large database of over 20,000 soils (the USKSAT and Florida Soil Characterization databases). We compared the performance of four machine learning algorithms (k-nearest neighbors, gradient boosted model, support vector machine, and relevance vector machine) and evaluated the relative importance of several soil properties in explaining Ks. An attempt is also made to better account for soil structural properties; we evaluated the importance of variables derived from transformations of soil water retention characteristics and other soil properties. The gradient boosted models gave the best performance, with root mean square errors less than 0.7 and mean errors on the order of 0.01 on a log scale of Ks [cm/h]. The effective particle size, D10, was found to be the single most important predictor. Other important predictors included clay percentage, bulk density, organic carbon percentage, coefficient of uniformity and values derived from water retention characteristics. Model performance was consistently better for Ks values greater than 10 cm/h. This study maximizes the extraction of information from a large database to develop generic machine-learning-based PTFs to estimate Ks, and evaluates the importance of various soil properties and their transformations in explaining Ks.
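A minimal sketch of a gradient-boosted pedotransfer function is shown below using scikit-learn on synthetic data; the predictors, the synthetic relation, the hyperparameters and the USKSAT/Florida databases are stand-ins, not those of the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic soil properties and a hypothetical log10(Ks) relation for illustration only.
rng = np.random.default_rng(0)
n = 5000
d10 = rng.lognormal(-3, 1, n)            # effective particle size D10 [mm]
clay = rng.uniform(0, 60, n)             # clay percentage
bulk_density = rng.normal(1.4, 0.15, n)  # bulk density [g/cm^3]
X = np.column_stack([d10, clay, bulk_density])
log_ks = 2.0 + 1.2 * np.log10(d10) - 0.02 * clay - 0.8 * bulk_density + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_ks, test_size=0.2, random_state=1)
gbm = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=3)
gbm.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, gbm.predict(X_te)) ** 0.5
print("RMSE on log10(Ks):", round(rmse, 2))
print("feature importances (D10, clay, bulk density):", np.round(gbm.feature_importances_, 2))
```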
SUBTLEX-CH: Chinese Word and Character Frequencies Based on Film Subtitles
Cai, Qing; Brysbaert, Marc
2010-01-01
Background Word frequency is the most important variable in language research. However, despite the growing interest in the Chinese language, there are only a few sources of word frequency measures available to researchers, and the quality is less than what researchers in other languages are used to. Methodology Following recent work by New, Brysbaert, and colleagues in English, French and Dutch, we assembled a database of word and character frequencies based on a corpus of film and television subtitles (46.8 million characters, 33.5 million words). In line with what has been found in the other languages, the new word and character frequencies explain significantly more of the variance in Chinese word naming and lexical decision performance than measures based on written texts. Conclusions Our results confirm that word frequencies based on subtitles are a good estimate of daily language exposure and capture much of the variance in word processing efficiency. In addition, our database is the first to include information about the contextual diversity of the words and to provide good frequency estimates for multi-character words and the different syntactic roles in which the words are used. The word frequencies are freely available for research purposes. PMID:20532192
NASA Technical Reports Server (NTRS)
Komarevskiy, Nikolay; Shklover, Valery; Braginsky, Leonid; Hafner, Christian; Lawson, John W.
2012-01-01
During high-velocity atmospheric entries, space vehicles can be exposed to strong electromagnetic radiation from ionized gas in the shock layer. Glassy carbon (GC) and silicon carbide (SiC) are candidate thermal protection materials due to their high melting points and their good thermal and mechanical properties. Based on data from shock tube experiments, a significant fraction of radiation at hypersonic entry conditions is in the frequency range from 215 to 415 THz. We propose and analyze SiC and GC photonic structures to increase the reflection of radiation in that range. For this purpose, we performed numerical optimizations of various structures using an evolutionary strategy. Among the considered structures are layered, porous, woodpile, inverse opal and guided-mode resonance structures. In order to estimate the impact of fabrication inaccuracies, the sensitivity of the reflectivity to structural imperfections is analyzed. We estimate that the reflectivity of GC photonic structures is limited to 38% in the aforementioned range, due to material absorption. However, GC can still be effective for photonic reflection of individual strong spectral lines. SiC, on the other hand, can be used to design a good reflector for the entire frequency range.
Komarevskiy, Nikolay; Shklover, Valery; Braginsky, Leonid; Hafner, Christian; Lawson, John
2012-06-18
During high-velocity atmospheric entries, space vehicles can be exposed to strong electromagnetic radiation from ionized gas in the shock layer. Glassy carbon (GC) and silicon carbide (SiC) are candidate thermal protection materials due to their high melting points and their good thermal and mechanical properties. Based on data from shock tube experiments, a significant fraction of radiation at hypersonic entry conditions is in the frequency range from 215 to 415 THz. We propose and analyze SiC and GC photonic structures to increase the reflection of radiation in that range. For this purpose, we performed numerical optimizations of various structures using an evolutionary strategy. Among the considered structures are layered, porous, woodpile, inverse opal and guided-mode resonance structures. In order to estimate the impact of fabrication inaccuracies, the sensitivity of the reflectivity to structural imperfections is analyzed. We estimate that the reflectivity of GC photonic structures is limited to 38% in the aforementioned range, due to material absorption. However, GC can still be effective for photonic reflection of individual strong spectral lines. SiC, on the other hand, can be used to design a good reflector for the entire frequency range.
Measurements of primary cosmic-ray hydrogen and helium by the WiZard collaboration
NASA Astrophysics Data System (ADS)
Circella, M.; Ambriola, M.; Barbiellini, G.; Bartalucci, S.; Bellotti, R.; Bergström, D.; Bidoli, V.; Boezio, M.; Bonvicini, V.; Bravar, U.; Cafagna, F.; Carlson, P.; Casolino, M.; Ciacio, F.; Circella, M.; De Marzo, C.; De Pascale, M. P.; Finetti, N.; Francke, T.; Grimani, C.; Grinstein, S.; Hof, M.; Kremer, J.; Menn, W.; Mitchell, J. W.; Morselli, A.; Ormes, J. F.; Papini, P.; Piccardi, S.; Picozza, P.; Ricci, M.; Schiavon, P.; Simon, M.; Sparvoli, R.; Spillantini, P.; Stephens, S. A.; Stochaj, S. J.; Streitmatter, R. E.; Suffert, M.; Vacchi, A.; Weber, N.; Zampa, N.
We present the measurements of primary protons and helium nuclei performed by the WiZard Collaboration in different balloon-borne campaigns. A superconducting magnet spectrometer was used in these experiments together with detectors for particle identification. These combinations of detectors made it possible to perform accurate particle measurements over a large energy interval (up to 200 GV for protons). We focus in particular on the results from the MASS91 and CAPRICE94 experiments: we find very good agreement between these two sets of measurements, and also with other recent results. All these results suggest that the normalization of primary cosmic rays may be significantly lower than previously estimated.
System Wide Joint Position Sensor Fault Tolerance in Robot Systems Using Cartesian Accelerometers
NASA Technical Reports Server (NTRS)
Aldridge, Hal A.; Juang, Jer-Nan
1997-01-01
Joint position sensors are necessary for most robot control systems. A single position sensor failure in a typical robot system can greatly degrade performance. This paper presents a method to obtain position information from Cartesian accelerometers without integration. Depending on the number and location of the accelerometers, the proposed system can tolerate the loss of multiple position sensors. A solution technique suitable for real-time implementation is presented. Simulations were conducted using 5 triaxial accelerometers to recover from the loss of up to 4 joint position sensors on a 7-degree-of-freedom robot moving in general three-dimensional space. The simulations show good estimation performance using non-ideal accelerometer measurements.
Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks
NASA Astrophysics Data System (ADS)
Karpov, Kirill; Fedotova, Irina; Siemens, Eduard
2017-07-01
In this paper we present a measurement study to characterize the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and Network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM have been chosen as commonly used examples of hypervisor- and host-based models. Based on statistical parameters of retrieved distributions, our results provide a very good estimation of timing behavior. It is essential for real-time and performance-critical applications such as image processing or real-time control.
NASA Technical Reports Server (NTRS)
Jones, Harry
2003-01-01
The ALS project plan goals are reducing cost, improving performance, and achieving flight readiness. ALS selects projects to advance the mission readiness of low cost, high performance technologies. The role of metrics is to help select good projects and report progress. The Equivalent Mass (EM) of a system is the sum of the estimated mass of the hardware, of its required materials and spares, and of the pressurized volume, power supply, and cooling system needed to support the hardware in space. EM is the total payload launch mass needed to provide and support a system. EM is directly proportional to the launch cost.
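A small worked example of the equivalent-mass bookkeeping described above follows; the equivalency factors and subsystem numbers are placeholders chosen for illustration, not official ALS values.

```python
def equivalent_mass(hw_mass_kg, spares_kg, volume_m3, power_kw, cooling_kw,
                    v_eq=66.7, p_eq=237.0, c_eq=60.0):
    """EM = hardware + spares + volume, power and cooling converted to launch mass [kg].
    The equivalency factors v_eq, p_eq, c_eq are placeholders, not official values."""
    return (hw_mass_kg + spares_kg
            + volume_m3 * v_eq        # kg of structure/shell per m^3 of pressurized volume
            + power_kw * p_eq         # kg of power system per kW supplied
            + cooling_kw * c_eq)      # kg of thermal control per kW of heat rejected

# Placeholder subsystem: 120 kg of hardware, 15 kg of spares, 1.5 m^3, 0.8 kW power and cooling.
em = equivalent_mass(hw_mass_kg=120, spares_kg=15, volume_m3=1.5, power_kw=0.8, cooling_kw=0.8)
print("equivalent mass:", round(em, 1), "kg")
```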
Inlet-engine matching for SCAR including application of a bicone variable geometry inlet
NASA Technical Reports Server (NTRS)
Wasserbauer, J. F.; Gerstenmaier, W. H.
1978-01-01
Variable cycle engines (VCE) designed for Mach 2.32 can have transonic airflow requirements as high as 1.6 times the cruise airflow. This is a formidable requirement for conventional, high-performance, axisymmetric, translating-centerbody mixed-compression inlets. An alternate inlet is defined in which the second cone of a two-cone centerbody collapses to the initial cone angle to provide a large off-design airflow capability, and modest centerbody translation is incorporated to minimize spillage drag. Estimates of transonic spillage drag are competitive with those of conventional translating-centerbody inlets. The inlet's cruise performance exhibits very low bleed requirements with good recovery and high angle-of-attack capability.
SWAT system performance predictions
NASA Astrophysics Data System (ADS)
Parenti, Ronald R.; Sasiela, Richard J.
1993-03-01
In the next phase of Lincoln Laboratory's SWAT (Short-Wavelength Adaptive Techniques) program, the performance of a 241-actuator adaptive-optics system will be measured using a variety of synthetic-beacon geometries. As an aid in this experimental investigation, a detailed set of theoretical predictions has also been assembled. The computational tools that have been applied in this study include a numerical approach in which Monte-Carlo ray-trace simulations of accumulated phase error are developed, and an analytical analysis of the expected system behavior. This report describes the basis of these two computational techniques and compares their estimates of overall system performance. Although their regions of applicability tend to be complementary rather than redundant, good agreement is usually obtained when both sets of results can be derived for the same engagement scenario.
NASA Astrophysics Data System (ADS)
Soriano-Disla, J. M.; Speir, T. W.; Gómez, I.; Clucas, L. M.; McLaren, R. G.; Navarro-Pedreño, J.
2009-04-01
The accumulation of heavy metals in soil from different sources (atmospheric deposition, agricultural practices, urban-industrial activities, etc.) is of great environmental concern because of metal persistence and toxicity. There is a consensus in the literature that estimating the bioavailable fraction of heavy metals in soil is a better tool for determining potential risks from soil contamination than the total contents. However, controversy exists around the definition of an accurate and universal bioavailability estimator that is useful for soils with different properties, since many factors control this parameter. Thus, the main objective of this work was to compare the effectiveness of different methods for predicting plant uptake of heavy metals from soils with different properties and heavy metal contents. Thirty contrasting soils from New Zealand and Spain were selected. Apart from the analysis of basic soil properties, different methods for estimating heavy metal bioavailability were applied: total heavy metals, DTPA-extractable soil metals, the diffusive gradient technique (DGT), and total heavy metals in soil solution. A bioassay using wheat (Triticum aestivum) was carried out in these soils in a constant environment room for 25 days (12-hour photoperiod; day and night temperatures of 20°C and 15°C, respectively). After this time, the plants were divided into roots and shoots and the heavy metal content of each part was analysed. Simple correlations were computed between the phytoavailable contents and the bioavailability estimated by the different methods. As expected, higher heavy metal concentrations were found in roots than in shoots. Comparing the theoretical available heavy metals estimated by the different methods with the root and shoot uptake, better correlations were found with the root contents, so the discussion is based on comparisons with uptake by this part of the plant. According to the results, DTPA appeared to be the extractant that best estimated plant uptake (except for Cd, which was not estimated well by any of the methods used). Similarly good results were found using the total heavy metal contents, except for Ni and Zn. DGT also worked well, but its use for Pb is not advisable, since many values were below the detection limit. The heavy metals in soil solution were less successful at predicting plant uptake. In general, the good results obtained for Cr and Zn seemed to be influenced by a few high values found in some soils. Taking this into account, the soils with very high levels of these heavy metals were removed from the analysis and the simple correlations were recomputed for the remaining soils, which cover a lower range of these metals. For Cr, four soils were removed (soils with ten times or more total Cr than the average of the other 26 samples), and three for Zn (soils with two times or more total Zn than the average of the other 27 samples). After this, the correlations with total heavy metals and DTPA became very weak, with heavy metals in soil solution (for Cr) and DGT (for Zn) becoming the methods that best estimated plant uptake of these metals. This work has demonstrated the importance of carefully examining the data distribution, since good results can be driven by just a few samples with high values.
In conclusion, DTPA and total heavy metals followed similar patterns and were good predictors of Cu and Pb uptake, and were useful for distinguishing between low and high values for Cr and Zn. On the other hand, DGT and heavy metals in soil solution showed similar effectiveness in estimating Cu, Ni, Pb, Zn and Cr, but DGT generally presented higher correlation levels (except for Cr). Taking all of the results together, the most robust and efficient estimator for all metals studied (except Cd, which could not be predicted by any of the methods used) was the DGT. Acknowledgements: Jose M. Soriano-Disla gratefully acknowledges the Spanish Ministry of Innovation and Culture for a research fellowship (AP2005-0320).
A fast least-squares algorithm for population inference
2013-01-01
Background Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual’s genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. Results We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. Conclusions The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate. PMID:23343408
A fast least-squares algorithm for population inference.
Parry, R Mitchell; Wang, May D
2013-01-23
Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual's genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate.
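A much-simplified sketch of the least-squares idea follows: with the population allele frequencies P held fixed, one individual's admixture proportions Q can be recovered by non-negative least squares followed by normalization. The paper's full algorithm alternates updates of P and Q under the binomial genotype model; that part, and the handling of the degree of admixture, are not shown, and all data here are simulated.

```python
import numpy as np
from scipy.optimize import nnls

# Simulated allele frequencies for 3 ancestral populations at 1000 loci.
rng = np.random.default_rng(0)
n_loci, n_pops = 1000, 3
P = rng.uniform(0.05, 0.95, (n_loci, n_pops))
q_true = np.array([0.6, 0.3, 0.1])                   # one individual's true admixture
expected = P @ q_true                                # expected allele dosage / 2
genotype = rng.binomial(2, expected) / 2.0           # observed genotype scaled to [0, 1]

q_hat, _ = nnls(P, genotype)                         # non-negative least squares for Q
q_hat /= q_hat.sum()                                 # proportions must sum to one
print("estimated admixture:", np.round(q_hat, 2))    # close to [0.6, 0.3, 0.1]
```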
Online Updating of Statistical Inference in the Big Data Setting.
Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui
2016-01-01
We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
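A minimal sketch of the online-updating idea for a linear model follows: the sufficient statistics X'X and X'y are accumulated block by block, so historical data never need to be stored, and the final estimate coincides with ordinary least squares on the pooled data. The paper's predictive residual tests and estimating-equation extensions are not shown.

```python
import numpy as np

# Simulated data stream: 100 blocks of 500 observations from a known linear model.
rng = np.random.default_rng(0)
p = 4
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
XtX = np.zeros((p, p))
Xty = np.zeros(p)

for _ in range(100):
    X = rng.standard_normal((500, p))        # a new block arriving in the stream
    y = X @ beta_true + rng.standard_normal(500)
    XtX += X.T @ X                           # update sufficient statistics only
    Xty += X.T @ y                           # the raw block can now be discarded

beta_hat = np.linalg.solve(XtX, Xty)         # identical to OLS on the pooled data
print("online OLS estimate:", np.round(beta_hat, 3))
```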
Estimating outcomes in newborn infants using fuzzy logic
Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando C.
2014-01-01
OBJECTIVE: To build a linguistic model using the properties of fuzzy logic to estimate the risk of death of neonates admitted to a Neonatal Intensive Care Unit. METHODS: Computational model using fuzzy logic. The input variables of the model were birth weight, gestational age, 5th-minute Apgar score and inspired fraction of oxygen in newborn infants admitted to a Neonatal Intensive Care Unit in Taubaté, Southeast Brazil. The output variable was the risk of death, estimated as a percentage. Three membership functions each were built for birth weight, gestational age and the 5th-minute Apgar score, and two for the inspired fraction of oxygen; the risk had five membership functions. The model was developed using Mamdani inference in MATLAB software. The model values were compared with those provided by experts, and performance was assessed by ROC curve. RESULTS: 100 newborns were included, and eight of them died. The model estimated an average possibility of death of 49.7±29.3%, and the possibility of hospital discharge was 24±17.5%. These values differ when compared by Student's t-test (p<0.001). The correlation test revealed r=0.80 and the performance of the model was 81.9%. CONCLUSIONS: This predictive, non-invasive and low-cost model showed good accuracy and can be applied in neonatal care, given the ease of its use. PMID:25119746
A study comparison of two system model performance in estimated lifted index over Indonesia.
NASA Astrophysics Data System (ADS)
lestari, Juliana tri; Wandala, Agie
2018-05-01
Lifted index (LI) is one of the atmospheric stability indices used for thunderstorm forecasting, and numerical weather prediction (NWP) models are essential for accurate forecasts today. This study compares two NWP models, the Weather Research and Forecasting (WRF) model and the Global Forecast System (GFS) model, in estimating LI at 20 locations over Indonesia, and verifies the results against observations. A Taylor diagram was used to compare the models' skill in terms of standard deviation, correlation coefficient and root mean square error (RMSE). The study uses data at 00.00 UTC and 12.00 UTC from mid-March to mid-April 2017. From the sample of LI distributions, both models tend to overestimate LI in almost all regions of Indonesia, while the WRF model captures the observed LI distribution pattern better than the GFS model. The verification shows that both the WRF and GFS models have a weak relationship with observations, except at the Eltari meteorological station, where the correlation coefficient reaches almost 0.6 with a low RMSE. Overall, the WRF model performs better than the GFS model. This study suggests that LI estimated by the WRF model can provide good performance for thunderstorm forecasting over Indonesia in the future; however, the insufficient relationship between model output and observations at certain locations needs further investigation.
NASA Astrophysics Data System (ADS)
Uprety, Bibhisha; Kim, Sungwon; Mathews, V. John; Adams, Daniel O.
2015-03-01
Acoustic Emission (AE) based Structural Health Monitoring (SHM) is of great interest for detecting impact damage in composite structures. Within the aerospace industry, the need to detect and locate these events, even when no visible damage is present, is important from both the maintenance and design perspectives. In this investigation, four commercially available piezoelectric sensors were evaluated for use in an AE-based SHM system. Of particular interest was comparing the acoustic response of the candidate piezoelectric sensors for impact location estimation as well as for classifying the damage resulting from impact in fiber-reinforced composite structures. Sensor assessment was performed based on response signal characterization and performance in active testing at 300 kHz and in steel-ball drop testing using both aluminum and carbon/epoxy composite plates. Wave mode velocities calculated from the measured arrival times were found to be in good agreement with predictions obtained using both the Disperse code and finite element analysis. Differences in the relative strength of the received wave modes, the overall signal strengths, and the signal-to-noise ratios were observed through both active testing and passive steel-ball drop testing. Further comparative work focuses on assessing AE sensor performance for use in impact location estimation algorithms as well as for detecting and classifying impact damage in composite structures.
Ambler, Gareth; Omar, Rumana Z; Royston, Patrick
2007-06-01
Risk models that aim to predict the future course and outcome of disease processes are increasingly used in health research, and it is important that they are accurate and reliable. Most of these risk models are fitted using routinely collected data in hospitals or general practices. Clinical outcomes such as short-term mortality will be near-complete, but many of the predictors may have missing values. A common approach to dealing with this is to perform a complete-case analysis. However, this may lead to overfitted models and biased estimates if entire patient subgroups are excluded. The aim of this paper is to investigate a number of methods for imputing missing data to evaluate their effect on risk model estimation and the reliability of the predictions. Multiple imputation methods, including hot-decking and multiple imputation by chained equations (MICE), were investigated along with several single imputation methods. A large national cardiac surgery database was used to create simulated yet realistic datasets. The results suggest that complete-case analysis may produce unreliable risk predictions and should be avoided. Conditional mean imputation performed well in our scenario, but may not be appropriate if using variable selection methods. MICE was amongst the best performing multiple imputation methods with regard to the quality of the predictions. Additionally, it produced the least biased estimates, with good coverage, and hence is recommended for use in practice.
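As an illustration of chained-equation imputation in practice, the sketch below uses scikit-learn's IterativeImputer (a MICE-style imputer which, as used here, produces a single completed dataset rather than multiple imputations) on synthetic data before fitting a simple risk model; the variables, missingness pattern and model are hypothetical and do not reproduce the cardiac surgery database or the study's risk model.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (enables the import below)
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

# Synthetic predictors and outcome for illustration only.
rng = np.random.default_rng(0)
n = 2000
age = rng.normal(65, 10, n)
ef = rng.normal(55, 10, n)                         # ejection fraction (hypothetical predictor)
X = np.column_stack([age, ef])
p = 1 / (1 + np.exp(-(-8 + 0.08 * age - 0.03 * ef)))
y = rng.binomial(1, p)

X_miss = X.copy()
X_miss[rng.random((n, 2)) < 0.2] = np.nan          # make 20% of predictor values missing

X_imputed = IterativeImputer(max_iter=10, random_state=0).fit_transform(X_miss)
model = LogisticRegression().fit(X_imputed, y)
print("coefficients after imputation:", np.round(model.coef_, 3))
```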
Population genetics of autopolyploids under a mixed mating model and the estimation of selfing rate.
Hardy, Olivier J
2016-01-01
Nowadays, the population genetics analysis of autopolyploid species faces many difficulties due to (i) the limited development of population genetics tools under polysomic inheritance, (ii) difficulties in assessing allelic dosage when genotyping individuals and (iii) a form of inbreeding resulting from the mechanism of 'double reduction'. Consequently, few data analysis computer programs are applicable to autopolyploids. To help bridge this gap, this article first derives theoretical expectations for the inbreeding and identity disequilibrium coefficients under polysomic inheritance in a mixed mating model. Moment estimators of these coefficients are proposed when exact genotypes or just marker phenotypes (i.e. allelic dosage unknown) are available. This led to the development of estimators of the selfing rate based on adult genotypes or phenotypes and applicable to any even ploidy level. Their statistical performance and robustness were assessed by numerical simulations. Contrary to inbreeding-based estimators, the identity disequilibrium-based estimator using phenotypes is robust (absolute bias generally < 0.05), even in the presence of double reduction, null alleles or biparental inbreeding due to isolation by distance. A fairly good precision of the selfing rate estimates (root mean squared error < 0.1) is already achievable using a sample of 30-50 individuals phenotyped at 10 loci bearing 5-10 alleles each, conditions reachable using microsatellite markers. Diallelic markers (e.g. SNPs) can also perform satisfactorily in diploids and tetraploids, but more polymorphic markers are necessary for higher ploidy levels. The method is implemented in the software SPAGeDi and should help reduce the lack of population genetics tools applicable to autopolyploids. © 2015 John Wiley & Sons Ltd.
Stature estimation equations for South Asian skeletons based on DXA scans of contemporary adults.
Pomeroy, Emma; Mushrif-Tripathy, Veena; Wells, Jonathan C K; Kulkarni, Bharati; Kinra, Sanjay; Stock, Jay T
2018-05-03
Stature estimation from the skeleton is a classic anthropological problem, and recent years have seen the proliferation of population-specific regression equations. Many rely on the anatomical reconstruction of stature from archaeological skeletons to derive regression equations based on long bone lengths, but this requires a collection with very good preservation. In some regions, for example, South Asia, typical environmental conditions preclude the sufficient preservation of skeletal remains. Large-scale epidemiological studies that include medical imaging of the skeleton by techniques such as dual-energy X-ray absorptiometry (DXA) offer new potential datasets for developing such equations. We derived estimation equations based on known height and bone lengths measured from DXA scans from the Andhra Pradesh Children and Parents Study (Hyderabad, India). Given debates on the most appropriate regression model to use, multiple methods were compared, and the performance of the equations was tested on a published skeletal dataset of individuals with known stature. The equations have standard errors of estimates and prediction errors similar to those derived using anatomical reconstruction or from cadaveric datasets. As measured by the number of significant differences between true and estimated stature, and the prediction errors, the new equations perform as well as, and generally better than, published equations commonly used on South Asian skeletons or based on Indian cadaveric datasets. This study demonstrates the utility of DXA scans as a data source for developing stature estimation equations and offers a new set of equations for use with South Asian datasets. © 2018 Wiley Periodicals, Inc.
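At its simplest, such a stature equation is a least-squares fit of known stature on a bone length, reported with its standard error of the estimate (SEE). The sketch below is a generic illustration of that step only; the variable names are hypothetical, and the paper itself compares several regression models rather than prescribing ordinary least squares.

    import numpy as np

    def stature_equation(bone_mm, stature_cm):
        """Fit stature = a + b * bone length by ordinary least squares and report
        the standard error of the estimate (SEE), as in classical stature equations."""
        b, a = np.polyfit(bone_mm, stature_cm, 1)       # slope, intercept
        pred = a + b * np.asarray(bone_mm)
        resid = np.asarray(stature_cm) - pred
        see = np.sqrt(np.sum(resid**2) / (len(resid) - 2))   # n - 2 d.o.f. for a and b
        return a, b, see

    # usage (hypothetical arrays): a, b, see = stature_equation(femur_len_mm, known_height_cm)
    # estimated stature for a new femur length of 452 mm: a + b * 452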
NASA Astrophysics Data System (ADS)
Hosemann, P.; Swadener, J. G.; Kiener, D.; Was, G. S.; Maloy, S. A.; Li, N.
2008-03-01
The superior properties of ferritic/martensitic steels in a radiation environment (low swelling, low activation under irradiation and good corrosion resistance) make them good candidates for structural parts in future reactors and spallation sources. While it cannot substitute for true reactor experiments, irradiation by charged particles from accelerators can reduce the number of reactor experiments and support fundamental research for a better understanding of radiation effects in materials. Given the nature of low energy accelerator experiments, only a small volume of material can be uniformly irradiated. Micro- and nanoscale post-irradiation tests thus have to be performed. We show here that nanoindentation and micro-compression testing on T91 and HT-9 stainless steel before and after ion irradiation are useful methods to evaluate radiation-induced hardening.
Levey, Janet A
2017-08-01
Nurse educators might be unknowingly excluding learners secondary to teaching practices. Universal design for instruction (UDI) prepares and delivers accessible content and learning environments for diverse learners; however, it is not well known in nursing education. The aim of the study was to examine the psychometric properties of the Inclusive Teaching Strategies in Nursing Education (ITSinNE) 55-item instrument. Confirmatory factor analysis was performed on a sample of 311 educators in prelicensure programs. The ITSinNE scales had good to adequate estimates of reliability. The exogenous model fit the sample and model-implied covariance matrix; however, the endogenous model was not a good fit. Further instrument development is required. Measuring factors influencing nurse educators' willingness to adopt UDI will enable intervention research to enhance professional development fostering content and environmental access for all learners.
WTA estimates using the method of paired comparison: tests of robustness
Patricia A. Champ; John B. Loomis
1998-01-01
The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...
Tarantilis, Filippos; Athanasakis, Kostas; Zavras, Dimitris; Vozikis, Athanassios; Kyriopoulos, Ioannis
2015-01-05
During the past decades, smoking prevalence in Greece was estimated to be near or over 40%. Following a sharp fall in cigarette consumption, as shown in current data, our objective is to assess smokers' sensitivity to cigarette price and consumer income changes as well as to project health benefits of an additional tax increase. Cigarette consumption was considered as the dependent variable, with Weighted Average Price as a proxy for cigarette price, gross domestic product as a proxy for consumers' income and dummy variables reflecting smoking restrictions and antismoking campaigns. Values were converted to natural logarithms and regression was performed. Then, four scenarios of tax increase were distinguished in order to calculate potential health benefits. Short-run price elasticity is estimated at -0.441 and short-run income elasticity is estimated at 1.040. Antismoking campaigns were found to have a statistically significant impact on consumption. Results indicate that, depending on the level of tax increase, annual per capita consumption could fall by at least 209.83 cigarettes; tax revenue could rise by more than €0.74 billion, while smokers could be reduced by up to 530 568 and at least 465 smoking-related deaths could be averted. Price elasticity estimates are similar to previous studies in Greece, while income elasticity estimates are far greater. With cigarettes regarded as a luxury good, a great opportunity is presented for decision-makers to counter smoking. Increased taxation, along with focused antismoking campaigns, law enforcement (to ensure compliance with smoking bans) and intensive control for smuggling could deal a massive blow to the tobacco epidemic in Greece. Published by the BMJ Publishing Group Limited.
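The elasticities reported here come from a log-log demand regression, in which the fitted coefficients on log price and log income are read directly as elasticities. A minimal sketch of that setup follows; the synthetic series and coefficient values are illustrative assumptions constructed only to show the mechanics, not the study's data.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 30                                    # e.g., annual observations (synthetic)
    P = rng.uniform(2.0, 4.0, n)              # weighted average cigarette price
    Y = rng.uniform(15000, 25000, n)          # GDP per capita as an income proxy
    ban = (np.arange(n) > 20).astype(float)   # dummy for smoking restrictions
    camp = (np.arange(n) > 15).astype(float)  # dummy for antismoking campaigns
    Q = np.exp(5.0 - 0.44 * np.log(P) + 1.04 * np.log(Y) - 0.1 * ban - 0.1 * camp
               + rng.normal(0, 0.02, n))      # synthetic per-capita consumption

    # Log-log demand regression: coefficients on ln(P) and ln(Y) are the elasticities.
    X = np.column_stack([np.ones(n), np.log(P), np.log(Y), ban, camp])
    beta, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
    price_elasticity, income_elasticity = beta[1], beta[2]   # here about -0.44 and 1.04 by construction

    # A tax rise lifting price by 10% then implies roughly 10% * price_elasticity,
    # i.e. about a 4.4% fall in consumption, before other effects.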
An accurate behavioral model for single-photon avalanche diode statistical performance simulation
NASA Astrophysics Data System (ADS)
Xu, Yue; Zhao, Tingchen; Li, Ding
2018-01-01
An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and the behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good agreement with the test data, validating the high simulation accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuentes, H.R.; Polzer, W.L.; Essington, E.H.
1989-11-01
Lithium (Li+) was introduced as lithium bromide (LiBr), as a retarded tracer for experiments in the C-wells complex at Yucca Mountain, Nevada Test Site, Nevada. The objective was to evaluate the potential of lithium to sorb predominantly by physical forces. Lithium was selected as a candidate tracer on the basis of its high solubility, good chemical and biological stability, and relatively low sorptivity; its lack of bioaccumulation and exclusion as a priority pollutant in pertinent federal environmental regulations; its good analytical detectability and low natural background concentrations; and its low cost. Laboratory experiments were performed with suspensions of Prow Pass cuttings from drill hole UE-25p#1 at depths between 549 and 594 m in J-13 water at a pH of approximately 8 and in the temperature range of 25°C to 45°C. Batch equilibrium and kinetics experiments were performed; estimated thermodynamic constants, relative behavior between adsorption and desorption, and potentiometric studies provided information to infer the physical nature of lithium sorption.
A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1996-02-01
The authors have developed a method based on wavelet transforms (WT) to efficiently detect sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to get upper limits to the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background+sources were used to test the overall algorithm performances, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to have good performance even in cases of extended sources and crowded fields.
NASA Technical Reports Server (NTRS)
Chatterji, Gano
2011-01-01
Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. An error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.
Predicting vertical jump height from bar velocity.
García-Ramos, Amador; Štirn, Igor; Padial, Paulino; Argüelles-Cienfuegos, Javier; De la Fuente, Blanca; Strojnik, Vojko; Feriche, Belén
2015-06-01
The objective of the study was to assess the use of maximum (Vmax) and final propulsive phase (FPV) bar velocity to predict jump height in the weighted jump squat. FPV was defined as the velocity reached just before bar acceleration was lower than gravity (-9.81 m·s(-2)). Vertical jump height was calculated from the take-off velocity (Vtake-off) provided by a force platform. Thirty swimmers belonging to the National Slovenian swimming team performed a jump squat incremental loading test, lifting 25%, 50%, 75% and 100% of body weight in a Smith machine. Jump performance was simultaneously monitored using an AMTI portable force platform and a linear velocity transducer attached to the barbell. Simple linear regression was used to estimate jump height from the Vmax and FPV recorded by the linear velocity transducer. Vmax (y = 16.577x - 16.384) was able to explain 93% of jump height variance with a standard error of the estimate of 1.47 cm. FPV (y = 12.828x - 6.504) was able to explain 91% of jump height variance with a standard error of the estimate of 1.66 cm. Although both variables proved to be good predictors, heteroscedasticity in the differences between FPV and Vtake-off was observed (r(2) = 0.307), while the differences between Vmax and Vtake-off were homogeneously distributed (r(2) = 0.071). These results suggest that Vmax is a valid tool for estimating vertical jump height in a loaded jump squat test performed in a Smith machine. Key points: Vertical jump height in the loaded jump squat can be estimated with acceptable precision from the maximum bar velocity recorded by a linear velocity transducer. The relationship between the point at which bar acceleration is less than -9.81 m·s(-2) and the real take-off is affected by the velocity of movement. Mean propulsive velocity recorded by a linear velocity transducer does not appear to be optimal to monitor ballistic exercise performance.
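As a hypothetical worked example of the reported regression equations (the input velocities below are illustrative, not taken from the study's data):

    vmax = 2.0                                # maximum bar velocity, m/s (illustrative)
    height_vmax = 16.577 * vmax - 16.384      # = 16.77 cm from the Vmax equation
    fpv = 1.8                                 # final propulsive phase velocity, m/s (illustrative)
    height_fpv = 12.828 * fpv - 6.504         # = 16.59 cm from the FPV equation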
Predicting Vertical Jump Height from Bar Velocity
García-Ramos, Amador; Štirn, Igor; Padial, Paulino; Argüelles-Cienfuegos, Javier; De la Fuente, Blanca; Strojnik, Vojko; Feriche, Belén
2015-01-01
The objective of the study was to assess the use of maximum (Vmax) and final propulsive phase (FPV) bar velocity to predict jump height in the weighted jump squat. FPV was defined as the velocity reached just before bar acceleration was lower than gravity (-9.81 m·s-2). Vertical jump height was calculated from the take-off velocity (Vtake-off) provided by a force platform. Thirty swimmers belonging to the National Slovenian swimming team performed a jump squat incremental loading test, lifting 25%, 50%, 75% and 100% of body weight in a Smith machine. Jump performance was simultaneously monitored using an AMTI portable force platform and a linear velocity transducer attached to the barbell. Simple linear regression was used to estimate jump height from the Vmax and FPV recorded by the linear velocity transducer. Vmax (y = 16.577x - 16.384) was able to explain 93% of jump height variance with a standard error of the estimate of 1.47 cm. FPV (y = 12.828x - 6.504) was able to explain 91% of jump height variance with a standard error of the estimate of 1.66 cm. Although both variables proved to be good predictors, heteroscedasticity in the differences between FPV and Vtake-off was observed (r2 = 0.307), while the differences between Vmax and Vtake-off were homogeneously distributed (r2 = 0.071). These results suggest that Vmax is a valid tool for estimating vertical jump height in a loaded jump squat test performed in a Smith machine. Key points: Vertical jump height in the loaded jump squat can be estimated with acceptable precision from the maximum bar velocity recorded by a linear velocity transducer. The relationship between the point at which bar acceleration is less than -9.81 m·s-2 and the real take-off is affected by the velocity of movement. Mean propulsive velocity recorded by a linear velocity transducer does not appear to be optimal to monitor ballistic exercise performance. PMID:25983572
A generalized right truncated bivariate Poisson regression model with applications to health data.
Islam, M Ataharul; Chowdhury, Rafiqul I
2017-01-01
A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over or under dispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.
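For orientation, the univariate right-truncated Poisson probability mass function, the standard building block that such truncated regression models generalize to the bivariate, marginal-conditional setting, can be written (in LaTeX) as:

    P(Y = y \mid Y \le c) \;=\; \frac{e^{-\lambda}\lambda^{y}/y!}{\sum_{k=0}^{c} e^{-\lambda}\lambda^{k}/k!},
    \qquad y = 0, 1, \dots, c.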
A generalized right truncated bivariate Poisson regression model with applications to health data
Islam, M. Ataharul; Chowdhury, Rafiqul I.
2017-01-01
A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over or under dispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model. PMID:28586344
Robust adaptive uniform exact tracking control for uncertain Euler-Lagrange system
NASA Astrophysics Data System (ADS)
Yang, Yana; Hua, Changchun; Li, Junpeng; Guan, Xinping
2017-12-01
This paper offers a solution to the robust adaptive uniform exact tracking control problem for uncertain nonlinear Euler-Lagrange (EL) systems. An adaptive finite-time tracking control algorithm is designed by proposing a novel nonsingular integral terminal sliding-mode surface. Moreover, a new adaptive parameter tuning law is developed by making good use of the system tracking errors and the adaptive parameter estimation errors. Thus, both trajectory tracking and parameter estimation can be achieved simultaneously within a guaranteed time that can be adjusted arbitrarily according to practical demands. Additionally, the control result for the EL system proposed in this paper can easily be extended to high-order nonlinear systems. Finally, a test-bed 2-DOF robot arm is set up to demonstrate the performance of the new control algorithm.
NASA Astrophysics Data System (ADS)
Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba
2018-10-01
This paper is concerned with a backward heat conduction problem with a time-dependent thermal diffusivity factor in an infinite "strip". This problem is severely ill-posed because of the unbounded amplification of high-frequency components. A new regularization method based on the Meyer wavelet technique is developed to solve the considered problem. Using the Meyer wavelet technique, some new stable estimates are proposed of the Hölder and logarithmic types, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.
Estimating Dense Cardiac 3D Motion Using Sparse 2D Tagged MRI Cross-sections*
Ardekani, Siamak; Gunter, Geoffrey; Jain, Saurabh; Weiss, Robert G.; Miller, Michael I.; Younes, Laurent
2015-01-01
In this work, we describe a new method, an extension of the Large Deformation Diffeomorphic Metric Mapping to estimate three-dimensional deformation of tagged Magnetic Resonance Imaging Data. Our approach relies on performing non-rigid registration of tag planes that were constructed from set of initial reference short axis tag grids to a set of deformed tag curves. We validated our algorithm using in-vivo tagged images of normal mice. The mapping allows us to compute root mean square distance error between simulated tag curves in a set of long axis image planes and the acquired tag curves in the same plane. Average RMS error was 0.31±0.36(SD) mm, which is approximately 2.5 voxels, indicating good matching accuracy. PMID:25571140
NASA Astrophysics Data System (ADS)
Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun
2018-03-01
Ant Colony Optimization (ACO) is among the most widely used artificial intelligence algorithms at present. This study introduces the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP) and designs a vehicle routing optimization model based on ACO; a vehicle routing optimization simulation system was then developed in the C++ programming language, and sensitivity analyses, estimations and improvements of the three key parameters of ACO were carried out. The results indicate that the ACO algorithm designed in this paper can efficiently solve the rational planning and optimization of VRP, that the values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and has good robustness.
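A minimal sketch of the two ACO ingredients referred to above, the probabilistic transition rule and the pheromone update, is given below in Python. The parameters alpha, beta and the evaporation rate rho are the usual key ACO parameters; the implementation is only illustrative and is not the authors' C++ system.

    import numpy as np

    def choose_next(current, unvisited, tau, eta, alpha=1.0, beta=2.0, rng=None):
        """Standard ant transition rule: p_ij proportional to tau_ij^alpha * eta_ij^beta
        over the not-yet-visited customers (eta is typically 1/distance)."""
        rng = rng or np.random.default_rng()
        weights = (tau[current, unvisited] ** alpha) * (eta[current, unvisited] ** beta)
        probs = weights / weights.sum()
        return rng.choice(unvisited, p=probs)

    def update_pheromone(tau, tours, lengths, rho=0.5, Q=1.0):
        """Evaporate all trails, then deposit pheromone proportional to 1/tour length."""
        tau *= (1.0 - rho)                        # evaporation with rate rho
        for tour, L in zip(tours, lengths):
            for i, j in zip(tour[:-1], tour[1:]):
                tau[i, j] += Q / L
                tau[j, i] += Q / L
        return tau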
ASYMPTOTIC DISTRIBUTION OF ΔAUC, NRIs, AND IDI BASED ON THEORY OF U-STATISTICS
Demler, Olga V.; Pencina, Michael J.; Cook, Nancy R.; D’Agostino, Ralph B.
2017-01-01
The change in AUC (ΔAUC), the IDI, and NRI are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues we unite the ΔAUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ΔAUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ΔAUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ΔAUC, NRIs, or IDI. In the former case SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ΔAUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ΔAUC. PMID:28627112
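The connection to U-statistics is easiest to see for the AUC itself, which is a two-sample Mann-Whitney U-statistic: the proportion of case/non-case pairs that the model ranks concordantly. The sketch below is only an illustration of that identity with hypothetical score arrays, not the authors' standard-error machinery.

    import numpy as np

    def auc_u_statistic(scores_pos, scores_neg):
        """AUC written as the two-sample U-statistic (Mann-Whitney form):
        proportion of case/non-case pairs ranked concordantly, ties counted 1/2."""
        s1 = np.asarray(scores_pos)[:, None]
        s0 = np.asarray(scores_neg)[None, :]
        kernel = (s1 > s0) + 0.5 * (s1 == s0)
        return kernel.mean()

    # delta_auc = auc_u_statistic(p_new_pos, p_new_neg) - auc_u_statistic(p_old_pos, p_old_neg)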
Asymptotic distribution of ∆AUC, NRIs, and IDI based on theory of U-statistics.
Demler, Olga V; Pencina, Michael J; Cook, Nancy R; D'Agostino, Ralph B
2017-09-20
The change in area under the curve (∆AUC), the integrated discrimination improvement (IDI), and net reclassification index (NRI) are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues, we unite the ∆AUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ∆AUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ∆AUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ∆AUC, NRIs, or IDI. In the former case, SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ∆AUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ∆AUC. Copyright © 2017 John Wiley & Sons, Ltd.
Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi
2014-04-01
Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling within the interpolated values. To solve this problem, we propose a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared with average interpolation and nearest-neighbor interpolation, the proposed method replaces right-censored data with interval-censored data, which greatly improves the probability of the real data falling within the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results for numerical examples and a real breast cancer data set demonstrate that the proposed method has higher accuracy and better robustness for different proportions of censored data. This paper provides a good method for comparing the performance of clinical treatments by estimating the survival data of the patients, and thus offers some help to medical survival data analysis.
Approximate, computationally efficient online learning in Bayesian spiking neurons.
Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André
2014-03-01
Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.
Estimating learning outcomes from pre- and posttest student self-assessments: a longitudinal study.
Schiekirka, Sarah; Reinhardt, Deborah; Beißbarth, Tim; Anders, Sven; Pukrop, Tobias; Raupach, Tobias
2013-03-01
Learning outcome is an important measure for overall teaching quality and should be addressed by comprehensive evaluation tools. The authors evaluated the validity of a novel evaluation tool based on student self-assessments, which may help identify specific strengths and weaknesses of a particular course. In 2011, the authors asked 145 fourth-year students at Göttingen Medical School to self-assess their knowledge on 33 specific learning objectives in a pretest and posttest as part of a cardiorespiratory module. The authors compared performance gain calculated from self-assessments with performance gain derived from formative examinations that were closely matched to these 33 learning objectives. Eighty-three students (57.2%) completed the assessment. There was good agreement between performance gain derived from subjective data and performance gain derived from objective examinations (Pearson r=0.78; P<.0001) on the group level. The association between the two measures was much weaker when data were analyzed on the individual level. Further analysis determined a quality cutoff for performance gain derived from aggregated student self-assessments. When using this cutoff, the evaluation tool was highly sensitive in identifying specific learning objectives with favorable or suboptimal objective performance gains. The tool is easy to implement, takes initial performance levels into account, and does not require extensive pre-post testing. By providing valid estimates of actual performance gain obtained during a teaching module, it may assist medical teachers in identifying strengths and weaknesses of a particular course on the level of specific learning objectives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1979-10-01
The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order-statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets are analyzed, including geochemical data from the National Uranium Resource Evaluation Program.
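To illustrate the goodness-of-fit estimation idea in general terms, the sketch below estimates the threshold (location) parameter of a three-parameter lognormal by maximizing the Shapiro-Wilk W statistic of the shifted logarithms. This is a generic illustration of W-maximization, not the specific weighted-order-statistic estimators studied in the report; the search bounds are arbitrary assumptions, and scipy's Shapiro-Wilk test expects roughly 3 to 5000 observations.

    import numpy as np
    from scipy import optimize, stats

    def fit_threshold_by_w(x):
        """Estimate the threshold gamma of a three-parameter lognormal by maximizing
        the Shapiro-Wilk W of log(x - gamma); mu and sigma then follow from the
        mean and standard deviation of the shifted logs."""
        x = np.sort(np.asarray(x, float))
        span = np.ptp(x)

        def neg_w(gamma):
            if gamma >= x[0]:
                return np.inf                 # threshold must lie below the smallest observation
            w, _ = stats.shapiro(np.log(x - gamma))
            return -w

        res = optimize.minimize_scalar(neg_w,
                                       bounds=(x[0] - 10 * span, x[0] - 1e-9 * span),
                                       method="bounded")
        gamma = res.x
        logs = np.log(x - gamma)
        return gamma, logs.mean(), logs.std(ddof=1)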
Hill, J; Fudge, H; Harrington, R; Pickles, A; Rutter, M
1995-03-01
The Adult Personality Functioning Assessment (APFA) provides ratings of interpersonal and social role performance in six domains over substantial periods of time. Ratings based on subject and informant accounts using the APFA were compared. There was good agreement for estimates of levels of dysfunction, and moderate agreement for type of dysfunction. An anticipated under-reporting of difficulties by subjects was not found. The extent of personality dysfunction was predictive of whether a close informant was available; however, closeness of informant was not consistently associated with subject-informant agreement.
Fluoroalkylated Silicon-Containing Surfaces - Estimation of Solid Surface Energy
2010-10-20
surface tension liquids such as octane (γlv = 21.6 mN/m) and methanol (γlv = 22.7 mN/m), requires an appropriately chosen surface micro/nano-texture in...addition to a low solid surface energy (γsv). 1H,1H,2H,2H- Heptadecafluorodecyl polyhedral oligomeric silsesquioxane (fluorodecyl POSS) offers one of...27.5 mN/m), while Girifalco-Good analysis was performed using a set of polar and non-polar liquids with a wider range of liquid surface tension (15.5
Evaluation of the performance of microprocessor-based colorimeter
Randhawa, S. S.; Gupta, R. C.; Bhandari, A. K.; Malhotra, P. S.
1992-01-01
Colorimetric estimations have an important role in quantitative studies. An inexpensive and portable microprocessor-based colorimeter developed by the authors is described in this paper. The colorimeter uses a light emitting diode as the light source, a PIN photodiode as the detector and an 8085A microprocessor. Blood urea, glucose, total protein, albumin and bilirubin from patient blood samples were analysed with the instrument and results obtained were compared with assays of the same blood using a Spectronic 21. A good correlation was found between the results from the two instruments. PMID:18924952
Evaluation of the performance of microprocessor-based colorimeter.
Randhawa, S S; Gupta, R C; Bhandari, A K; Malhotra, P S
1992-01-01
Colorimetric estimations have an important role in quantitative studies. An inexpensive and portable microprocessor-based colorimeter developed by the authors is described in this paper. The colorimeter uses a light emitting diode as the light source, a PIN photodiode as the detector and an 8085A microprocessor. Blood urea, glucose, total protein, albumin and bilirubin from patient blood samples were analysed with the instrument and results obtained were compared with assays of the same blood using a Spectronic 21. A good correlation was found between the results from the two instruments.
Neumann, G; Gartzke, J; Faber, G
1978-01-01
The modified thin-layer chromatographic method for determining the phospholipids lecithin and sphingomyelin in amniotic fluid is useful for estimating fetal pulmonary maturity. The Clements foam test is a simple, rapid screening method for suspicious cases in at-risk pregnancies and is of great value as a bedside test, even when performed by the physician. Comparing the Clements test with thin-layer chromatographic L/S-ratio determination, we found good agreement in 81.8% of all cases.
UAV Control on the Basis of 3D Landmark Bearing-Only Observations.
Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry
2015-11-27
The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is the usage of the 3D RANSAC algorithm developed on the basis of the landmarks' position prediction with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurements approach permits obtaining unbiased UAV position estimation with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations.
Chen, Cheng-Te; Chen, Yu-Lan; Lin, Yu-Ching; Hsieh, Ching-Lin; Tzeng, Jeng-Yi
2018-01-01
Objective: The purpose of this study was to construct a computerized adaptive test (CAT) for measuring self-care performance (the CAT-SC) in children with developmental disabilities (DD) aged from 6 months to 12 years in a content-inclusive, precise, and efficient fashion. Methods: The study was divided into 3 phases: (1) item bank development, (2) item testing, and (3) a simulation study to determine the stopping rules for the administration of the CAT-SC. A total of 215 caregivers of children with DD were interviewed with the 73-item CAT-SC item bank. An item response theory model was adopted for examining the construct validity to estimate item parameters after investigation of the unidimensionality, equality of slope parameters, item fitness, and differential item functioning (DIF). In the last phase, the reliability and concurrent validity of the CAT-SC were evaluated. Results: The final CAT-SC item bank contained 56 items. The stopping rules suggested were (a) reliability coefficient greater than 0.9 or (b) 14 items administered. The results of simulation also showed that 85% of the estimated self-care performance scores would reach a reliability higher than 0.9 with a mean test length of 8.5 items, and the mean reliability for the rest was 0.86. Administering the CAT-SC could reduce the number of items administered by 75% to 84%. In addition, self-care performances estimated by the CAT-SC and the full item bank were very similar to each other (Pearson r = 0.98). Conclusion: The newly developed CAT-SC can efficiently measure self-care performance in children with DD whose performances are comparable to those of TD children aged from 6 months to 12 years as precisely as the whole item bank. The item bank of the CAT-SC has good reliability and a unidimensional self-care construct, and the CAT can estimate self-care performance with less than 25% of the items in the item bank. Therefore, the CAT-SC could be useful for measuring self-care performance in children with DD in clinical and research settings. PMID:29561879
Chen, Cheng-Te; Chen, Yu-Lan; Lin, Yu-Ching; Hsieh, Ching-Lin; Tzeng, Jeng-Yi; Chen, Kuan-Lin
2018-01-01
The purpose of this study was to construct a computerized adaptive test (CAT) for measuring self-care performance (the CAT-SC) in children with developmental disabilities (DD) aged from 6 months to 12 years in a content-inclusive, precise, and efficient fashion. The study was divided into 3 phases: (1) item bank development, (2) item testing, and (3) a simulation study to determine the stopping rules for the administration of the CAT-SC. A total of 215 caregivers of children with DD were interviewed with the 73-item CAT-SC item bank. An item response theory model was adopted for examining the construct validity to estimate item parameters after investigation of the unidimensionality, equality of slope parameters, item fitness, and differential item functioning (DIF). In the last phase, the reliability and concurrent validity of the CAT-SC were evaluated. The final CAT-SC item bank contained 56 items. The stopping rules suggested were (a) reliability coefficient greater than 0.9 or (b) 14 items administered. The results of simulation also showed that 85% of the estimated self-care performance scores would reach a reliability higher than 0.9 with a mean test length of 8.5 items, and the mean reliability for the rest was 0.86. Administering the CAT-SC could reduce the number of items administered by 75% to 84%. In addition, self-care performances estimated by the CAT-SC and the full item bank were very similar to each other (Pearson r = 0.98). The newly developed CAT-SC can efficiently measure self-care performance in children with DD whose performances are comparable to those of TD children aged from 6 months to 12 years as precisely as the whole item bank. The item bank of the CAT-SC has good reliability and a unidimensional self-care construct, and the CAT can estimate self-care performance with less than 25% of the items in the item bank. Therefore, the CAT-SC could be useful for measuring self-care performance in children with DD in clinical and research settings.
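A minimal sketch of a CAT administration loop consistent with the stopping rules reported above (reliability above 0.9 or 14 items) is shown below; it assumes a standardized trait so that reliability can be approximated as 1 - SE^2, and the item-selection, response and scoring routines are hypothetical placeholders rather than the CAT-SC implementation.

    def administer_cat(select_item, respond, estimate_theta, item_bank,
                       max_items=14, reliability_target=0.90):
        """Minimal CAT loop: stop when reliability (approximated as 1 - SE^2 for a
        standardized trait) exceeds the target or when max_items have been given."""
        administered, responses = [], []
        theta, se = 0.0, 1.0
        while len(administered) < max_items and (1.0 - se**2) < reliability_target:
            item = select_item(theta, item_bank, administered)   # e.g., maximum information
            responses.append(respond(item))
            administered.append(item)
            theta, se = estimate_theta(administered, responses)
        return theta, se, administered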
Borque, Paloma; Luke, Edward; Kollias, Pavlos
2016-05-27
Coincident profiling observations from Doppler lidars and radars are used to estimate the turbulence energy dissipation rate (ε) using three different data sources: (i) Doppler radar velocity (DRV), (ii) Doppler lidar velocity (DLV), and (iii) Doppler radar spectrum width (DRW) measurements. Likewise, the agreement between the derived ε estimates is examined at the cloud base height of stratiform warm clouds. Collocated ε estimates based on power spectra analysis of DRV and DLV measurements show good agreement (correlation coefficient of 0.86 and 0.78 for both cases analyzed here) during both drizzling and nondrizzling conditions. This suggests that unified (below and above cloud base) time-height estimates of ε in cloud-topped boundary layer conditions can be produced. This also suggests that eddy dissipation rate can be estimated throughout the cloud layer without the constraint that clouds need to be nonprecipitating. Eddy dissipation rate estimates based on DRW measurements compare well with the estimates based on Doppler velocity but their performance deteriorates as precipitation size particles are introduced in the radar volume and broaden the DRW values. And, based on this finding, a methodology to estimate the Doppler spectra broadening due to the spread of the drop size distribution is presented. Furthermore, the uncertainties in ε introduced by signal-to-noise conditions, the estimation of the horizontal wind, the selection of the averaging time window, and the presence of precipitation are discussed in detail.
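The usual route from Doppler velocity power spectra to ε relies on the inertial-subrange (Kolmogorov -5/3) form together with Taylor's frozen-turbulence hypothesis. The relation below is the standard textbook version (in LaTeX), with C_1 ≈ 0.5 the one-dimensional Kolmogorov constant and Ū the advecting horizontal wind; the exact formulation used in the paper may differ in detail.

    S_u(f) \;=\; C_1\,\varepsilon^{2/3}\left(\frac{\bar{U}}{2\pi}\right)^{2/3} f^{-5/3}
    \quad\Longrightarrow\quad
    \varepsilon \;=\; \frac{2\pi}{\bar{U}}\left[\frac{f^{5/3}\,S_u(f)}{C_1}\right]^{3/2}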
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borque, Paloma; Luke, Edward; Kollias, Pavlos
Coincident profiling observations from Doppler lidars and radars are used to estimate the turbulence energy dissipation rate (ε) using three different data sources: (i) Doppler radar velocity (DRV), (ii) Doppler lidar velocity (DLV), and (iii) Doppler radar spectrum width (DRW) measurements. Likewise, the agreement between the derived ε estimates is examined at the cloud base height of stratiform warm clouds. Collocated ε estimates based on power spectra analysis of DRV and DLV measurements show good agreement (correlation coefficient of 0.86 and 0.78 for both cases analyzed here) during both drizzling and nondrizzling conditions. This suggests that unified (below and above cloud base) time-height estimates of ε in cloud-topped boundary layer conditions can be produced. This also suggests that eddy dissipation rate can be estimated throughout the cloud layer without the constraint that clouds need to be nonprecipitating. Eddy dissipation rate estimates based on DRW measurements compare well with the estimates based on Doppler velocity but their performance deteriorates as precipitation size particles are introduced in the radar volume and broaden the DRW values. And, based on this finding, a methodology to estimate the Doppler spectra broadening due to the spread of the drop size distribution is presented. Furthermore, the uncertainties in ε introduced by signal-to-noise conditions, the estimation of the horizontal wind, the selection of the averaging time window, and the presence of precipitation are discussed in detail.
Noroozi, Javad; Paluch, Andrew S
2017-02-23
Molecular dynamics simulations were employed to both estimate the solubility of nonelectrolyte solids, such as acetanilide, acetaminophen, phenacetin, methylparaben, and lidocaine, in supercritical carbon dioxide and understand the underlying molecular-level driving forces. The solubility calculations involve the estimation of the solute's limiting activity coefficient, which may be computed using conventional staged free-energy calculations. For the case of lidocaine, wherein the infinite dilution approximation is not appropriate, we demonstrate how the activity coefficient at finite concentrations may be estimated without additional effort using the dilute solution approximation and how this may be used to further understand the solvation process. Combining with experimental pure-solid properties, namely, the normal melting point and enthalpy of fusion, solubilities were estimated. The results are in good quantitative agreement with available experimental data, suggesting that molecular simulations may be a powerful tool for understanding supercritical processes and the design of carbon dioxide-philic molecular systems. Structural analyses were performed to shed light on the microscopic details of the solvation of different functional groups by carbon dioxide and the observed solubility trends.
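The step from a simulated limiting activity coefficient and experimental melting properties to a solubility estimate typically uses the classical ideal-solubility relation (neglecting heat-capacity corrections). The form below (in LaTeX) is that standard approximation, with the infinite-dilution activity coefficient substituted when the dilute-solution assumption holds, as discussed above for lidocaine; it is not necessarily the exact expression used by the authors.

    \ln\!\left(x_{\mathrm{sol}}\,\gamma_{\mathrm{sol}}\right)
    \;=\; -\frac{\Delta H_{\mathrm{fus}}}{R}\left(\frac{1}{T}-\frac{1}{T_m}\right)
    \qquad\Longrightarrow\qquad
    x_{\mathrm{sol}} \;\approx\; \frac{1}{\gamma_{\mathrm{sol}}^{\infty}}
    \exp\!\left[-\frac{\Delta H_{\mathrm{fus}}}{R}\left(\frac{1}{T}-\frac{1}{T_m}\right)\right]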
Determining team cognition from delay analysis using cross recurrence plot.
Hajari, Nasim; Cheng, Irene; Bin Zheng; Basu, Anup
2016-08-01
Team cognition is an important factor in evaluating and determining team performance. Forming a team with good shared cognition is even more crucial for laparoscopic surgery applications. In this study, we analyzed the eye tracking data of two surgeons during a laparoscopic simulation operation, then performed Cross Recurrence Analysis (CRA) on the recorded data to study the delay behaviour of good performer and poor performer teams. Dual eye tracking data for twenty-two dyad teams were recorded during a laparoscopic task, and the teams were divided into good performer and poor performer teams based on task times. We then studied the delay between the two team members for good and poor performer teams. The results indicated that the good performer teams show a smaller delay compared to poor performer teams. This finding is consistent with gaze overlap analysis between team members and therefore provides good evidence of shared cognition between team members.
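A bare-bones cross recurrence construction and a diagonal-offset delay estimate look roughly like the sketch below. It treats the two gaze signals as one-dimensional, equal-length series and is only an illustration of the idea; actual CRA pipelines often embed the series and use richer recurrence-plot measures.

    import numpy as np

    def cross_recurrence(x, y, eps):
        """Cross recurrence matrix of two gaze series: CR[i, j] = 1 when the
        two observers' states are within eps of each other."""
        d = np.abs(np.asarray(x)[:, None] - np.asarray(y)[None, :])
        return (d <= eps).astype(int)

    def delay_from_crp(cr):
        """Estimate the leader/follower delay as the diagonal offset with the highest
        recurrence rate (positive lag: the second series lags the first)."""
        n = cr.shape[0]
        lags = list(range(-n + 1, n))
        rates = [np.mean(np.diagonal(cr, offset=k)) for k in lags]
        return lags[int(np.argmax(rates))]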
Weimer, David L; Vining, Aidan R; Thomas, Randall K
2009-02-01
The valuation of changes in consumption of addictive goods resulting from policy interventions presents a challenge for cost-benefit analysts. Consumer surplus losses from reduced consumption of addictive goods that are measured relative to market demand schedules overestimate the social cost of cessation interventions. This article seeks to show that consumer surplus losses measured using a non-addicted demand schedule provide a better assessment of social cost. Specifically, (1) it develops an addiction model that permits an estimate of the smoker's compensating variation for the elimination of addiction; (2) it employs a contingent valuation survey of current smokers to estimate their willingness-to-pay (WTP) for a treatment that would eliminate addiction; (3) it uses the estimate of WTP from the survey to calculate the fraction of consumer surplus that should be viewed as consumer value; and (4) it provides an estimate of this fraction. The exercise suggests that, as a tentative first and rough rule-of-thumb, only about 75% of the loss of the conventionally measured consumer surplus should be counted as social cost for policies that reduce the consumption of cigarettes. Additional research to estimate this important rule-of-thumb is desirable to address the various caveats relevant to this study. Copyright (c) 2008 John Wiley & Sons, Ltd.
Effect of discharge duct geometry on centrifugal fan performance and noise emission
NASA Astrophysics Data System (ADS)
Nelson, David A.; Butrymowicz, William; Thomas, Christopher
2005-09-01
Non-ideal inlet and discharge duct geometries can cause significant changes to both the aerodynamic performance (``fan curve'') and specific sound power emission of a fan. A proper understanding of actual installed performance, as well as a good estimate of the system backpressure curve, is critical to achieving flow and acoustic goals as well as other criteria such as power consumption, mass and volume. To this end a battery of ISO 10302 tests was performed on a blower assembly which supports the Advanced Animal Habitat, being developed by ORBITEC for deployment on the International Space Station. The blower assembly consists of (4) identical centrifugal fans that, amongst themselves and across two prototypes, incorporated several discharge geometries. The inlet geometries were identical in all cases. Thus by comparing the dimensionless pressure-flow and noise emission characteristics across the cases, significant insight into the nature and potential magnitude of these effects is gained.
Comparison Between Simulated and Experimentally Measured Performance of a Four Port Wave Rotor
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Wilson, Jack; Welch, Gerard E.
2007-01-01
Performance and operability testing has been completed on a laboratory-scale, four-port wave rotor, of the type suitable for use as a topping cycle on a gas turbine engine. Many design aspects, and performance estimates for the wave rotor were determined using a time-accurate, one-dimensional, computational fluid dynamics-based simulation code developed specifically for wave rotors. The code follows a single rotor passage as it moves past the various ports, which in this reference frame become boundary conditions. This paper compares wave rotor performance predicted with the code to that measured during laboratory testing. Both on and off-design operating conditions were examined. Overall, the match between code and rig was found to be quite good. At operating points where there were disparities, the assumption of larger than expected internal leakage rates successfully realigned code predictions and laboratory measurements. Possible mechanisms for such leakage rates are discussed.
Estimating nonrigid motion from inconsistent intensity with robust shape features
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Ruan, Dan, E-mail: druan@mednet.ucla.edu; Department of Radiation Oncology, University of California, Los Angeles, California 90095
2013-12-15
Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. Conclusions: The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.
Estimating nonrigid motion from inconsistent intensity with robust shape features.
Liu, Wenyang; Ruan, Dan
2013-12-01
To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.
Analysis of Wind Tunnel Oscillatory Data of the X-31A Aircraft
NASA Technical Reports Server (NTRS)
Smith, Mark S.
1999-01-01
Wind tunnel oscillatory tests in pitch, roll, and yaw were performed on a 19%-scale model of the X-31A aircraft. These tests were used to study the aerodynamic characteristics of the X-31A in response to harmonic oscillations at six frequencies. In-phase and out-of-phase components of the aerodynamic coefficients were obtained over a range of angles of attack from 0 to 90 deg. To account for the effect of frequency on the data, mathematical models with unsteady terms were formulated by use of two different indicial functions. Data from a reduced set of frequencies were used to estimate model parameters, including steady-state static and dynamic stability derivatives. Both models showed good prediction capability and the ability to accurately fit the measured data. Estimated static stability derivatives compared well with those obtained from static wind tunnel tests. The roll and yaw rate derivative estimates were compared with rotary-balance wind tunnel data and theoretical predictions. The estimates and theoretical predictions were in agreement at small angles of attack. The rotary-balance data showed, in general, acceptable agreement with the steady-state derivative estimates.
Vivas, M; Silveira, S F; Viana, A P; Amaral, A T; Cardoso, D L; Pereira, M G
2014-07-02
Diallel crossing methods provide information regarding the performance of genitors between themselves and their hybrid combinations. However, with a large number of parents, the number of hybrid combinations that can be obtained and evaluated become limited. One option regarding the number of parents involved is the adoption of circulant diallels. However, information is lacking regarding diallel analysis using mixed models. This study aimed to evaluate the efficacy of the method of linear mixed models to estimate, for variable resistance to foliar fungal diseases, components of general and specific combining ability in a circulant table with different s values. Subsequently, 50 diallels were simulated for each s value, and the correlations and estimates of the combining abilities of the different diallel combinations were analyzed. The circulant diallel method using mixed modeling was effective in the classification of genitors regarding their combining abilities relative to the complete diallels. The numbers of crosses in which each genitor(s) will compose the circulant diallel and the estimated heritability affect the combining ability estimates. With three crosses per parent, it is possible to obtain good concordance (correlation above 0.8) between the combining ability estimates.
Spectral estimation for characterization of acoustic aberration.
Varslot, Trond; Angelsen, Bjørn; Waag, Robert C
2004-07-01
Spectral estimation based on acoustic backscatter from a motionless stochastic medium is described for characterization of aberration in ultrasonic imaging. The underlying assumptions for the estimation are: The correlation length of the medium is short compared to the length of the transmitted acoustic pulse, an isoplanatic region of sufficient size exists around the focal point, and the backscatter can be modeled as an ergodic stochastic process. The motivation for this work is ultrasonic imaging with aberration correction. Measurements were performed using a two-dimensional array system with 80 x 80 transducer elements and an element pitch of 0.6 mm. The f number for the measurements was 1.2 and the center frequency was 3.0 MHz with a 53% bandwidth. Relative phase of aberration was extracted from estimated cross spectra using a robust least-mean-square-error method based on an orthogonal expansion of the phase differences of neighboring wave forms as a function of frequency. Estimates of cross-spectrum phase from measurements of random scattering through a tissue-mimicking aberrator have confidence bands approximately +/- 5 degrees wide. Both phase and magnitude are in good agreement with a reference characterization obtained from a point scatterer.
Requirements Flowdown for Prognostics and Health Management
NASA Technical Reports Server (NTRS)
Goebel, Kai; Saxena, Abhinav; Roychoudhury, Indranil; Celaya, Jose R.; Saha, Bhaskar; Saha, Sankalita
2012-01-01
Prognostics and Health Management (PHM) principles have considerable promise to change the game of lifecycle cost of engineering systems at high safety levels by providing a reliable estimate of future system states. This estimate is a key for planning and decision making in an operational setting. While technology solutions have made considerable advances, the tie-in into the systems engineering process is lagging behind, which delays fielding of PHM-enabled systems. The derivation of specifications from high level requirements for algorithm performance to ensure quality predictions is not well developed. From an engineering perspective some key parameters driving the requirements for prognostics performance include: (1) maximum allowable Probability of Failure (PoF) of the prognostic system to bound the risk of losing an asset, (2) tolerable limits on proactive maintenance to minimize missed opportunity of asset usage, (3) lead time to specify the amount of advanced warning needed for actionable decisions, and (4) required confidence to specify when prognosis is sufficiently good to be used. This paper takes a systems engineering view towards the requirements specification process and presents a method for the flowdown process. A case study based on an electric Unmanned Aerial Vehicle (e-UAV) scenario demonstrates how top level requirements for performance, cost, and safety flow down to the health management level and specify quantitative requirements for prognostic algorithm performance.
The “Dry-Run” Analysis: A Method for Evaluating Risk Scores for Confounding Control
Wyss, Richard; Hansen, Ben B.; Ellis, Alan R.; Gagne, Joshua J.; Desai, Rishi J.; Glynn, Robert J.; Stürmer, Til
2017-01-01
Abstract A propensity score (PS) model's ability to control confounding can be assessed by evaluating covariate balance across exposure groups after PS adjustment. The optimal strategy for evaluating a disease risk score (DRS) model's ability to control confounding is less clear. DRS models cannot be evaluated through balance checks within the full population, and they are usually assessed through prediction diagnostics and goodness-of-fit tests. A proposed alternative is the “dry-run” analysis, which divides the unexposed population into “pseudo-exposed” and “pseudo-unexposed” groups so that differences on observed covariates resemble differences between the actual exposed and unexposed populations. With no exposure effect separating the pseudo-exposed and pseudo-unexposed groups, a DRS model is evaluated by its ability to retrieve an unconfounded null estimate after adjustment in this pseudo-population. We used simulations and an empirical example to compare traditional DRS performance metrics with the dry-run validation. In simulations, the dry run often improved assessment of confounding control, compared with the C statistic and goodness-of-fit tests. In the empirical example, PS and DRS matching gave similar results and showed good performance in terms of covariate balance (PS matching) and controlling confounding in the dry-run analysis (DRS matching). The dry-run analysis may prove useful in evaluating confounding control through DRS models. PMID:28338910
Detrending moving average algorithm for multifractals
NASA Astrophysics Data System (ADS)
Gu, Gao-Feng; Zhou, Wei-Xing
2010-07-01
The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, which contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. It is found that the backward MFDMA algorithm also outperforms the multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to analyzing the time series of the Shanghai Stock Exchange Composite Index and its multifractal nature is confirmed.
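As a side note for readers unfamiliar with DMA, the sketch below implements the backward (θ=0) detrending moving average fluctuation function for a one-dimensional series and reads off the scaling exponent from the log-log slope. It is a minimal monofractal illustration, not the authors' MFDMA code; the window sizes and white-noise test signal are arbitrary choices.

```python
import numpy as np

def backward_dma_fluctuation(x, n):
    """Root-mean-square fluctuation F(n) of the backward (theta = 0) DMA."""
    y = np.cumsum(x - np.mean(x))                       # profile of the series
    csum = np.cumsum(np.insert(y, 0, 0.0))
    ma = (csum[n:] - csum[:-n]) / n                     # moving average aligned with y[n-1:]
    eps = y[n - 1:] - ma                                # detrended residual
    return np.sqrt(np.mean(eps ** 2))

def dma_scaling_exponent(x, sizes):
    """Estimate h from the slope of log F(n) versus log n (F ~ n**h)."""
    F = [backward_dma_fluctuation(x, n) for n in sizes]
    h, _ = np.polyfit(np.log(sizes), np.log(F), 1)
    return h

# white noise should give an exponent close to 0.5
rng = np.random.default_rng(0)
print(dma_scaling_exponent(rng.standard_normal(10_000), [8, 16, 32, 64, 128, 256]))
```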
Respiratory rate estimation from the built-in cameras of smartphones and tablets.
Nam, Yunyoung; Lee, Jinseok; Chon, Ki H
2014-04-01
This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimation of respiratory rates, we systematically investigated the optimal signal quality of these 4 devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions for all four devices. It was found that the green color band provided the best signal quality for all 4 devices and that the left half VGA pixel region was found to be the best choice only for iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions were found to provide better or equally good signal quality than the larger pixel regions. Using the green signal and the optimal pixel regions derived from the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three different computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions for the optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates that were in the normal range (12-24 breaths/min). Both CWT and VFCDM methods provided reasonably good estimates for breathing rates that were higher than 26 breaths/min but their accuracy degraded concomitantly with increased respiratory rates. Overall, the VFCDM method provided the best results for accuracy (smaller median error), consistency (smaller interquartile range of the median value), and computational efficiency (less than 0.5 s on 1 min of data using a MATLAB implementation) to extract breathing rates that varied from 12 to 36 breaths/min. The AR method provided the least accurate respiratory rate estimation among the three methods. This work illustrates that both heart rates and normal breathing rates can be accurately derived from a video signal obtained from smartphones, an MP3 player and tablets with or without a flashlight.
Estimating the R-curve from residual strength data
NASA Technical Reports Server (NTRS)
Orange, T. W.
1985-01-01
A method is presented for estimating the crack-extension resistance curve (R-curve) from residual-strength (maximum load against original crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information can be used to estimate the failure loads of more complicated structures of the same material and thickness. The fundamentals of the R-curve concept are reviewed first. Then the analytical basis for the estimation method is presented. The estimation method has been verified in two ways. Data from the literature (involving several materials and different types of specimens) are used to show that the estimated R-curve is in good agreement with the measured R-curve. A recent predictive blind round-robin program offers a more crucial test. When the actual failure loads are disclosed, the predictions are found to be in good agreement.
Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis
NASA Astrophysics Data System (ADS)
Das, Samiran
2018-04-01
The use of the three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited due to the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness-of-fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramer von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered and its performance is assessed against other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression equation form to show the dependence on shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
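A minimal sketch of the Monte Carlo route to critical values when parameters are estimated from the sample: for brevity it uses a normal distribution fitted by moments and the Kolmogorov-Smirnov statistic as stand-ins for the GNO distribution, L-moments, and the AD/MAD statistics of the study.

```python
import numpy as np
from scipy import stats

def ks_stat_with_estimation(sample):
    """KS statistic against a normal law whose parameters are re-estimated from
    the sample itself (the situation that requires simulated critical values)."""
    mu, sigma = sample.mean(), sample.std(ddof=1)
    return stats.kstest(sample, "norm", args=(mu, sigma)).statistic

def mc_critical_value(n, alpha=0.05, n_sim=5000, seed=0):
    """Monte Carlo approximation of the (1 - alpha) critical value for sample size n."""
    rng = np.random.default_rng(seed)
    sims = [ks_stat_with_estimation(rng.standard_normal(n)) for _ in range(n_sim)]
    return np.quantile(sims, 1.0 - alpha)

# critical values shrink when parameters are estimated, so tables for fully
# specified distributions would be too conservative
print(mc_critical_value(50))
```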
NASA Astrophysics Data System (ADS)
Castedo, Ricardo; de la Vega-Panizo, Rogelio; Fernández-Hernández, Marta; Paredes, Carlos
2015-02-01
A key requirement for effective coastal zone management is good knowledge of historical rates of change and the ability to predict future shoreline evolution, especially for rapidly eroding areas. Historical shoreline recession analysis was used for the prediction of future cliff shoreline positions along a section of 9 km between Bridlington and Hornsea, on the northern area of the Holderness Coast, UK. The analysis was based on historical maps and aerial photographs dating from 1852 to 2011 using the Digital Shoreline Analysis System (DSAS) 4.3, an extension of ESRI's ArcInfo 10.x. The prediction of future shorelines was performed for the next 40 years using a variety of techniques, ranging from extrapolation of historical data and geometric approaches like historical trend analysis to a process-response numerical model that incorporates physically-based equations and geotechnical stability analysis. With climate change and sea-level rise implying that historical rates of change may not be a reliable guide for the future, enhanced visualization of the evolving coastline has the potential to improve awareness of these changing conditions. Following the IPCC 2013 report, two sea-level rise rates, 2 mm/yr and 6 mm/yr, have been used to estimate future shoreline conditions. This study illustrated that good predictive models, once their limitations are estimated or at least defined, are available for use by managers, planners, engineers, scientists and the public to make better decisions regarding coastal management, development, and erosion-control strategies.
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
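The reliability-physics idea of the HEP as the probability that the crew's performance time exceeds the physically available (phenomenological) time can be sketched with a direct Monte Carlo draw; the lognormal shapes and parameters below are illustrative placeholders, not the distributions fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# hypothetical distributions, in minutes: phenomenological time (time available
# before core damage) and performance time (time the operators need to act)
t_phen = rng.lognormal(mean=np.log(40.0), sigma=0.3, size=n)
t_perf = rng.lognormal(mean=np.log(25.0), sigma=0.5, size=n)

# human error probability: the crew needs more time than is physically available
hep = np.mean(t_perf > t_phen)
print(f"estimated HEP = {hep:.4f}")
```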
Modeling of video compression effects on target acquisition performance
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Preece, Bradley; Espinola, Richard L.
2009-05-01
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
Self-perceived health in older Europeans: Does the choice of survey matter?
Croezen, Simone; Burdorf, Alex
2016-01-01
Abstract Background: Cross-national comparisons of health in European countries provide crucial information to monitor health and disease within and between countries and to inform policy and research priorities. However, variations in estimates might occur when information from cross-national European surveys with different characteristics are used. We compared the prevalence of very good or good self-perceived health across 10 European countries according to three European surveys and investigated which survey characteristics contributed to differences in prevalence estimates. Methods: We used aggregate data from 2004 to 2005 of respondents aged 55–64 years from the European Union Statistics on Income and Living Conditions (EU-SILC), the Survey of Health, Ageing and Retirement in Europe (SHARE) and the European Social Survey (ESS). Across the surveys, self-perceived health was assessed by the same question with response options ranging from very good to very bad. Results: Despite a good correlation between the surveys (intraclass correlation coefficient: 0.77), significant differences were found in prevalence estimates of very good or good self-perceived health. The survey response, sample size and survey mode contributed statistically significantly to the differences between the surveys. Multilevel linear regression analyses, adjusted for survey characteristics, showed a higher prevalence for SHARE (+6.96, 95% CIs: 3.14 to 10.8) and a lower prevalence (−3.12; 95% CIs: −7.11 to 0.86) for ESS, with EU-SILC as the reference survey. Conclusion: Three important health surveys in Europe showed substantial differences for presence of very good or good self-perceived health. These differences limit the usefulness for direct comparisons across studies in health policies for Europe. PMID:26989125
Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods
Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.
2011-01-01
Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
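The residual water-balance principle mentioned above amounts to one line of arithmetic once P, Q and the storage change are expressed over the basin area; the numbers in this sketch are illustrative, not taken from any particular basin.

```python
# annual basin water balance (all terms in mm over the basin area)
precipitation  = 820.0   # P, from gauges or remote-sensing products
discharge      = 290.0   # Q, streamflow at the outlet divided by basin area
storage_change = 0.0     # dS, assumed negligible on an annual time scale

et_residual = precipitation - discharge - storage_change
print(f"basin-scale ET ~ {et_residual:.0f} mm/yr")
```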
Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays. These data often contain an unknown number of integer ambiguities in the observed group delays. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implement the robust L1-norm with an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup where the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies.
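To see why an L1-norm adjustment copes better with occasional integer-ambiguity jumps than the usual L2-norm, the toy fit below contaminates a straight-line signal with a few gross offsets; the simple iteratively reweighted least-squares scheme used for the L1 solution is a generic stand-in, not the estimator implemented in c5++.

```python
import numpy as np

def fit_l2(A, y):
    """Ordinary least-squares (L2-norm) solution."""
    return np.linalg.lstsq(A, y, rcond=None)[0]

def fit_l1(A, y, iters=50, eps=1e-6):
    """Approximate L1-norm solution via iteratively reweighted least squares."""
    x = fit_l2(A, y)
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - A @ x), eps)    # down-weight large residuals
        Aw = A * w[:, None]
        x = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)[0]
    return x

# straight-line data with a few gross jumps standing in for unresolved ambiguities
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 60)
y = 2.0 + 3.0 * t + 0.01 * rng.standard_normal(60)
y[::15] += 30.0                        # four "ambiguity" offsets
A = np.column_stack([np.ones_like(t), t])
print("L2 estimate:", fit_l2(A, y))    # pulled away from (2, 3) by the jumps
print("L1 estimate:", fit_l1(A, y))    # stays close to (2, 3)
```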
Sun, Kangfeng; Ji, Fenzhu; Yan, Xiaoyu; Jiang, Kai; Yang, Shichun
2018-01-01
As NOx emissions legislation for Diesel engines is becoming more stringent than ever before, aftertreatment systems have been widely adopted in many countries. Specifically, to reduce NOx emissions, the selective catalytic reduction (SCR) system has become one of the most promising techniques for Diesel-engine vehicle applications. In the SCR system, the input ammonia concentration and the ammonia coverage ratio are regarded as essential states in the control-oriented model. Currently, placing an ammonia sensor upstream of the SCR can is a good way to obtain the input ammonia concentration, but a physical sensor increases the SCR system cost, and the ammonia coverage ratio cannot be measured directly by a physical sensor. Aiming to tackle this problem, an observer based on a particle filter (PF) is investigated to estimate the input ammonia concentration and the ammonia coverage ratio. Simulation results obtained with the experimentally validated full-vehicle simulator cX-Emission show that the performance of the PF-based observer is outstanding and the estimation error is very small.
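A minimal bootstrap particle filter for a scalar state, sketched below, conveys the flavour of such an observer. The SCR ammonia-coverage dynamics are not reproduced here, so the transition and measurement models f and h, the noise levels, and the toy signal are all placeholders.

```python
import numpy as np

def particle_filter(y, f, h, x0, n_particles=500, q=0.02, r=0.05, seed=0):
    """Minimal bootstrap particle filter for a scalar state.

    y : measurement sequence; f : state transition x_k = f(x_{k-1}) + w;
    h : measurement model y_k = h(x_k) + v; q, r : process/measurement noise std.
    """
    rng = np.random.default_rng(seed)
    particles = x0 + q * rng.standard_normal(n_particles)
    estimates = []
    for yk in y:
        particles = f(particles) + q * rng.standard_normal(n_particles)  # predict
        w = np.exp(-0.5 * ((yk - h(particles)) / r) ** 2)                 # weight
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)              # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# toy usage: track a slowly drifting "coverage ratio" seen through a noisy sensor
truth = 0.5 + 0.3 * np.sin(np.linspace(0, 3, 100))
obs = truth + 0.05 * np.random.default_rng(1).standard_normal(100)
est = particle_filter(obs, f=lambda x: x, h=lambda x: x, x0=0.5)
print("mean absolute error:", np.abs(est - truth).mean())
```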
Association between basic numerical abilities and mathematics achievement.
Sasanguie, Delphine; De Smedt, Bert; Defever, Emmy; Reynvoet, Bert
2012-06-01
Various measures have been used to investigate number processing in children, including a number comparison or a number line estimation task. The present study aimed to examine whether and to which extent these different measures of number representation are related to performance on a curriculum-based standardized mathematics achievement test in kindergarteners, first, second, and sixth graders. Children completed a number comparison task and a number line estimation task with a balanced set of symbolic (Arabic digits) and non-symbolic (dot patterns) stimuli. Associations with mathematics achievement were observed for the symbolic measures. Although the association with number line estimation was consistent over grades, the association with number comparison was much stronger in kindergarten compared to the other grades. The current data indicate that a good knowledge of the numerical meaning of Arabic digits is important for children's mathematical development and that particularly the access to the numerical meaning of symbolic digits rather than the representation of number per se is important. © 2011 The British Psychological Society.
NASA Technical Reports Server (NTRS)
Veselovskii, I.; Whiteman, D. N.; Korenskiy, M.; Kolgotin, A.; Dubovik, O.; Perez-Ramirez, D.; Suvorina, A.
2013-01-01
The results of the application of the linear estimation technique to multiwavelength Raman lidar measurements performed during the summer of 2011 in Greenbelt, MD, USA, are presented. We demonstrate that multiwavelength lidars are capable not only of providing vertical profiles of particle properties but also of revealing the spatio-temporal evolution of aerosol features. The nighttime 3β + 1α lidar measurements on 21 and 22 July were inverted to spatio-temporal distributions of particle microphysical parameters, such as volume, number density, effective radius and the complex refractive index. The particle volume and number density show strong variation during the night, while the effective radius remains approximately constant. The real part of the refractive index demonstrates a slight decreasing tendency in a region of enhanced extinction coefficient. The linear estimation retrievals are stable and provide time series of particle parameters as a function of height at 4 min resolution. AERONET observations are compared with multiwavelength lidar retrievals showing good agreement.
Optical Estimation of Depth and Current in an Ebb Tidal Delta Environment
NASA Astrophysics Data System (ADS)
Holman, R. A.; Stanley, J.
2012-12-01
A key limitation to our ability to make nearshore environmental predictions is the difficulty of obtaining up-to-date bathymetry measurements at a reasonable cost and frequency. Due to the high cost and complex logistics of in-situ methods, research into remote sensing approaches has been steady and has finally yielded fairly robust methods like the cBathy algorithm for optical Argus data that show good performance on simple barred beach profiles and near immunity to noise and signal problems. In May, 2012, data were collected in a more complex ebb tidal delta environment during the RIVET field experiment at New River Inlet, NC. The presence of strong reversing tidal currents led to significant errors in cBathy depths that were phase-locked to the tide. In this paper we will test methods for the robust estimation of both depths and vector currents in a tidal delta domain. In contrast to previous Fourier methods, wavenumber estimation in cBathy can be done on small enough scales to resolve interesting nearshore features.
Skylab S-193 radar altimeter experiment analyses and results
NASA Technical Reports Server (NTRS)
Brown, G. S. (Editor)
1977-01-01
The design of optimum filtering procedures for geoid recovery is discussed. Statistical error bounds are obtained for pointing angle estimates using average waveform data. A correlation of tracking loop bandwidth with magnitude of pointing error is established. The impact of ocean currents and precipitation on the received power is shown to be a measurable effect. For large sea state conditions, measurements of sigma-0 indicate a distinct saturation level of about 8 dB. Near-nadir (less than 15 deg) values of sigma-0 are also presented and compared with theoretical models. Examination of Great Salt Lake Desert scattering data leads to rejection of a previously hypothesized specularly reflecting surface. Pulse-to-pulse correlation results are in agreement with quasi-monochromatic optics theoretical predictions and indicate a means for estimating direction of pointing error. Pulse compression techniques for and results of estimating significant waveheight from waveform data are presented and are also shown to be in good agreement with surface truth data. A number of results pertaining to system performance are presented.
Human behavioral complexity peaks at age 25
Brugger, Peter
2017-01-01
Random Item Generation tasks (RIG) are commonly used to assess high cognitive abilities such as inhibition or sustained attention. They also draw upon our approximate sense of complexity. A detrimental effect of aging on pseudo-random productions has been demonstrated for some tasks, but little is as yet known about the developmental curve of cognitive complexity over the lifespan. We investigate the complexity trajectory across the lifespan of human responses to five common RIG tasks, using a large sample (n = 3429). Our main finding is that the developmental curve of the estimated algorithmic complexity of responses is similar to what may be expected of a measure of higher cognitive abilities, with a performance peak around 25 and a decline starting around 60, suggesting that RIG tasks yield good estimates of such cognitive abilities. Our study illustrates that very short strings, of as few as 10 items, are sufficient to have their complexity reliably estimated and to allow the documentation of an age-dependent decline in the approximate sense of complexity. PMID:28406953
A Muscle Fibre Conduction Velocity Tracking ASIC for Local Fatigue Monitoring.
Koutsos, Ermis; Cretu, Vlad; Georgiou, Pantelis
2016-12-01
Electromyography analysis can provide information about a muscle's fatigue state by estimating Muscle Fibre Conduction Velocity (MFCV), a measure of the travelling speed of Motor Unit Action Potentials (MUAPs) in muscle tissue. MFCV better represents the physical manifestations of muscle fatigue, compared to the progressive compression of the myoelectric Power Spectral Density, hence it is more suitable for a muscle fatigue tracking system. This paper presents a novel algorithm for the estimation of MFCV using single threshold bit-stream conversion and a dedicated application-specific integrated circuit (ASIC) for its implementation, suitable for a compact, wearable and easy to use muscle fatigue monitor. The presented ASIC is implemented in a commercially available AMS 0.35 μm CMOS technology and utilizes a bit-stream cross-correlator that estimates the conduction velocity of the myoelectric signal in real time. A test group of 20 subjects was used to evaluate the performance of the developed ASIC, achieving good accuracy with an error of only 3.2% compared to MATLAB.
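The core of an MFCV estimate, the delay between two electrode channels obtained from a cross-correlation peak divided into the inter-electrode distance, can be sketched in a few lines. This floating-point version only illustrates the principle behind the bit-stream cross-correlator; the signals and inter-electrode distance are made up.

```python
import numpy as np

def conduction_velocity(sig_a, sig_b, fs, electrode_distance_m):
    """Conduction velocity from the cross-correlation lag between two EMG channels
    recorded along the fibre direction (sig_b is assumed to trail sig_a)."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)        # samples by which b trails a
    delay = lag / fs
    return electrode_distance_m / delay if delay != 0 else np.inf

# toy check: a pulse delayed by 2 ms over a 10 mm electrode spacing -> 5 m/s
fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
pulse = np.exp(-((t - 0.030) / 0.002) ** 2)
delayed = np.exp(-((t - 0.032) / 0.002) ** 2)
print(conduction_velocity(pulse, delayed, fs, 0.01))   # ~5.0 m/s
```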
Estimation of available global solar radiation using sunshine duration over South Korea
NASA Astrophysics Data System (ADS)
Das, Amrita; Park, Jin-ki; Park, Jong-hwa
2015-11-01
Beyond the design of solar energy systems, accurate insolation data are also a key input for many biological and atmospheric studies. However, solar radiation stations are not widely available due to financial and technical limitations; this insufficient number limits the spatial resolution whenever an attempt is made to construct a solar radiation map. Several models exist in the literature for estimating incoming solar radiation from the sunshine fraction. Seventeen such models, of which 6 are linear and 11 non-linear, have been chosen for studying and estimating solar radiation on a horizontal surface over South Korea. The better performance of a non-linear model signifies that the relationship between sunshine duration and clearness index does not follow a straight line. With such a model, solar radiation over 79 stations measuring sunshine duration is computed and used as input for spatial interpolation. Finally, monthly solar radiation maps are constructed using the Ordinary Kriging method. The cross-validation results show good agreement between observed and predicted data.
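Sunshine-based models of this family relate the clearness index to the relative sunshine duration. The sketch below fits the classic linear Angström-Prescott form and a quadratic variant to illustrative monthly values (not the Korean station data) and applies both to a new sunshine fraction; the extraterrestrial radiation value is likewise an example.

```python
import numpy as np

# clearness index Kt = H/H0 versus relative sunshine duration s = n/N (illustrative)
s  = np.array([0.35, 0.42, 0.50, 0.55, 0.61, 0.48, 0.40, 0.58, 0.66, 0.52])
kt = np.array([0.40, 0.45, 0.51, 0.55, 0.60, 0.50, 0.44, 0.57, 0.63, 0.52])

# linear Angstrom-Prescott form: Kt = a + b*s
b_lin, a_lin = np.polyfit(s, kt, 1)

# one common non-linear (quadratic) extension: Kt = a + b*s + c*s^2
c_q, b_q, a_q = np.polyfit(s, kt, 2)

H0 = 25.0          # extraterrestrial radiation, MJ m-2 day-1 (example value)
s_new = 0.45
print("linear   :", H0 * (a_lin + b_lin * s_new))
print("quadratic:", H0 * (a_q + b_q * s_new + c_q * s_new ** 2))
```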
Downward Atmospheric Longwave Radiation in the City of Sao Paulo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbaro, Eduardo W.; Oliveira, Amauri P.; Soares, Jacyra
2009-03-11
This work objectively evaluates the consistency and quality of a 9-year dataset of 5-minute average values of downward longwave atmospheric (LW) emission, shortwave radiation, temperature and relative humidity. All these parameters were observed simultaneously and continuously from 1997 to 2006 at the IAG micrometeorological platform, located at the top of the IAG-USP building. The pyrgeometer dome emission effect was removed using a neural network technique, reducing the downward longwave atmospheric emission error to 3.5%. The comparison between the monthly average values of LW emission observed in Sao Paulo and satellite estimates from the SRB-NASA project indicated very good agreement. Furthermore, this work investigates the performance of 10 empirical expressions to estimate the LW emission at the surface. The comparison between the models indicates that Brunt's expression presents the best results, with the smallest MBE and RMSE and the largest index of agreement d; Brunt's is therefore the most suitable model for estimating LW emission under clear-sky conditions in the city of Sao Paulo.
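For reference, Brunt's clear-sky formula expresses the atmospheric emissivity as a + b·sqrt(e), with e the near-surface vapour pressure. The sketch below uses the classic textbook coefficients (with e in hPa) rather than any locally fitted Sao Paulo values, which are not reproduced in the abstract.

```python
import numpy as np

SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m-2 K-4

def brunt_lw_down(t_air_k, vapour_pressure_hpa, a=0.52, b=0.065):
    """Clear-sky downward longwave radiation with Brunt's emissivity formula.

    a and b are the classic Brunt coefficients; a locally calibrated fit would
    replace them with site-specific values.
    """
    emissivity = a + b * np.sqrt(vapour_pressure_hpa)
    return emissivity * SIGMA * t_air_k ** 4

print(brunt_lw_down(293.15, 15.0))   # roughly 320 W m-2 for a mild, moist day
```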
Madec, Simon; Baret, Fred; de Solan, Benoît; Thomas, Samuel; Dutartre, Dan; Jezequel, Stéphane; Hemmerlé, Matthieu; Colombeau, Gallian; Comar, Alexis
2017-01-01
The capacity of LiDAR and Unmanned Aerial Vehicles (UAVs) to provide plant height estimates as a high-throughput plant phenotyping trait was explored. An experiment over wheat genotypes was conducted under well-watered and water-stress modalities. Frequent LiDAR measurements were performed along the growth cycle using a phénomobile unmanned ground vehicle. A UAV equipped with a high-resolution RGB camera flew over the experiment several times to retrieve the digital surface model from structure-from-motion techniques. Both techniques provide a dense 3D point cloud from which the plant height can be estimated. Plant height was first defined as the z-value below which 99.5% of the points of the dense cloud lie. This provides good consistency with manual measurements of plant height (RMSE = 3.5 cm) while minimizing the variability along each microplot. Results show that LiDAR and structure-from-motion plant height values are always consistent. However, a slight under-estimation is observed for structure-from-motion techniques, related to the coarser spatial resolution of UAV imagery and the limited penetration capacity of structure from motion as compared to LiDAR. Very high heritability values (H² > 0.90) were found for both techniques when lodging was not present. The dynamics of plant height shows that it carries pertinent information regarding the period and magnitude of the plant stress. Further, the date when the maximum plant height is reached was found to be very heritable (H² > 0.88) and a good proxy of the flowering stage. Finally, the capacity of plant height as a proxy for total above-ground biomass and yield is discussed.
Different methodologies to quantify uncertainties of air emissions.
Romano, Daniela; Bernetti, Antonella; De Lauretis, Riccardo
2004-10-01
Characterization of the uncertainty associated with air emission estimates is of critical importance, especially in the compilation of air emission inventories. In this paper, two different theories are discussed and applied to evaluate air emissions uncertainty. In addition to numerical analysis, which is also recommended in the framework of the United Nations Convention on Climate Change guidelines with reference to Monte Carlo and Bootstrap simulation models, fuzzy analysis is also proposed. The methodologies are discussed and applied to an Italian example case study. Air concentration values are measured from two electric power plants: a coal plant, consisting of two boilers, and a fuel oil plant, of four boilers; the pollutants considered are sulphur dioxide (SO2), nitrogen oxides (NOx), carbon monoxide (CO) and particulate matter (PM). Monte Carlo, Bootstrap and fuzzy methods have been applied to estimate the uncertainty of these data. Regarding Monte Carlo, the most accurate results apply to Gaussian distributions; a good approximation is also observed for other distributions with almost regular features, either positively or negatively asymmetrical. Bootstrap, on the other hand, gives a good uncertainty estimation for irregular and asymmetrical distributions. Fuzzy analysis follows a different logic: data are represented as vague and indefinite, in contrast to the traditional conception of neat, certainly classified and exact data. Beyond randomness (stochastic variability) alone, fuzzy theory deals with imprecision (vagueness) of data. The fuzzy variance of the data set was calculated; the results cannot be directly compared with empirical data, but the overall performance of the theory is analysed. Fuzzy theory may appear more suitable for qualitative reasoning than for a quantitative estimation of uncertainty, but it suits well when little information and few measurements are available and when distributions of data are not properly known.
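A percentile bootstrap of a mean concentration, as sketched below with made-up SO2 stack measurements, shows the resampling mechanics behind one of the uncertainty estimates discussed above; it is a generic illustration, not the plant data or the exact procedure of the paper.

```python
import numpy as np

def bootstrap_ci(samples, stat=np.mean, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of measured data."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(samples, size=len(samples), replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# illustrative SO2 stack concentrations (mg/Nm3), invented for the example
so2 = np.array([412., 397., 430., 441., 388., 405., 419., 426., 398., 433.])
print("mean:", so2.mean(), " 95% CI:", bootstrap_ci(so2))
```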
NASA Astrophysics Data System (ADS)
Rawat, Kishan Singh; Sehgal, Vinay Kumar; Pradhan, Sanatan; Ray, Shibendu S.
2018-03-01
We have estimated soil moisture (SM) by using the circular horizontal polarization backscattering coefficient (σ°RH), the difference of the circular vertical and horizontal backscattering coefficients (σ°RV − σ°RH) from FRS-1 data of the Radar Imaging Satellite (RISAT-1), and surface roughness in terms of RMS height. We examined the performance of FRS-1 in retrieving SM under a wheat crop at the tillering stage. Results revealed that it is possible to develop a good semi-empirical model (SEM) to estimate SM of the upper soil layer using RISAT-1 SAR data rather than using an existing empirical model based on only a single parameter, i.e., σ°. Near-surface SM measurements were related to σ°RH and σ°RV − σ°RH derived from a 5.35 GHz (C-band) image of RISAT-1 and to RMS height. The roughness component expressed as RMS height showed a good positive correlation with σ°RV − σ°RH (R² = 0.65). By considering all the major influencing factors (σ°RH, σ°RV − σ°RH, and RMS height), an SEM was developed in which the predicted volumetric SM depends on σ°RH, σ°RV − σ°RH, and RMS height. This SEM showed an R² of 0.87 and an adjusted R² of 0.85, a multiple R of 0.94, and a standard error of 0.05 at the 95% confidence level. Validation of the SM derived from the semi-empirical model against observed measurements showed root mean square error (RMSE) = 0.06, relative RMSE (R-RMSE) = 0.18, mean absolute error (MAE) = 0.04, normalized RMSE (NRMSE) = 0.17, Nash-Sutcliffe efficiency (NSE) = 0.91 (≈1), index of agreement (d) = 1, coefficient of determination (R²) = 0.87, mean bias error (MBE) = 0.04, standard error of estimate (SEE) = 0.10, volume error (VE) = 0.15, and variance of the distribution of differences (Sd²) = 0.004. The developed SEM showed better performance in estimating SM than the Topp empirical model, which is based only on σ°. Using the developed SEM, top-soil SM can be estimated with a low mean absolute percent error (MAPE) = 1.39 and can be used for operational applications.
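A semi-empirical model of this form is essentially a multiple linear regression of soil moisture on the two backscatter terms and the roughness term. The sketch below fits such a regression to made-up plot values (not the RISAT-1 data) and reports RMSE and R² for the fit.

```python
import numpy as np

# illustrative plot-level values (dB and cm); invented for the example
sigma_rh  = np.array([-14.2, -13.1, -12.5, -15.0, -11.8, -13.6, -12.0, -14.6])
sigma_dif = np.array([  3.1,   2.8,   2.4,   3.5,   2.1,   3.0,   2.2,   3.3])  # RV - RH
rms_h     = np.array([  1.1,   1.4,   1.8,   0.9,   2.0,   1.2,   1.9,   1.0])
sm_obs    = np.array([ 0.18,  0.22,  0.27,  0.15,  0.31,  0.20,  0.29,  0.16])  # m3/m3

# SM = b0 + b1*sigma_rh + b2*(sigma_rv - sigma_rh) + b3*rms_height
X = np.column_stack([np.ones_like(sigma_rh), sigma_rh, sigma_dif, rms_h])
coef, *_ = np.linalg.lstsq(X, sm_obs, rcond=None)
sm_pred = X @ coef

rmse = np.sqrt(np.mean((sm_pred - sm_obs) ** 2))
r2 = 1 - np.sum((sm_obs - sm_pred) ** 2) / np.sum((sm_obs - sm_obs.mean()) ** 2)
print("coefficients:", coef, " RMSE:", rmse, " R2:", r2)
```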
Development of ultracapacitor modules for 42-V automotive electrical systems
NASA Astrophysics Data System (ADS)
Jung, Do Yang; Kim, Young Ho; Kim, Sun Wook; Lee, Suck-Hyun
Two types of ultracapacitor modules have been developed for use as energy-storage devices for 42-V systems in automobiles. The modules show high performance and good reliability in terms of discharge and recharge capability, long-term endurance, and high energy and power. During a 42-V system simulation test of 6-kW power boosting/regenerative braking, the modules demonstrate very good performance. In high-power applications such as 42-V and hybrid vehicle systems, ultracapacitors have many merits compared with batteries, especially with respect to specific power at high rate, thermal stability, charge-discharge efficiency, and cycle-life. Ultracapacitors are also very safe, reliable and environmentally friendly. The cost of ultracapacitors is still high compared with batteries because of the low production scale, but is decreasing very rapidly. It is estimated that the cost of ultracapacitors will decrease to US$ 300 per 42-V module in the near future. Also, the maintenance cost of the ultracapacitor is nearly zero because of its high cycle-life. Therefore, the combined cost of the capacitor and maintenance will be lower than that of batteries in the near future. Overall, comparing performance, price and other parameters of ultracapacitors with batteries, ultracapacitors are the most likely candidate for energy-storage in 42-V systems.
Linear quadratic Gaussian and feedforward controllers for the DSS-13 antenna
NASA Technical Reports Server (NTRS)
Gawronski, W. K.; Racho, C. S.; Mellstrom, J. A.
1994-01-01
The controller development and the tracking performance evaluation for the DSS-13 antenna are presented. A trajectory preprocessor, linear quadratic Gaussian (LQG) controller, feedforward controller, and their combination were designed, built, analyzed, and tested. The antenna exhibits nonlinear behavior when the input to the antenna and/or the derivative of this input exceeds the imposed limits; for slewing and acquisition commands, these limits are typically violated. A trajectory preprocessor was designed to ensure that the antenna behaves linearly, thus preventing nonlinear limit cycling. The estimator model for the LQG controller was identified from data obtained in the field test. Based on an LQG balanced representation, a reduced-order LQG controller was obtained. The feedforward controller and the combination of the LQG and feedforward controllers were also investigated. The performance of the controllers was evaluated with the tracking errors (due to following a trajectory) and the disturbance errors (due to the disturbances acting on the antenna). The LQG controller has good disturbance rejection properties and satisfactory tracking errors. The feedforward controller has small tracking errors but poor disturbance rejection properties. The combined LQG and feedforward controller exhibits small tracking errors as well as good disturbance rejection properties. However, the cost for this performance is the complexity of the controller.
Ma, Lin; Xu, Yubin
2015-01-01
Green WLAN is a promising technique for accessing future indoor Internet services. It is designed not only for high-speed data communication purposes but also for energy efficiency. The basic strategy of green WLAN is that all the access points are not always powered on, but rather work on-demand. Though powering off idle access points does not affect data communication, a serious asymmetric matching problem will arise in a WLAN indoor positioning system due to the fact that the received signal strength (RSS) readings from the available access points are different in their offline and online phases. This asymmetry problem will no doubt invalidate the fingerprint algorithm used to estimate the mobile device location. Therefore, in this paper we propose a green WLAN indoor positioning system, which can recover RSS readings and achieve good localization performance based on singular value thresholding (SVT) theory. By solving the nuclear norm minimization problem, SVT recovers not only the radio map, but also online RSS readings from a sparse matrix by sensing only a fraction of the RSS readings. We have implemented the method in our lab and evaluated its performance. The experimental results indicate the proposed system could recover the RSS readings and achieve good localization performance. PMID:25587977
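The singular value thresholding iteration behind this kind of RSS recovery can be written compactly: the singular values of a running matrix are soft-thresholded and the misfit on observed entries is fed back. The sketch below is a plain textbook SVT on a toy low-rank "radio map", with threshold and step size chosen by rule of thumb rather than the paper's tuning.

```python
import numpy as np

def svt(observed, mask, tau=None, step=1.2, iters=300):
    """Recover a low-rank matrix from the entries where mask == 1 (others are 0)."""
    if tau is None:
        tau = 5.0 * np.sqrt(observed.size)          # rule-of-thumb threshold
    Y = np.zeros_like(observed)
    X = np.zeros_like(observed)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt     # shrink singular values
        Y += step * mask * (observed - X)           # feed back the observed misfit
    return X

# toy example: a rank-2 "radio map" with 60% of the readings actually sensed
rng = np.random.default_rng(0)
truth = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
mask = (rng.random(truth.shape) < 0.6).astype(float)
rec = svt(truth * mask, mask)
print("relative error:", np.linalg.norm(rec - truth) / np.linalg.norm(truth))
```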
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…
Beef quality parameters estimation using ultrasound and color images
2015-01-01
Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic quality parameters estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A set of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms show good results to calculate the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data
Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao
2012-01-01
Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic variance of the estimates can be reduced significantly while keeping the asymptotic variance same as the unguided estimator. We observe the performance of our method via a simulation study and demonstrate our method by applying to a real data set on mergers and acquisitions. PMID:23645976
van der Meer, Aize Franciscus; Touw, Daniël J; Marcus, Marco A E; Neef, Cornelis; Proost, Johannes H
2012-10-01
Observational data sets can be used for population pharmacokinetic (PK) modeling. However, these data sets are generally less precisely recorded than experimental data sets. This article aims to investigate the influence of erroneous records on population PK modeling and individual maximum a posteriori Bayesian (MAPB) estimation. A total of 1123 patient records of neonates who were administered vancomycin were used for population PK modeling by iterative 2-stage Bayesian (ITSB) analysis. Cut-off values for weighted residuals were tested for exclusion of records from the analysis. A simulation study was performed to assess the influence of erroneous records on population modeling and individual MAPB estimation. The cut-off values for weighted residuals were also tested in the simulation study. Errors in registration had limited influence on the outcomes of population PK modeling but can have detrimental effects on individual MAPB estimation. A population PK model created from a data set with many registration errors has little influence on subsequent MAPB estimates for precisely recorded data. A weighted residual value of 2 for concentration measurements has good discriminative power for identification of erroneous records. ITSB analysis and its individual estimates are hardly affected by most registration errors. Large registration errors can be detected by weighted residuals of concentration.
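The cut-off rule reported above reduces to a simple screen on weighted residuals. In the sketch below the concentrations, predictions and standard deviation are invented for illustration; only the record whose weighted residual exceeds 2 is flagged as a candidate registration error.

```python
import numpy as np

def flag_suspect_records(observed, predicted, sd, threshold=2.0):
    """Flag records whose weighted residual |obs - pred| / sd exceeds the cut-off."""
    weighted_residual = np.abs(observed - predicted) / sd
    return weighted_residual > threshold

# illustrative vancomycin concentrations (mg/L); not the study data
obs  = np.array([ 8.4, 12.1,  6.3, 25.0, 10.2])
pred = np.array([ 9.0, 11.5,  7.1, 10.8,  9.8])
print(flag_suspect_records(obs, pred, sd=2.5))   # only the 25.0 record is flagged
```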
NASA Astrophysics Data System (ADS)
Garcia Leal, Julio A.; Lopez-Baeza, Ernesto; Khodayar, Samiro; Estrela, Teodoro; Fidalgo, Arancha; Gabaldo, Onofre; Kuligowski, Robert; Herrera, Eddy
Surface runoff is defined as the amount of water that originates from precipitation, does not infiltrate due to soil saturation and therefore circulates over the surface. A good estimation of runoff is useful for the design of drainage systems, structures for flood control and soil utilisation. For runoff estimation there exist different methods such as (i) the rational method, (ii) the isochrone method, (iii) the triangular hydrograph, (iv) the non-dimensional SCS hydrograph, (v) the Temez hydrograph, (vi) the kinematic wave model, represented by the dynamic and kinematic equations for a uniform precipitation regime, and (vii) the SCS-CN (Soil Conservation Service Curve Number) model. This work presents a way of estimating precipitation runoff through the SCS-CN model, using SMOS (Soil Moisture and Ocean Salinity) mission soil moisture observations and rain-gauge measurements, as well as satellite precipitation estimations. The area of application is the Jucar River Basin Authority area where one of the objectives is to develop the SCS-CN model in a spatial way. The results were compared to simulations performed with the 7-km COSMO-CLM (COnsortium for Small-scale MOdelling, COSMO model in CLimate Mode) model. The use of SMOS soil moisture as input to the COSMO-CLM model will certainly improve model simulations.
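For orientation, the SCS-CN relation computes direct runoff Q from storm rainfall P via the retention parameter S = 25400/CN − 254 (in mm) and an initial abstraction of 0.2S. The sketch below shows how a wetter antecedent condition, such as one inferred from SMOS soil moisture, translates into a higher curve number and more runoff; the CN values and rainfall depth are illustrative.

```python
def scs_cn_runoff(p_mm, cn):
    """Direct runoff depth (mm) from rainfall depth p_mm with curve number cn."""
    s = 25400.0 / cn - 254.0        # potential maximum retention, mm
    ia = 0.2 * s                    # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# the same 60 mm storm on a drier versus a wetter antecedent condition
print(scs_cn_runoff(60.0, cn=70))   # ~10 mm of runoff
print(scs_cn_runoff(60.0, cn=85))   # ~27 mm of runoff
```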
A Spatially-Explicit Technique for Evaluation of Alternative ...
Ecosystems contribute to maintaining human well-being directly through provision of goods and indirectly through provision of services that support clean water, clean air, flood protection and atmospheric stability. Transparently accounting for biophysical attributes from which humans derive benefit is essential to support dialog among the public, resource managers, decision makers, and scientists. We analyzed the potential ecosystem goods and services production from alternative future land use scenarios in the US Tampa Bay region. Ecosystem goods and service metrics included carbon sequestration, nitrogen removal, air pollutant removal, and stormwater retention. Each scenario was compared to a 2006 baseline land use. Estimated production of denitrification services changed by 28% and carbon sequestration by 20% between 2006 and the “business as usual” scenario. An alternative scenario focused on “natural resource protection” resulted in an estimated 9% loss in air pollution removal. Stormwater retention was estimated to change 18% from 2006 to 2060 projections. Cost-effective areas for conservation, almost 1588 ha beyond current conservation lands, were identified by comparing ecosystem goods and services production to assessed land values. Our ecosystem goods and services approach provides a simple and quantitative way to examine a more complete set of potential outcomes from land use decisions. This study demonstrates an approach for spatially explicit evaluation of alternative land use scenarios in terms of ecosystem goods and services production.
Development of neuraminidase detection using gold nanoparticles boron-doped diamond electrodes.
Wahyuni, Wulan T; Ivandini, Tribidasari A; Saepudin, Endang; Einaga, Yasuaki
2016-03-15
Gold nanoparticle-modified boron-doped diamond (AuNPs-BDD) electrodes, which were prepared with a self-assembly deposition of AuNPs at amine-terminated boron-doped diamond, were examined for voltammetric detection of neuraminidase (NA). The detection method was based on the difference between the electrochemical responses of zanamivir at the gold surface before and after the reaction with NA in phosphate buffer solution (PBS, pH 5.5). A linear calibration curve for zanamivir in 0.1 M PBS in the absence of NA was achieved in the concentration range of 1 × 10⁻⁶ to 1 × 10⁻⁵ M (R² = 0.99) with an estimated limit of detection (LOD) of 2.29 × 10⁻⁶ M. Furthermore, using its reaction with 1.00 × 10⁻⁵ M zanamivir, a linear calibration curve for NA could be obtained in the concentration range of 0-12 mU (R² = 0.99) with an estimated LOD of 0.12 mU. High reproducibility was shown with a relative standard deviation (RSD) of 1.14% (n = 30). These performances could be maintained when the detection was performed in a mucin matrix. Comparison with gold-modified BDD (Au-BDD) electrodes suggested that the good performance of the detection method is due to the stability of the gold particle positions at the BDD surface. Copyright © 2016 Elsevier Inc. All rights reserved.
Modeling Speed-Accuracy Tradeoff in Adaptive System for Practicing Estimation
ERIC Educational Resources Information Center
Nižnan, Juraj
2015-01-01
Estimation is useful in situations where an exact answer is not as important as a quick answer that is good enough. A web-based adaptive system for practicing estimates is currently being developed. We propose a simple model for estimating student's latent skill of estimation. This model combines a continuous measure of correctness and response…
Uniform and nonuniform V-shaped planar arrays for 2-D direction-of-arrival estimation
NASA Astrophysics Data System (ADS)
Filik, T.; Tuncer, T. E.
2009-10-01
In this paper, isotropic and directional uniform and nonuniform V-shaped arrays are considered for simultaneous azimuth and elevation direction-of-arrival (DOA) angle estimation. It is shown that uniform isotropic V-shaped arrays (UI V arrays) have no angle coupling between the azimuth and elevation DOA. The design of the UI V arrays is investigated, and closed-form expressions are presented for the parameters of the UI V arrays and nonuniform V arrays. These expressions allow one to find the isotropic V angle for different array types. The DOA performance of the UI V array is compared with the uniform circular array (UCA) for correlated signals and in the case of mutual coupling between array elements. The modeling error for the sensor positions is also investigated. It is shown that the V array and the circular array have similar robustness to position errors, while the performance of the UI V array is better than that of the UCA for correlated source signals and when there is mutual coupling. Nonuniform V-shaped isotropic arrays, which allow good DOA performance with a limited number of sensors, are also investigated. Furthermore, a new design method for directional V-shaped arrays is proposed. This method is based on the Cramer-Rao bound for joint estimation, where the angle coupling effect between the azimuth and elevation DOA angles is taken into account. The design method finds an optimum angle between the linear subarrays of the V array. The proposed method can be used to obtain directional arrays with significantly better DOA performance.
A parallel calibration utility for WRF-Hydro on high performance computers
NASA Astrophysics Data System (ADS)
Wang, J.; Wang, C.; Kotamarthi, V. R.
2017-12-01
A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model that simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. Having a parameter calibration tool specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool using the parallel version of the model-independent parameter estimation and uncertainty analysis tool, PEST, and enabled it to run on HPC with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.
Multispectral processing based on groups of resolution elements
NASA Technical Reports Server (NTRS)
Richardson, W.; Gleason, J. M.
1975-01-01
Several nine-point rules are defined and compared with previously studied rules. One of the rules performed well in boundary areas, but with reduced efficiency in field interiors; another combined best performance on field interiors with good sensitivity to boundary detail. The basic threshold gradient and some modifications were investigated as a means of boundary point detection. The hypothesis testing methods of closed-boundary formation were also tested and evaluated. An analysis of the boundary detection problem was initiated, employing statistical signal detection and parameter estimation techniques to analyze various formulations of the problem. These formulations permit the atmospheric and sensor system effects on the data to be thoroughly analyzed. Various boundary features and necessary assumptions can also be investigated in this manner.
Hard X-ray multilayer zone plate with 25-nm outermost zone width
NASA Astrophysics Data System (ADS)
Takano, H.; Sumida, K.; Hirotomo, H.; Koyama, T.; Ichimaru, S.; Ohchi, T.; Takenaka, H.; Kagoshima, Y.
2017-06-01
We have improved the performance of a previously reported multilayer zone plate by reducing its outermost zone width, using the same multilayer materials (MoSi2 and Si) and fabrication technique. The focusing performance was evaluated at the BL24XU of SPring-8 using 20-keV X-rays. The line spread function (LSF) in the focal plane was measured using a dark-field knife-edge scan method, and the point spread function was obtained from the LSF through a tomographic reconstruction principle. The spatial resolution was estimated to be 30 nm, which is in relatively good agreement with the calculated diffraction-limited value of 25 nm, while the measured diffraction efficiency of the +1st order was 24%.
Evaluation of satellite rainfall estimates for drought and flood monitoring in Mozambique
Tote, Carolien; Patricio, Domingos; Boogaard, Hendrik; van der Wijngaart, Raymond; Tarnavsky, Elena; Funk, Christopher C.
2015-01-01
Satellite-derived rainfall products are useful for drought and flood early warning and overcome the problem of sparse, unevenly distributed and erratic rain gauge observations, provided their accuracy is well known. Mozambique is highly vulnerable to extreme weather events such as major droughts and floods and thus an understanding of the strengths and weaknesses of different rainfall products is valuable. Three dekadal (10-day) gridded satellite rainfall products (TAMSAT African Rainfall Climatology And Time-series (TARCAT) v2.0, Famine Early Warning System NETwork (FEWS NET) Rainfall Estimate (RFE) v2.0, and Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS)) are compared to independent gauge data (2001–2012). This is done using pairwise comparison statistics to evaluate the performance in estimating rainfall amounts and categorical statistics to assess rain-detection capabilities. The analysis was performed for different rainfall categories, over the seasonal cycle and for regions dominated by different weather systems. Overall, satellite products overestimate low and underestimate high dekadal rainfall values. The RFE and CHIRPS products perform comparably well, generally outperforming TARCAT on the majority of statistical measures of skill. TARCAT best detects the relative frequency of rainfall events, while RFE underestimates and CHIRPS overestimates the rainfall event frequency. Differences in product performance disappear with higher rainfall and all products achieve better results during the wet season. During the cyclone season, CHIRPS shows the best results, while RFE outperforms the other products for lower dekadal rainfall. Products blending thermal infrared and passive microwave imagery perform better than infrared-only products, particularly when meteorological patterns are more complex, such as over the coastal, central and south regions of Mozambique, where precipitation is influenced by frontal systems.
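The kind of pairwise and categorical skill measures referred to above can be sketched as follows; the sample values and the 1 mm/dekad rain/no-rain threshold are illustrative assumptions, not the study's data or exact metric set.

```python
# Sketch of pairwise and categorical comparison of satellite estimates against
# gauge totals. The 1 mm/dekad rain/no-rain threshold and the sample values
# are assumptions for illustration only.
import numpy as np

def pairwise_stats(sat, gauge):
    bias = np.mean(sat - gauge)
    rmse = np.sqrt(np.mean((sat - gauge) ** 2))
    corr = np.corrcoef(sat, gauge)[0, 1]
    return bias, rmse, corr

def categorical_stats(sat, gauge, threshold=1.0):
    hits = np.sum((sat >= threshold) & (gauge >= threshold))
    misses = np.sum((sat < threshold) & (gauge >= threshold))
    false_alarms = np.sum((sat >= threshold) & (gauge < threshold))
    pod = hits / (hits + misses)                          # probability of detection
    far = false_alarms / (hits + false_alarms)            # false alarm ratio
    freq_bias = (hits + false_alarms) / (hits + misses)   # frequency bias
    return pod, far, freq_bias

sat = np.array([0.0, 5.2, 18.0, 2.1, 40.5, 0.8])
gauge = np.array([0.0, 7.0, 15.5, 0.0, 55.0, 1.2])
print(pairwise_stats(sat, gauge), categorical_stats(sat, gauge))
```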
Reproducibility of the six-minute walking test in chronic heart failure patients.
Pinna, G D; Opasich, C; Mazza, A; Tangenti, A; Maestri, R; Sanarico, M
2000-11-30
The six-minute walking test (WT) is used in trials and clinical practice as an easy tool to evaluate the functional capacity of chronic heart failure (CHF) patients. As WT measurements are highly variable both between and within individuals, this study aims at assessing the contribution of the different sources of variation and estimating the reproducibility of the test. A statistical model describing WT measurements as a function of fixed and random effects is proposed and its parameters estimated. We considered 202 stable CHF patients who performed two baseline WTs separated by a 30-minute rest; 49 of them repeated the two tests 3 months later (follow-up control). They had no changes in therapy or major clinical events. Another 31 subjects performed two baseline tests separated by 24 hours. Collected data were analysed using a mixed model methodology. There was no significant difference between measurements taken 30 minutes and 24 hours apart (p = 0.99). A trend effect of 17 (1.4) m (mean (SE)) was consistently found between duplicate tests (p < 0.001). REML estimates of variance components were: 5189 (674) for subject differences in the error-free value; 1280 (304) for subject differences in spontaneous clinical evolution between baseline and follow-up control, and 266 (23) for the within-subject error. Hence, the standard error of measurement was 16.3 m, namely 4 per cent of the average WT performance (403 m) in this sample. The intraclass correlation coefficient was 0.96. We conclude that WT measurements are characterized by good intrasubject reproducibility and excellent reliability. When follow-up studies ≥ 3 months are performed, unpredictable changes in individual walking performance due to spontaneous clinical evolution are to be expected. Their clinical significance, however, is not known. Copyright 2000 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Tudor, Magdalena
In 2012, IATA estimated the environmental impact of air transport at about 2% of global carbon dioxide emissions, a consequence of the rapidly growing global demand for the movement of people and goods, and a factor that has been explicitly taken into account in the development of the aviation industry. Historic achievements of scientific and technical progress in commercial aviation have contributed to this estimate, and research continues to make progress toward reducing greenhouse gas emissions. Advances in commercial aircraft and engine design technology have aimed to improve flight performance, and these improvements have enhanced global flight planning for these types of aircraft. Almost all of these advances rely on generated performance data as reference sources, most of which are classified as "confidential" by the aircraft manufacturers. Very few aero-propulsive models for the climb regime exist in the literature, and none of them was designed without access to an engine database and/or to performance data in climb and cruise regimes with direct applicability to flight optimization. In this thesis, aero-propulsive modeling methodologies are proposed for the climb and cruise regimes, using system identification and validation methods, through which airplane performance can be computed and stored in a compact and easily accessible format. Acquiring performance data in this format makes it possible to optimize the flight profiles used by on-board Flight Management Systems. The aero-propulsive models developed here were investigated on two commercial-class aircraft, and both offered very good accuracy. One advantage is that they can be adapted to any other aircraft of the same class, even without access to the corresponding engine flight data. In addition, these models could save airlines a considerable amount of money, given that the number of flight tests could be drastically reduced. Lastly, the laboratory of applied research in active control, avionics and aeroservoelasticity (LARCASE) team gains direct access to these aircraft performance data to build experience with novel flight-profile optimization algorithms.
NASA Technical Reports Server (NTRS)
Stoll, F.; Koenig, D. G.
1983-01-01
Data obtained through very high angles of attack from a large-scale, subsonic wind-tunnel test of a close-coupled canard-delta-wing fighter model are analyzed. The canard delays wing leading-edge vortex breakdown, even for angles of attack at which the canard is completely stalled. A vortex-lattice method was applied which gave good predictions of lift and pitching moment up to an angle of attack of about 20 deg, where vortex-breakdown effects on performance become significant. Pitch-control inputs generally retain full effectiveness up to the angle of attack of maximum lift, beyond which effectiveness drops off rapidly. A high-angle-of-attack prediction method gives good estimates of lift and drag for the completely stalled aircraft. Roll asymmetry observed at zero sideslip is apparently caused by an asymmetry in the model support structure.
Analysis of two-equation turbulence models for recirculating flows
NASA Technical Reports Server (NTRS)
Thangam, S.
1991-01-01
The two-equation kappa-epsilon model is used to analyze turbulent separated flow past a backward-facing step. It is shown that if the model constants are modified to be consistent with the accepted energy decay rate for isotropic turbulence, the dominant features of the flow field, namely the size of the separation bubble and the streamwise component of the mean velocity, can be accurately predicted. In addition, except in the vicinity of the step, very good predictions for the turbulent shear stress, the wall pressure, and the wall shear stress are obtained. The model is also shown to provide good predictions for the turbulence intensity in the region downstream of the reattachment point. Estimated long-time growth rates for the turbulent kinetic energy and dissipation rate of homogeneous shear flow are utilized to develop an optimal set of constants for the two-equation kappa-epsilon model. The physical implications of the model performance are also discussed.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Lang, S.; Simpson, J.; Olson, W. S.; Johnson, D.; Ferrier, B.; Kummerow, C.; Adler, R.
1999-01-01
Latent heating profiles associated with three Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE) active convective episodes (December 10-17, 1992; December 19-27, 1992; and February 9-13, 1993) are examined using the Goddard Cumulus Ensemble (GCE) Model and retrieved using the Goddard Convective and Stratiform Heating (CSH) algorithm. The following sources of rainfall information are input into the CSH algorithm: the Special Sensor Microwave Imager (SSM/I), radar, and the GCE model. Diagnostically determined latent heating profiles calculated using 6-hourly soundings are used for validation. The GCE model simulated rainfall and latent heating profiles are in excellent agreement with those estimated by soundings. In addition, the typical convective and stratiform heating structures (or shapes) are well captured by the GCE model. Radar-measured rainfall is smaller than that estimated by both the GCE model and SSM/I in all three COARE IFA periods. SSM/I-derived rainfall is more than the GCE model simulated for the December 19-27 and February 9-13 periods, but is in excellent agreement with the GCE model for the December 10-17 period. The GCE model estimated stratiform amount is about 50% for December 19-27, 42% for December 11-17 and 56% for the February 9-13 case. These results are consistent with large-scale analyses. Accurate estimates of stratiform amount are needed for good latent heating retrieval. A higher (lower) percentage of stratiform rain can imply a maximum heating rate at a higher (lower) altitude. The GCE model always simulates more stratiform rain (10 to 20%) than the radar for all three convective episodes. The SSM/I-derived stratiform amount is about 37% for December 19-27, 48% for December 11-17 and 41% for the February 9-13 case. The temporal variability of CSH algorithm retrieved latent heating profiles using either GCE model simulated or radar estimated rainfall and stratiform amount is in good agreement with that diagnostically determined for all three periods. However, less rainfall and a smaller stratiform percentage estimated by radar resulted in a weaker (underestimated) latent heating profile and a lower maximum latent heating level compared to those determined diagnostically. Rainfall information from SSM/I cannot be used to retrieve individual convective events due to poor temporal sampling. Nevertheless, this study suggests that a good rainfall retrieval from SSM/I for a convective event always leads to a good latent heating retrieval. Sensitivity testing has been performed, and the results indicate that the SSM/I-derived time-averaged stratiform amount may be underestimated for December 19-27. Time-averaged heating profiles derived from SSM/I, however, are in reasonable agreement with those derived from soundings for the December 10-17 convective period. The heating retrievals may be more accurate for longer time scales provided there is no bias in the sampling.
DeepQA: improving the estimation of single protein model quality with deep belief networks.
Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin
2016-12-05
Protein quality assessment (QA), which is useful for ranking and selecting protein models, has long been viewed as one of the major challenges for protein tertiary structure prediction. In particular, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method, DeepQA, based on a deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physio-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that the deep belief network has better performance than Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves state-of-the-art performance on the CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single-model quality assessment and protein structure prediction. The source code, executable, documentation and training/test datasets of DeepQA for Linux are freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/.
Four applications of permutation methods to testing a single-mediator model.
Taylor, Aaron B; MacKinnon, David P
2012-09-01
Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
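A minimal sketch of one permutation test of ab is given below; permuting the mediator is only one possible scheme for generating the null distribution and does not reproduce the article's specific variants, and the simulated data are purely illustrative.

```python
# A minimal sketch of a permutation test for the mediated effect a*b in a
# single-mediator model. Permuting the mediator here is one simple way to
# generate the null distribution; the article's permutation variants are not
# reproduced, and the data are synthetic.
import numpy as np

def ab_estimate(x, m, y):
    # a path: regress M on X; b path: regress Y on M controlling for X.
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

def permutation_p_value(x, m, y, n_perm=5000, seed=0):
    rng = np.random.default_rng(seed)
    observed = ab_estimate(x, m, y)
    null = np.array([ab_estimate(x, rng.permutation(m), y) for _ in range(n_perm)])
    return observed, np.mean(np.abs(null) >= abs(observed))

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
m = 0.4 * x + rng.standard_normal(200)
y = 0.3 * m + rng.standard_normal(200)
print(permutation_p_value(x, m, y))
```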
Hurst, David; Sharpe, Sharon; Yeager, Valerie A
We assessed whether administrative preparedness processes that were intended to expedite the acquisition of goods and services during a public health emergency affect estimated procurement and contracting cycle times. We obtained data from 2014-2015 applications to the Hospital Preparedness Program and Public Health Emergency Preparedness (HPP-PHEP) cooperative agreements. We compared the estimated procurement and contracting cycle times of 61 HPP-PHEP awardees that did and did not have certain administrative processes in place. Certain processes, such as statutes allowing for procuring and contracting on the open market, had an effect on reducing the estimated cycle times for obtaining goods and services. Other processes, such as cooperative purchasing agreements, also had an effect on estimated procurement time. For example, awardees with statutes that permitted them to obtain goods and services in the open market had an average procurement cycle time of 6 days; those without such statutes had a cycle time of 17 days (P = .04). PHEP awardees should consider adopting these or similar processes in an effort to reduce cycle times.
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
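The contrast between the simple expansion estimator and the area-based ratio estimator can be sketched as follows; the population, counts and sample size are synthetic placeholders rather than the pronghorn survey data.

```python
# Sketch of the simple expansion and ratio estimators of a population total
# under simple random sampling without replacement, with sampling-unit area as
# the auxiliary variable. Counts and areas are synthetic, not the pronghorn data.
import numpy as np

rng = np.random.default_rng(2)
N = 120                                   # sampling units in the population
areas = rng.uniform(5, 15, N)             # auxiliary variable (unit area)
counts = rng.poisson(areas * 0.8)         # real pronghorn counts are more clumped
total_area = areas.sum()

n = 40                                    # roughly a 33% sampling intensity
sample = rng.choice(N, size=n, replace=False)
y, a = counts[sample], areas[sample]

simple_estimate = N * y.mean()                   # expansion estimator
ratio_estimate = total_area * y.sum() / a.sum()  # ratio estimator using area
print(simple_estimate, ratio_estimate, counts.sum())
```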
A hybrid localization technique for patient tracking.
Rodionov, Denis; Kolev, George; Bushminkin, Kirill
2013-01-01
Nowadays numerous technologies are employed for tracking patients and assets in hospitals or nursing homes. Each of them has advantages and drawbacks. For example, WiFi localization has relatively good accuracy but cannot be used in case of power outage or in areas with poor WiFi coverage. Magnetometer positioning or cellular networks do not have such problems, but they are not as accurate as localization with WiFi. This paper describes a technique that simultaneously employs different localization technologies to enhance the stability and average accuracy of localization. The proposed algorithm is based on a fingerprinting method paired with data fusion and prediction algorithms for estimating the object location. The core idea of the algorithm is technology fusion using error estimation methods. To test the accuracy and performance of the algorithm, a simulation environment was implemented. Significant accuracy improvement was shown in practical scenarios.
NASA Astrophysics Data System (ADS)
Olurotimi, E. O.; Sokoya, O.; Ojo, J. S.; Owolawi, P. A.
2018-03-01
Rain height is one of the significant parameters for prediction of rain attenuation on Earth-space telecommunication links, especially those operating at frequencies above 10 GHz. This study examines the three-parameter Dagum distribution of rain height over Durban, South Africa. Five years of data were used to study the monthly, seasonal, and annual variations using parameters estimated by maximum likelihood. The performance of the distribution was assessed using statistical goodness of fit. The three-parameter Dagum distribution proves appropriate for modeling rain height over Durban, with a root mean square error of 0.26. The shape and scale parameters of the distribution also show wide variation. The rain height exceeded for 0.01% of the time indicates a high probability of rain attenuation at higher frequencies.
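A maximum-likelihood fit of the three-parameter Dagum distribution, written here with CDF F(x) = (1 + (x/b)^(-a))^(-p), can be sketched as below; the synthetic sample and starting values are assumptions, and the parameterization may differ from the one used in the study.

```python
# Sketch of a maximum-likelihood fit of the three-parameter Dagum distribution,
# F(x) = (1 + (x/b)^(-a))^(-p), to a positive sample. The synthetic data and
# starting values are illustrative; the Durban rain-height data are not used.
import numpy as np
from scipy.optimize import minimize

def dagum_neg_loglik(log_params, x):
    a, b, p = np.exp(log_params)          # enforce positivity via log-parameters
    z = x / b
    logf = (np.log(a) + np.log(p) - a * np.log(z) - np.log(x)
            - (p + 1) * np.log1p(z ** (-a)))
    return -np.sum(logf)

rng = np.random.default_rng(3)
sample = rng.gamma(shape=9.0, scale=0.5, size=500)   # stand-in "rain height" data (km)

result = minimize(dagum_neg_loglik, x0=np.log([2.0, 4.0, 1.0]),
                  args=(sample,), method="Nelder-Mead")
a_hat, b_hat, p_hat = np.exp(result.x)
print(a_hat, b_hat, p_hat)
```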
Theoretical Discussion of Electron Transport Rate Constant at TCNQ / Ge and TiO2 System
NASA Astrophysics Data System (ADS)
Al-agealy, Hadi J. M.; Alshafaay, B.; Hassooni, Mohsin A.; Ashwiekh, Ahmed M.; Sadoon, Abbas K.; Majeed, Raad H.; Ghadhban, Rawnaq Q.; Mahdi, Shatha H.
2018-05-01
We have studied and estimated the electron transport rate constant at the TCNQ/Ge and TCNQ/TiO2 interfaces by means of the tunneling potential (TP), transport energy reorientation (TER), driving transition energy (DTE) and coupling coefficient constant. A simple quantum model for the transition processes was adapted for estimation and analysis, depending on the quantum states of the donor |α_D⟩ and acceptor |α_A⟩ and assuming continuum levels of the system. Evaluations were performed for the surfaces of Ge and TiO2 as well as for multilayer TCNQ. The results show an electron transfer feature of the TCNQ density of states and a semiconductor behavior. The electron rate constant results for both systems provide a good tool for selecting systems for applied devices. All these results indicate the …
Commercial Building Motor Protection Response Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Daniel P.; Kueck, John
2015-06-17
When voltages recover, motors may immediately reenergize and reaccelerate, or delay for a few minutes, or stay stalled. The estimated motor response is given for both the voltage sag magnitude and voltage sag duration. These response estimates are based on experience and available test data. Good data is available for voltage sag response for many components such as relays and contactors, but little data is available for both voltage sag and recovery response. The tables in Appendix A include data from recent voltage sag and recovery tests performed by SCE and BPA on air conditioners and energy management systems. The response of the motor can vary greatly depending on the type of protection and control. The time duration for the voltage sag consists of those times that are of interest for bulk power system modelers.
NASA Astrophysics Data System (ADS)
Oyaga Landa, Francisco Javier; Ronda Penacoba, Silvia; Deán-Ben, Xosé Luís.; Montero de Espinosa, Francisco; Razansky, Daniel
2018-02-01
Medium intensity focused ultrasound (MIFU) holds promise in important clinical applications. Generally, the aim in MIFU is to stimulate physiological mechanisms that reinforce healing responses, avoiding reaching temperatures that can cause permanent tissue damage. The outcome of interventions is then strongly affected by the temperature distribution in the treated region, and accurate monitoring represents a significant clinical need. In this work, we showcase the capacities of 4D optoacoustic imaging to monitor tissue heating during MIFU. The proposed method allows localizing the ultrasound focus, estimating the peak temperature and measuring the size of the heat-affected volume. Calibration experiments in a tissue-mimicking phantom demonstrate that the optoacoustically-estimated temperature accurately matches thermocouple readings. The good performance of the suggested approach in real tissues is further showcased in experiments with bovine muscle samples.
The beta Burr type X distribution properties with application.
Merovci, Faton; Khaleel, Mundher Abdullah; Ibrahim, Noor Akma; Shitan, Mahendran
2016-01-01
We develop a new continuous distribution called the beta Burr type X distribution that extends the Burr type X distribution. We provide a comprehensive mathematical treatment of this distribution. Furthermore, various structural properties of the new distribution are derived, including the moment generating function and the rth moment, thus generalizing some results in the literature. We also obtain expressions for the density, moment generating function and rth moment of the order statistics. We consider maximum likelihood estimation to estimate the parameters. Additionally, the asymptotic confidence intervals for the parameters are derived from the Fisher information matrix. Finally, a simulation study is carried out under varying sample sizes to assess the performance of this model. An illustration with a real dataset indicates that this new distribution can serve as a good alternative model for positive real data in many areas.
Siafarikas, F; Staer-Jensen, J; Braekken, I H; Bø, K; Engh, M Ellström
2013-03-01
To evaluate the learning process for acquiring three- and four-dimensional (3D/4D) transperineal ultrasound volumes of the levator hiatus (LH) dimensions at rest, during pelvic floor muscle (PFM) contraction and on Valsalva maneuver, and for analyzing the ultrasound volumes, as well as to perform an interobserver reliability study between two independent ultrasound examiners. This was a prospective study including 22 women. We monitored the learning process of an inexperienced examiner (IE) performing 3D/4D transperineal ultrasonography and analyzing the volumes. The examination included acquiring volumes during three PFM contractions and three Valsalva maneuvers. LH dimensions were determined in the axial plane. The learning process was documented by estimating agreement between the IE and an experienced examiner (E) using the intraclass correlation coefficient. Agreement was calculated in blocks of 10 ultrasound examinations and analyzed volumes. After the learning process was complete the interobserver reliability for the technique was calculated between these two independent examiners. For offline analysis of the first 10 ultrasound volumes obtained by E, good to very good agreement between E and IE was achieved for all LH measurements except for the left and right levator-urethra gap and pubic arc. For the next 10 analyzed volumes, agreement improved for all LH measurements. Volumes that had been obtained by IE and E were then re-evaluated by IE, and good to very good agreement was found for all LH measurements indicating consistency in volume acquisition. The interobserver reliability study showed excellent ICC values (ICC, 0.81-0.97) for all LH measurements except the pubic arc (ICC = 0.67). 3D/4D transperineal ultrasound is a reliable technique that can be learned in a short period of time. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.
Estimation of Electrically-Evoked Knee Torque from Mechanomyography Using Support Vector Regression.
Ibitoye, Morufu Olusola; Hamzaid, Nur Azah; Abdul Wahab, Ahmad Khairi; Hasnan, Nazirah; Olatunji, Sunday Olusanya; Davis, Glen M
2016-07-19
The difficulty of real-time muscle force or joint torque estimation during neuromuscular electrical stimulation (NMES) in physical therapy and exercise science has motivated recent research interest in torque estimation from other muscle characteristics. This study investigated the accuracy of a computational intelligence technique for estimating NMES-evoked knee extension torque based on the mechanomyographic signals (MMG) of contracting muscles recorded from eight healthy males. The knee torque was modelled via Support Vector Regression (SVR) due to its good generalization ability in related fields. Inputs to the proposed model were MMG amplitude characteristics, the level of electrical stimulation or contraction intensity, and knee angle. A Gaussian kernel function and its optimal parameters were identified with the best performance measure and were applied as the SVR kernel function to build an effective knee torque estimation model. To train and test the model, the data were partitioned into training (70%) and testing (30%) subsets, respectively. The SVR estimation accuracy, based on the coefficient of determination (R²) between the actual and the estimated torque values, was up to 94% and 89% for the training and testing cases, with root mean square errors (RMSE) of 9.48 and 12.95, respectively. The knee torque estimates obtained using SVR modelling agreed well with the experimental data from an isokinetic dynamometer. These findings support the realization of a closed-loop NMES system for functional tasks using MMG as the feedback signal source and an SVR algorithm for joint torque estimation.
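The SVR workflow described above (Gaussian kernel, 70/30 split, R² and RMSE scoring) can be sketched roughly as follows; the feature construction, hyperparameter values and surrogate data are assumptions and do not reproduce the MMG and dynamometer measurements.

```python
# Sketch of SVR-based torque estimation with a Gaussian (RBF) kernel and a
# 70/30 train/test split. Features, hyperparameters and the surrogate target
# are illustrative assumptions, not the experimental data.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(4)
n = 400
X = np.column_stack([
    rng.uniform(0, 1, n),      # MMG amplitude feature
    rng.uniform(10, 50, n),    # stimulation intensity (mA)
    rng.uniform(30, 90, n),    # knee angle (deg)
])
torque = 40 * X[:, 0] + 0.8 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, torque, test_size=0.3, random_state=0)
model = SVR(kernel="rbf", C=100.0, gamma="scale", epsilon=1.0).fit(X_train, y_train)

pred = model.predict(X_test)
print("R2:", r2_score(y_test, pred),
      "RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
```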
Is the Veterans Specific Activity Questionnaire Valid to Assess Older Adults Aerobic Fitness?
de Carvalho Bastone, Alessandra; de Souza Moreira, Bruno; Teixeira, Claudine Patrícia; Dias, João Marcos Domingues; Dias, Rosângela Corrêa
2016-01-01
Aerobic fitness in older adults is related to health status, incident disability, nursing home admission, and all-cause mortality. The most accurate quantification of aerobic fitness, expressed as peak oxygen consumption in mL·kg⁻¹·min⁻¹, is the cardiorespiratory exercise test; however, it is not feasible in all settings and might pose a risk to patients. The Veterans Specific Activity Questionnaire (VSAQ) is a 13-item self-administered symptom questionnaire that estimates aerobic fitness expressed in metabolic equivalents (METs) and has been validated in cardiovascular patients. The purpose of this study was to assess the validity and reliability of the VSAQ in older adults without specific health conditions. A methodological study with a cross-sectional design was conducted with 28 older adults (66-86 years). The VSAQ was administered on 3 occasions by 2 evaluators. Aerobic capacity in METs as measured by the VSAQ was compared with the METs found in an incremental shuttle walk test (ISWT) performed with a portable metabolic measurement system and with accelerometer data. The validity of the VSAQ was found to be moderate-to-good when compared with the METs and distance measured by the ISWT and with the moderate activity per day and steps per day obtained by accelerometry. The Bland-Altman graph analysis showed no values outside the limits of agreement, suggesting good precision between the METs estimated by the questionnaire and the METs measured by the ISWT. Also, the intrarater and interrater reliabilities of the instrument were good. The results showed that the VSAQ is a valuable tool to assess the aerobic fitness of older adults.
Parks, David R.; Khettabi, Faysal El; Chase, Eric; Hoffman, Robert A.; Perfetto, Stephen P.; Spidlen, Josef; Wood, James C.S.; Moore, Wayne A.; Brinkman, Ryan R.
2017-01-01
We developed a fully automated procedure for analyzing data from LED pulses and multi-level bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all of the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than for multi-level bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. PMID:28160404
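A weighted quadratic fit of bead-peak variance against mean signal, with standard errors, can be sketched as below; the numbers, the weighting choice and the reading of the linear term as the photon-statistics (photoelectron) contribution follow a generic detector model and are assumptions, not the flowQB implementation.

```python
# Sketch of a weighted quadratic fit of measurement variance against mean
# signal for a multi-level bead set, with standard errors from the covariance
# matrix. Data, weights and the interpretation of the coefficients follow a
# generic detector model (b0 ~ background, b1 ~ photoelectron term,
# b2 ~ illumination/gain CV^2); this is not the flowQB code.
import numpy as np

mean_signal = np.array([50., 200., 800., 3000., 12000., 48000.])   # bead peak means
variance = np.array([900., 1500., 4200., 14000., 60000., 3.2e5])   # bead peak variances
weights = 1.0 / variance                                            # illustrative weighting

X = np.column_stack([np.ones_like(mean_signal), mean_signal, mean_signal ** 2])
W = np.diag(weights)
cov = np.linalg.inv(X.T @ W @ X)
beta = cov @ X.T @ W @ variance            # weighted least-squares coefficients

# Scale the covariance by the weighted residual variance to obtain standard errors.
resid = variance - X @ beta
dof = len(variance) - 3
sigma2 = (weights * resid ** 2).sum() / dof
std_err = np.sqrt(np.diag(cov) * sigma2)
print(beta, std_err)
```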
NASA Astrophysics Data System (ADS)
Schrabback, T.; Erben, T.; Simon, P.; Miralles, J.-M.; Schneider, P.; Heymans, C.; Eifler, T.; Fosbury, R. A. E.; Freudling, W.; Hetterscheidt, M.; Hildebrandt, H.; Pirzkal, N.
2007-06-01
Context: This is the first paper of a series describing our measurement of weak lensing by large-scale structure, also termed “cosmic shear”, using archival observations from the Advanced Camera for Surveys (ACS) on board the Hubble Space Telescope (HST). Aims: In this work we present results from a pilot study testing the capabilities of the ACS for cosmic shear measurements with early parallel observations and presenting a re-analysis of HST/ACS data from the GEMS survey and the GOODS observations of the Chandra Deep Field South (CDFS). Methods: We describe the data reduction and, in particular, a new correction scheme for the time-dependent ACS point-spread-function (PSF) based on observations of stellar fields. This is currently the only technique which takes the full time variation of the PSF between individual ACS exposures into account. We estimate that our PSF correction scheme reduces the systematic contribution to the shear correlation functions due to PSF distortions to < 2 × 10⁻⁶ for galaxy fields containing at least 10 stars, which corresponds to ≲ 5% of the cosmological signal expected on scales of a single ACS field. Results: We perform a number of diagnostic tests indicating that the remaining level of systematics is consistent with zero for the GEMS and GOODS data, confirming the success of our PSF correction scheme. For the parallel data we detect a low level of remaining systematics which we interpret to be caused by a lack of sufficient dithering of the data. Combining the shear estimate of the GEMS and GOODS observations using 96 galaxies arcmin⁻² with the photometric redshift catalogue of the GOODS-MUSIC sample, we determine a local single-field estimate for the mass power spectrum normalisation σ₈,CDFS = 0.52 +0.11/-0.15 (stat) ± 0.07 (sys) (68% confidence assuming Gaussian cosmic variance) at a fixed matter density Ω_m = 0.3 for a ΛCDM cosmology, marginalising over the uncertainty of the Hubble parameter and the redshift distribution. We interpret this exceptionally low estimate to be due to a local under-density of the foreground structures in the CDFS. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archives at the Space Telescope European Coordinating Facility and the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
Estimation and uncertainty analysis of dose response in an inter-laboratory experiment
NASA Astrophysics Data System (ADS)
Toman, Blaza; Rösslein, Matthias; Elliott, John T.; Petersen, Elijah J.
2016-02-01
An inter-laboratory experiment for the evaluation of toxic effects of NH2-polystyrene nanoparticles on living human cancer cells was performed with five participating laboratories. Previously published results from nanocytotoxicity assays are often contradictory, mostly due to challenges related to producing a reliable cytotoxicity assay protocol for use with nanomaterials. Specific challenges include reproducibility in preparing nanoparticle dispersions, biological variability from testing living cell lines, and the potential for nano-related interference effects. In this experiment, such challenges were addressed by developing a detailed experimental protocol and using a specially designed 96-well plate layout which incorporated a range of control measurements to assess multiple factors such as nanomaterial interference, pipetting accuracy, cell seeding density, and instrument performance. Detailed data analysis of these control measurements showed that good control of the experiments was attained by all participants in most cases. The main measurement objective of the study was the estimation of a dose response relationship between concentration of the nanoparticles and metabolic activity of the living cells, under several experimental conditions. The dose curve estimation was achieved by embedding a three-parameter logistic curve in a three-level Bayesian hierarchical model, accounting for uncertainty due to all known experimental conditions as well as between-laboratory variability in a top-down manner. Computation was performed using Markov chain Monte Carlo methods. The fit of the model was evaluated using Bayesian posterior predictive probabilities and found to be satisfactory.
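For a single laboratory, a non-Bayesian least-squares fit of a three-parameter logistic dose-response curve gives a feel for the curve that is embedded in the hierarchical model; the functional form shown, the doses and the responses are illustrative assumptions, and the study's MCMC-based hierarchical estimation is not reproduced here.

```python
# Sketch of a three-parameter logistic dose-response fit for a single
# laboratory by nonlinear least squares. Doses, responses and the chosen
# parameterization are synthetic/illustrative; this is not the study's
# Bayesian hierarchical model.
import numpy as np
from scipy.optimize import curve_fit

def logistic3(dose, top, ec50, hill):
    """Metabolic activity as a function of nanoparticle concentration."""
    return top / (1.0 + (dose / ec50) ** hill)

dose = np.array([1, 3, 10, 30, 100, 300], dtype=float)      # ug/mL, illustrative
activity = np.array([0.98, 0.95, 0.82, 0.46, 0.15, 0.05])   # fraction of control

popt, pcov = curve_fit(logistic3, dose, activity, p0=[1.0, 30.0, 1.0])
perr = np.sqrt(np.diag(pcov))                                # standard errors
print(dict(zip(["top", "EC50", "hill"], popt)), perr)
```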
Influence of Weld Porosity on the Integrity of Marine Structures
1989-02-01
LEFM provides good estimates of long crack growth; methods developed by Leis [41] could be used to improve the accuracy of fatigue crack propagation ... resulted in good estimates for fatigue life and, when viewed in terms of stress, even better estimates. The absolute magnitude of the predictions are ...
Direct observations of a flare related coronal and solar wind disturbance
NASA Technical Reports Server (NTRS)
Gosling, J. T.; Hildner, E.; Macqueen, R. M.; Munro, R. H.; Poland, A. I.; Ross, C. L.
1975-01-01
Numerous mass ejections from the sun have been detected with orbiting coronagraphs. Here for the first time we document and discuss the direct association of a coronagraph observed mass ejection, which followed a 2B flare, with a large interplanetary shock wave disturbance observed at 1 AU. Estimates of the mass and energy content of the coronal disturbance are in reasonably good agreement with estimates of the mass and energy content of the solar wind disturbance at 1 AU. The energy estimates as well as the transit time of the disturbance are also in good agreement with numerical models of shock wave propagation in the solar wind.
A wireless sensor network based personnel positioning scheme in coal mines with blind areas.
Liu, Zhigao; Li, Chunwen; Wu, Danchen; Dai, Wenhan; Geng, Shaobo; Ding, Qingqing
2010-01-01
This paper proposes a novel personnel positioning scheme for a tunnel network with blind areas, which compared with most existing schemes offers both low-cost and high-precision. Based on the data models of tunnel networks, measurement networks and mobile miners, the global positioning method is divided into four steps: (1) calculate the real time personnel location in local areas using a location engine, and send it to the upper computer through the gateway; (2) correct any localization errors resulting from the underground tunnel environmental interference; (3) determine the global three-dimensional position by coordinate transformation; (4) estimate the personnel locations in the blind areas. A prototype system constructed to verify the positioning performance shows that the proposed positioning system has good reliability, scalability, and positioning performance. In particular, the static localization error of the positioning system is less than 2.4 m in the underground tunnel environment and the moving estimation error is below 4.5 m in the corridor environment. The system was operated continuously over three months without any failures.
SigrafW: An easy-to-use program for fitting enzyme kinetic data.
Leone, Francisco Assis; Baranauskas, José Augusto; Furriel, Rosa Prazeres Melo; Borin, Ivana Aparecida
2005-11-01
SigrafW is Windows-compatible software developed using the Microsoft® Visual Basic Studio program that uses the simplified Hill equation for fitting kinetic data from allosteric and Michaelian enzymes. SigrafW uses a modified Fibonacci search to calculate maximal velocity (V), the Hill coefficient (n), and the enzyme-substrate apparent dissociation constant (K). The estimation of V, K, and the sum of the squares of residuals is performed using a Wilkinson nonlinear regression at any Hill coefficient (n). In contrast to many currently available kinetic analysis programs, SigrafW shows several advantages for the determination of kinetic parameters of both hyperbolic and nonhyperbolic saturation curves. No initial estimates of the kinetic parameters are required, a measure of the goodness-of-the-fit for each calculation performed is provided, the nonlinear regression used for calculations eliminates the statistical bias inherent in linear transformations, and the software can be used for enzyme kinetic simulations either for educational or research purposes. Persons interested in receiving a free copy of the software should contact Dr. F. A. Leone. Copyright © 2005 International Union of Biochemistry and Molecular Biology, Inc.
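The simplified Hill equation at the core of SigrafW can be fitted by generic nonlinear least squares as sketched below; the parameterization v = V·S^n/(K + S^n), the hypothetical data, and the use of curve_fit in place of the Fibonacci/Wilkinson scheme are all assumptions of this sketch.

```python
# Sketch of fitting the Hill equation v = V*S^n / (K + S^n) by nonlinear least
# squares. This is one common parameterization of the simplified Hill equation,
# not necessarily SigrafW's exact Fibonacci/Wilkinson scheme; the data are
# hypothetical initial-velocity measurements.
import numpy as np
from scipy.optimize import curve_fit

def hill(s, V, K, n):
    return V * s ** n / (K + s ** n)

s = np.array([0.5, 1, 2, 5, 10, 20, 50], dtype=float)     # substrate concentration
v = np.array([0.08, 0.16, 0.30, 0.55, 0.75, 0.88, 0.96])  # initial velocity

(V_hat, K_hat, n_hat), pcov = curve_fit(hill, s, v, p0=[1.0, 5.0, 1.0])
residual_ss = np.sum((v - hill(s, V_hat, K_hat, n_hat)) ** 2)  # goodness-of-fit measure
print(V_hat, K_hat, n_hat, residual_ss)
```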
Subramanian, Swetha; Mast, T Douglas
2015-10-07
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
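A schematic of using an unscented Kalman filter as an inverse solver for static tissue parameters is sketched below with the filterpy library; the toy forward model standing in for the finite element simulation, the noise levels, the initial guesses and the "measured" ablation area are all illustrative assumptions.

```python
# Schematic UKF loop for recovering tissue parameters (specific heat c, thermal
# conductivity k, electrical conductivity sigma) from an ablation-area
# measurement. The forward model hx is a toy surrogate standing in for the
# finite element simulation; all numbers are illustrative assumptions.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):
    return x  # parameters are static; process noise allows slow adjustment

def hx(x):
    c, k, sigma = x                 # J/(kg*K), W/(m*K), S/m
    # Toy surrogate: predicted ablated area grows with electrical conductivity
    # and shrinks with heat capacity and conduction losses. NOT the FEM model.
    return np.array([8.0 * sigma / (c / 3500.0) - 2.0 * k])

points = MerweScaledSigmaPoints(n=3, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=1.0, fx=fx, hx=hx, points=points)
ukf.x = np.array([3600.0, 0.5, 0.3])          # initial parameter guesses
ukf.P = np.diag([200.0**2, 0.1**2, 0.05**2])
ukf.Q = np.diag([1.0, 1e-4, 1e-4])
ukf.R = np.array([[0.2**2]])                  # measurement noise on area (cm^2)

measured_area = 1.9                            # from gross tissue histology (illustrative)
for _ in range(30):                            # iterate the same measurement to convergence
    ukf.predict()
    ukf.update(np.array([measured_area]))
print(ukf.x)
```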
Rigid Body Rate Inference from Attitude Variation
NASA Technical Reports Server (NTRS)
Bar-Itzhack, I. Y.; Harman, Richard R.; Thienel, Julie K.
2006-01-01
In this paper we research the extraction of the angular rate vector from attitude information without differentiation, in particular from quaternion measurements. We show that instead of using a Kalman filter of some kind, it is possible to obtain good rate estimates, suitable for spacecraft attitude control loop damping, using simple feedback loops, thereby eliminating the need for recurrent covariance computation performed when a Kalman filter is used. This considerably simplifies the computations required for rate estimation in gyro-less spacecraft. Some interesting qualities of the Kalman filter gain are explored, proven and utilized. We examine two kinds of feedback loops, one with varying gain that is proportional to the well known Q matrix, which is computed using the measured quaternion, and the other type of feedback loop is one with constant coefficients. The latter type includes two kinds; namely, a proportional feedback loop, and a proportional-integral feedback loop. The various schemes are examined through simulations and their performance is compared. It is shown that all schemes are adequate for extracting the angular velocity at an accuracy suitable for control loop damping.
On the Extraction of Angular Velocity from Attitude Measurements
NASA Technical Reports Server (NTRS)
Bar-Itzhack, I. Y.; Harman, Richard R.; Thienel, Julie K.
2006-01-01
In this paper we research the extraction of the angular rate vector from attitude information without differentiation, in particular from quaternion measurements. We show that instead of using a Kalman filter of some kind, it is possible to obtain good rate estimates, suitable for spacecraft attitude control loop damping, using simple feedback loops, thereby eliminating the need for recurrent covariance computation performed when a Kalman filter is used. This considerably simplifies the computations required for rate estimation in gyro-less spacecraft. Some interesting qualities of the Kalman filter gain are explored, proven and utilized. We examine two kinds of feedback loops, one with varying gain that is proportional to the well known Q matrix, which is computed using the measured quaternion, and the other type of feedback loop is one with constant coefficients. The latter type includes two kinds; namely, a proportional feedback loop, and a proportional-integral feedback loop. The various schemes are examined through simulations and their performance is compared. It is shown that all schemes are adequate for extracting the angular velocity at an accuracy suitable for control loop damping.
The weighted function method: A handy tool for flood frequency analysis or just a curiosity?
NASA Astrophysics Data System (ADS)
Bogdanowicz, Ewa; Kochanek, Krzysztof; Strupczewski, Witold G.
2018-04-01
The idea of the Weighted Function (WF) method for estimation of the Pearson type 3 (Pe3) distribution, introduced by Ma in 1984, has been revised and successfully applied to the shifted inverse Gaussian (IGa3) distribution. The conditions for WF applicability to a shifted distribution have also been formulated. The accuracy of WF flood quantiles for both the Pe3 and IGa3 distributions was assessed by Monte Carlo simulations under true and false distribution assumptions versus the maximum likelihood (MLM), moment (MOM) and L-moments (LMM) methods. Three datasets of annual peak flows of Polish catchments serve as case studies to compare the results of WF, MOM, MLM and LMM performance on real flood data. For the hundred-year flood, the WF method revealed clear superiority only over the MLM, surpassing the MOM and especially the LMM, both for true and false distributional assumptions, with respect to relative bias and relative root mean square error values. Generally, the WF method performs well and constitutes a good alternative for the estimation of upper flood quantiles.
Hysteresis Modeling of Magnetic Shape Memory Alloy Actuator Based on Krasnosel'skii-Pokrovskii Model
Wang, Shoubin; Gao, Wei
2013-01-01
As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has the advantages of fast response and large deformation; however, its hysteresis nonlinearity restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. To demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least squares algorithm, respectively. The results of both identification algorithms demonstrate that the proposed approach establishes an effective and accurate hysteresis model for the MSMA actuator and provides a foundation for improving its control precision. PMID:23737730
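For orientation only, the sketch below shows plain recursive least squares identification of the operator weights, with the KP operator outputs abstracted into a regressor vector phi. The paper's gradient correction and variable step-size refinements are not reproduced here, and the forgetting factor and initialization are assumptions.

```python
import numpy as np

def rls_identify(phi_seq, y_seq, lam=0.99):
    """Plain recursive least squares for y ≈ phi·w, where phi would hold the
    outputs of the KP hysteresis operators and w their weights.
    lam is a forgetting factor; initial covariance is a large multiple of I."""
    n = phi_seq.shape[1]
    w = np.zeros(n)
    P = 1e3 * np.eye(n)                        # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        w = w + k * (y - phi @ w)              # correct weights with the error
        P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    return w
```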
A Wireless Sensor Network Based Personnel Positioning Scheme in Coal Mines with Blind Areas
Liu, Zhigao; Li, Chunwen; Wu, Danchen; Dai, Wenhan; Geng, Shaobo; Ding, Qingqing
2010-01-01
This paper proposes a novel personnel positioning scheme for a tunnel network with blind areas which, compared with most existing schemes, offers both low cost and high precision. Based on data models of the tunnel network, the measurement network and the mobile miners, the global positioning method is divided into four steps: (1) calculate the real-time personnel location in local areas using a location engine and send it to the upper computer through the gateway; (2) correct localization errors caused by interference from the underground tunnel environment; (3) determine the global three-dimensional position by coordinate transformation; (4) estimate the personnel locations in the blind areas. A prototype system constructed to verify the positioning performance shows that the proposed system has good reliability, scalability, and positioning performance. In particular, the static localization error is less than 2.4 m in the underground tunnel environment and the moving estimation error is below 4.5 m in the corridor environment. The system operated continuously for over three months without any failures. PMID:22163446
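A minimal sketch of step (3), under the assumption that each tunnel segment's entry point, heading and slope have been surveyed in advance: a position measured in the segment's local frame (distance along the axis plus lateral offset) is mapped into global three-dimensional coordinates. The data structure and names are hypothetical and not taken from the paper.

```python
import numpy as np

def local_to_global(s, offset, segment):
    """Convert a local tunnel-frame position (s = distance along the axis,
    offset = lateral offset) into global 3-D coordinates, given the surveyed
    entry point, heading (azimuth, rad) and slope (inclination, rad)."""
    x0, y0, z0 = segment["entry"]
    az, slope = segment["heading"], segment["slope"]
    axis = np.array([np.cos(slope) * np.cos(az),     # unit vector along tunnel
                     np.cos(slope) * np.sin(az),
                     np.sin(slope)])
    lateral = np.array([-np.sin(az), np.cos(az), 0.0])  # horizontal lateral axis
    return np.array([x0, y0, z0]) + s * axis + offset * lateral

# Illustrative segment survey and a point 42 m along it, 0.8 m off-axis
seg = {"entry": (500.0, 1200.0, -310.0),
       "heading": np.deg2rad(35.0), "slope": np.deg2rad(-2.0)}
print(local_to_global(42.0, 0.8, seg))
```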
A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.
Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.
1997-03-01
There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
A Bayesian Assessment of Seismic Semi-Periodicity Forecasts
NASA Astrophysics Data System (ADS)
Nava, F.; Quinteros, C.; Glowacka, E.; Frez, J.
2016-01-01
Among the schemes for earthquake forecasting, the search for semi-periodicity in the occurrence of large earthquakes in a given seismogenic region plays an important role. When considering earthquake forecasts based on semi-periodic sequence identification, the Bayesian formalism is a useful tool for: (1) assessing how well a given earthquake satisfies a previously made forecast; (2) re-evaluating the semi-periodic sequence probability; and (3) testing other prior estimations of the sequence probability. A comparison of Bayesian estimates with updated estimates of semi-periodic sequences that incorporate new data not used in the original estimates shows extremely good agreement, indicating that: (1) the probability that a semi-periodic sequence is not due to chance is an appropriate prior for the sequence probability; and (2) the Bayesian formalism does a very good job of estimating corrected semi-periodicity probabilities, using slightly less data than the updated estimates. The Bayesian approach is exemplified explicitly by its application to the Parkfield semi-periodic forecast, and results are given for its application to other forecasts in Japan and Venezuela.
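As a small worked example of point (2) above, the function below applies Bayes' theorem to re-evaluate the probability that an identified semi-periodic sequence is physical rather than a chance pattern, after observing whether the forecast window was satisfied. The hit probabilities and the printed numbers are illustrative only, not values from the Parkfield case.

```python
def update_sequence_probability(prior, p_hit_if_real, p_hit_by_chance, hit):
    """Bayes update for the probability that a semi-periodic sequence is real.
    prior           : prior probability the sequence is not due to chance
    p_hit_if_real   : prob. the earthquake falls in the forecast window if real
    p_hit_by_chance : prob. of a hit under a purely random occurrence model
    hit             : True if the observed earthquake satisfied the forecast"""
    like_real = p_hit_if_real if hit else (1.0 - p_hit_if_real)
    like_chance = p_hit_by_chance if hit else (1.0 - p_hit_by_chance)
    num = like_real * prior
    return num / (num + like_chance * (1.0 - prior))

# Illustrative numbers only
print(update_sequence_probability(prior=0.7, p_hit_if_real=0.8,
                                  p_hit_by_chance=0.25, hit=True))
```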
ERIC Educational Resources Information Center
Malcolm, Peter
2013-01-01
The ability to make good estimates is essential, as is the ability to assess the reasonableness of estimates. These abilities are becoming increasingly important as digital technologies transform the ways in which people work. To estimate is to provide an approximation to a problem that is mathematical in nature, and the ability to estimate is…
Hover and forward flight acoustics and performance of a small-scale helicopter rotor system
NASA Technical Reports Server (NTRS)
Kitaplioglu, C.; Shinoda, P.
1985-01-01
A 2.1-m diam., 1/6-scale model helicopter main rotor was tested in hover in the test section of the NASA Ames 40- by 80-Foot Wind Tunnel. Subsequently, it was tested in forward flight in the Ames 7- by 10-Foot Wind Tunnel. The primary objective of the tests was to obtain performance and noise data on a small-scale rotor at various thrust coefficients, tip Mach numbers, and, in the latter case, various advance ratios, for comparison with similar existing data on full-scale helicopter rotors. This comparison yielded a preliminary evaluation of the scaling of helicopter rotor performance and acoustic radiation in hover and in forward flight. Correlation between model-scale and full-scale performance and acoustics was quite good in hover. In forward flight, however, there were significant differences in both performance and acoustic characteristics. A secondary objective was to contribute to a database that will permit the estimation of facility effects on acoustic testing.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng
2018-05-31
For a nonlinear system, the cubature Kalman filter (CKF) and its square-root version are useful methods for solving state estimation problems, and both obtain good performance in Gaussian noise. However, their performance often degrades significantly in the presence of non-Gaussian noise, particularly when the measurements are contaminated by heavy-tailed impulsive noise. By utilizing the maximum correntropy criterion (MCC) instead of the traditional minimum mean square error (MMSE) criterion to improve robustness, a new square-root nonlinear filter is proposed in this study, named the maximum correntropy square-root cubature Kalman filter (MCSCKF). The new filter not only retains the advantages of the square-root cubature Kalman filter (SCKF), but also exhibits robust performance against heavy-tailed non-Gaussian noise. A judgment condition that avoids numerical problems is also given. The results of two illustrative examples, especially the SINS/GPS integrated systems, demonstrate the desirable performance of the proposed filter. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
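To illustrate why a correntropy-based criterion is robust (this is the general idea only, not the MCSCKF update itself): under a Gaussian kernel, the contribution of a residual saturates as the residual grows, so heavy-tailed outliers are down-weighted, whereas the quadratic MMSE cost lets a single impulsive error dominate. The kernel width sigma below is a tuning choice.

```python
import numpy as np

def correntropy(residuals, sigma=1.0):
    """Sample correntropy of the residuals under a Gaussian kernel.
    Maximizing this (MCC) effectively down-weights large, outlier-like
    residuals, unlike the quadratic MMSE cost."""
    r = np.asarray(residuals, dtype=float)
    return np.mean(np.exp(-r**2 / (2.0 * sigma**2)))

clean = np.array([0.1, -0.2, 0.05, 0.15])
outlier = np.array([0.1, -0.2, 0.05, 25.0])   # heavy-tailed impulsive error
for r in (clean, outlier):
    print("MSE cost:", np.mean(r**2), " correntropy:", correntropy(r))
```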
Evaluation of seven rapid tests for syphilis available in Brazil using defibrinated plasma panels.
Bazzo, Maria Luiza; da Motta, Leonardo Rapone; Rudolf-Oliveira, Renata Cristina Messores; Bigolin, Alisson; Golfetto, Lisléia; Mesquita, Fábio; Benzaken, Adele Schwartz; Gaspar, Pamela Cristina; Pires, Ana Flavia Nacif P Coelho; Ferreira Júnior, Orlando da Costa; Franchini, Miriam
2017-12-01
In 2012, the WHO estimated that 6 million new cases of syphilis would occur worldwide per year, including 937 000 in Brazil. Early diagnosis and treatment of syphilis are essential to reduce morbidity and prevent transmission. The availability of rapid tests (RTs) for this diagnosis means that testing can be performed more quickly, as a point-of-care test, even in non-laboratory environments, and requires only simple technical training for antibody detection. The objective of this study was to evaluate the performance and operational aspects of seven RTs for syphilis commercially available in Brazil. Seven rapid treponemal tests were evaluated for sensitivity, specificity, accuracy and kappa value against a panel composed of 493 members. The operational performance of each assay was also determined. The seven RTs showed sensitivity ranging from 94.5% to 100% when compared with the reference tests, and specificity between 91.5% and 100%. All the RTs evaluated presented good operational performance, and only one failed to meet the minimum specificity defined by Brazil's Ministry of Health. The professionals who performed the tests considered them easy to use and interpret. This evaluation is important for making informed choices of tests to be used in the Brazilian Unified Health System. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
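The evaluation metrics named above can be computed directly from a 2×2 table of rapid-test results against the reference standard, as sketched below; the counts in the example are illustrative only, not data from the Brazilian panel.

```python
def binary_test_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy and Cohen's kappa for a rapid test
    scored against a reference standard."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / n
    # chance agreement expected from the marginal totals
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (acc - p_e) / (1.0 - p_e)
    return sens, spec, acc, kappa

# Illustrative counts only
print(binary_test_metrics(tp=230, fp=10, fn=8, tn=245))
```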
Lera, Lydia; Ángel, Bárbara; Sánchez, Hugo; Picrin, Yaisy; Hormazabal, María José; Quiero, Andrea; Albala, Cecilia
2014-09-28
To estimate and validate cut-off points of the skeletal muscle mass index (SMI) in the Chilean population, for use in the algorithm for the diagnosis of sarcopenia developed by the European Working Group on Sarcopenia in Older People (EWGSOP). Secondary analysis of cross-sectional data on 440 older Chilean subjects to estimate cut-off points of SMI determined by DEXA and predicted by an anthropometric equation; afterward, a cross-sectional validation in a sample of 164 older people was performed. Anthropometric measures, self-reported health status, physical performance tests and DEXA were carried out. Decreased muscle strength was defined as handgrip strength <15 kg in women and <27 kg in men. Cut-off points of SMI were defined as values under the 20th percentile for DEXA measures and estimated through ROC curves for the anthropometric model. Biological validity of the algorithm was tested by contrasting the diagnosis with physical performance tests and functionality. Cut-off points of SMI obtained by DEXA were 7.19 kg/m² in men and 5.77 kg/m² in women, and 7.45 kg/m² and 5.88 kg/m², respectively, for the values predicted by the model. Sensitivity and specificity of the estimations versus DEXA measures were 80% and 92% in men and 77% and 89% in women. We obtained cut-off points of SMI for DEXA and for a prediction equation for older Chilean adults, with good sensitivity and specificity relative to measurement by DEXA. This will allow the EWGSOP algorithm to be applied to the early diagnosis of sarcopenia and the development of programs to prevent, delay or reverse this syndrome. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
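One common way to derive such a cut-off from an ROC analysis is to choose the threshold maximizing the Youden index (sensitivity + specificity − 1). The sketch below does this for a predicted SMI against a binary low-muscle-mass reference, treating low SMI as test-positive; it illustrates the general technique, not necessarily the exact procedure used in the study, and the data are synthetic.

```python
import numpy as np

def youden_cutoff(smi, low_mass_by_dexa):
    """Choose the SMI cut-off maximizing the Youden index (sens + spec - 1).
    A subject is classified positive when smi <= cut-off."""
    smi = np.asarray(smi, float)
    y = np.asarray(low_mass_by_dexa, bool)
    best, best_j = None, -np.inf
    for c in np.unique(smi):
        pred = smi <= c
        sens = np.mean(pred[y])
        spec = np.mean(~pred[~y])
        j = sens + spec - 1.0
        if j > best_j:
            best, best_j = (c, sens, spec), j
    return best

# Synthetic data for illustration
rng = np.random.default_rng(1)
smi = np.concatenate([rng.normal(7.8, 0.8, 80), rng.normal(6.6, 0.7, 40)])
label = np.concatenate([np.zeros(80, dtype=int), np.ones(40, dtype=int)])
print(youden_cutoff(smi, label))   # (cut-off, sensitivity, specificity)
```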
Using remote sensing for validation of a large scale hydrologic and hydrodynamic model in the Amazon
NASA Astrophysics Data System (ADS)
Paiva, R. C.; Bonnet, M.; Buarque, D. C.; Collischonn, W.; Frappart, F.; Mendes, C. B.
2011-12-01
We present the validation of the large-scale, catchment-based hydrological MGB-IPH model in the Amazon River basin. In this model, physically based equations are used to simulate the hydrological processes, such as the Penman-Monteith method to estimate evapotranspiration, or the Moore and Clarke infiltration model. A new feature recently introduced in the model is a 1D hydrodynamic module for river routing, which uses the full Saint-Venant equations and a simple floodplain storage model. River and floodplain geometry parameters are extracted from the SRTM DEM using specially developed GIS algorithms that provide catchment discretization, estimation of river cross-section geometry, and water storage volume variations in the floodplains. The model was forced using satellite-derived daily rainfall (TRMM 3B42), calibrated against discharge data, and first validated using daily discharges and water levels from 111 and 69 stream gauges, respectively. Then, we performed a validation against remote sensing derived hydrological products, including (i) monthly Terrestrial Water Storage (TWS) anomalies derived from GRACE, (ii) river water levels derived from ENVISAT satellite altimetry data (212 virtual stations from Santos da Silva et al., 2010) and (iii) a multi-satellite monthly global inundation extent dataset at ~25 x 25 km spatial resolution (Papa et al., 2010). Validation against river discharges shows good performance of the MGB-IPH model. For 70% of the stream gauges, the Nash-Sutcliffe efficiency index (ENS) is higher than 0.6, and at Óbidos, close to the Amazon River outlet, ENS equals 0.9 and the model bias equals -4.6%. The largest errors are located in drainage areas outside Brazil, and we speculate that this is due to the poor quality of rainfall datasets in these poorly monitored and/or mountainous areas. Validation against water levels shows that the model performs well in the major tributaries. For 60% of the virtual stations, ENS is higher than 0.6. However, the largest errors are again located in drainage areas outside Brazil, mostly the Japurá River, and in the lower Amazon River. In the latter, correlation with observations is high but the model underestimates the amplitude of water levels. We also found a large bias between model and ENVISAT water levels, ranging from -3 to -15 m. The model provided TWS in good accordance with GRACE estimates; the ENS value for TWS over the whole Amazon equals 0.93. We also analyzed results in 21 sub-regions of 4 x 4°. ENS is smaller than 0.8 in only 5 areas, found mostly in the northwest part of the Amazon, possibly due to the same errors reported in the discharge results. Flood extent validation is under development, but a previous analysis in the Brazilian part of the Solimões River basin suggests good model performance. The authors are grateful for the financial and operational support from the Brazilian agencies FINEP, CNPq and ANA and from the French observatories HYBAM and SOERE RBV.
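The two fit statistics quoted above can be computed from paired observed and simulated discharge series as sketched below; this is simply the standard definition of the Nash-Sutcliffe efficiency and of a relative volume bias, not code from the MGB-IPH model.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    ENS = 1 is a perfect fit; ENS <= 0 means the model does no better than
    the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def relative_bias(obs, sim):
    """Relative volume bias, in percent."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (sim.sum() - obs.sum()) / obs.sum()
```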
Ultrasound elastic tensor imaging: comparison with MR diffusion tensor imaging in the myocardium
NASA Astrophysics Data System (ADS)
Lee, Wei-Ning; Larrat, Benoît; Pernot, Mathieu; Tanter, Mickaël
2012-08-01
We have previously proven the feasibility of ultrasound-based shear wave imaging (SWI) to non-invasively characterize myocardial fiber orientation in both in vitro porcine and in vivo ovine hearts, with SWI-estimated results in good correlation with histology. In this study, we proposed a new and robust fiber angle estimation method through a tensor-based approach for SWI, coined elastic tensor imaging (ETI), and compared it with magnetic resonance diffusion tensor imaging (DTI), a current gold standard and extensively reported non-invasive imaging technique for mapping fiber architecture. Fresh porcine (n = 5) and ovine (n = 5) myocardial samples (20 × 20 × 30 mm³) were studied. ETI was first performed to generate shear waves and to acquire the wave events at an ultrafast frame rate (8000 fps). A 2.8 MHz phased array probe (pitch = 0.28 mm), connected to a prototype ultrasound scanner, was mounted on a customized MRI-compatible rotation device, which allowed both rotation of the probe from -90° to 90° at 5° increments and co-registration between the two imaging modalities. Transmural shear wave speed was first estimated for all realized propagation directions. The fiber angles were then determined from the shear wave speed map using the least-squares method and eigen decomposition. The test myocardial sample, together with the rotation device, was then placed inside a 7T MRI scanner. Diffusion was encoded in six directions. A total of 270 diffusion-weighted images (b = 1000 s/mm², FOV = 30 mm, matrix size = 60 × 64, TR = 6 s, TE = 19 ms, 24 averages) and 45 B0 images were acquired in 14 h 30 min. The fiber structure was analyzed with the fiber-tracking module in the MedINRIA software. The fiber orientation in the overlapping myocardial region accessed by both ETI and DTI was then compared, thanks to the co-registered imaging system. Results from all ten samples showed good correlation (r² = 0.81, p < 0.0001) and good agreement (3.05° bias) between ETI and DTI fiber angle estimates. The average ETI-estimated fractional anisotropy (FA) values decreased from subendocardium to subepicardium (p < 0.05, unpaired, one-tailed t-test, N = 10) by 33%, whereas the corresponding DTI-estimated FA values presented a change of -10% (p > 0.05, unpaired, one-tailed t-test, N = 10). In conclusion, we have demonstrated that the fiber orientation estimated by ETI, which assesses the shear wave speed (and thus the stiffness), is comparable to that measured by DTI, which evaluates the preferred direction of water diffusion, and have validated this concept within the myocardium. Moreover, ETI was shown capable of mapping the transmural fiber angles with as few as seven shear wave propagation directions.
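A sketch of the tensor-style step described above, under simplified assumptions: fit a 2×2 symmetric tensor to shear-wave speeds measured at several probe angles by least squares, take the principal eigenvector as the fiber direction, and form a two-dimensional fractional-anisotropy-like index from the eigenvalues. This mirrors the idea of ETI rather than the authors' exact estimator, and the synthetic data (seven probe angles, fibers at 30°) are for illustration only.

```python
import numpy as np

def fit_speed_tensor(angles_rad, speeds):
    """Least-squares fit of a 2x2 symmetric tensor M such that
    c(theta)^2 ≈ u(theta)^T M u(theta), with u = [cos, sin] the propagation
    direction. Returns the fiber angle (principal eigenvector, in degrees,
    folded into (-90, 90]) and a 2-D fractional-anisotropy analogue."""
    th = np.asarray(angles_rad, float)
    c2 = np.asarray(speeds, float) ** 2
    A = np.column_stack([np.cos(th)**2, 2*np.sin(th)*np.cos(th), np.sin(th)**2])
    mxx, mxy, myy = np.linalg.lstsq(A, c2, rcond=None)[0]
    vals, vecs = np.linalg.eigh(np.array([[mxx, mxy], [mxy, myy]]))
    v = vecs[:, np.argmax(vals)]                       # fastest direction
    angle = np.degrees(np.arctan2(v[1], v[0]))
    angle = (angle + 90.0) % 180.0 - 90.0              # fold into (-90, 90]
    fa = abs(vals[1] - vals[0]) / np.sqrt(np.sum(vals**2))
    return angle, fa

# Synthetic check: fibers at 30 degrees, seven probe angles
truth = np.deg2rad(30.0)
th = np.deg2rad(np.arange(-90, 91, 30))
c = np.sqrt(5.0*np.cos(th - truth)**2 + 1.5*np.sin(th - truth)**2)
print(fit_speed_tensor(th, c))
```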
Speed Sensorless Induction Motor Drives for Electrical Actuators: Schemes, Trends and Tradeoffs
NASA Technical Reports Server (NTRS)
Elbuluk, Malik E.; Kankam, M. David
1997-01-01
For a decade, induction motor drive-based electrical actuators have been under investigation as potential replacements for the conventional hydraulic and pneumatic actuators in aircraft. Advantages of electric actuators include lower weight and size, reduced maintenance and operating costs, improved safety due to the elimination of hazardous fluids and high-pressure hydraulic and pneumatic actuators, and increased efficiency. Recently, the emphasis of research on induction motor drives has been on sensorless vector control, which eliminates flux and speed sensors mounted on the motor. The development of effective speed and flux estimators has allowed good rotor flux-oriented (RFO) performance at all speeds except those close to zero. Sensorless control has improved motor performance compared to Volts/Hertz (constant flux) control. This report evaluates documented schemes for speed sensorless drives and discusses the trends and tradeoffs involved in selecting a particular scheme. These schemes combine the attributes of direct and indirect field-oriented control (FOC) or use model reference adaptive systems (MRAS) with a speed-dependent current model for flux estimation that tracks the voltage-model-based flux estimator. Many factors are important in comparing the effectiveness of a speed sensorless scheme, among them wide speed range capability, motor parameter insensitivity and noise reduction. Although a number of schemes have been proposed for speed estimation, zero-speed FOC with robustness against parameter variations remains an open area of research for speed sensorless control.
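A compact sketch of the MRAS idea mentioned above: the voltage-model rotor flux (reference, speed-independent) is compared with the current-model rotor flux (adaptive, speed-dependent), and a PI law adapts the speed estimate so that the two flux vectors align. The discretization, parameter names, sign conventions and gains below are placeholders that would need tuning; this illustrates the structure, not a production estimator.

```python
import numpy as np

def mras_speed_step(v_s, i_s, state, p, dt):
    """One Euler step of a classical rotor-flux MRAS speed estimator in the
    stationary frame, using complex alpha-beta quantities.
    Reference (voltage) model: rotor flux from stator voltage and current.
    Adaptive (current) model : rotor flux that depends on the speed estimate.
    The cross product of the two flux vectors drives a PI adaptation law."""
    sigma = 1.0 - p["Lm"]**2 / (p["Ls"] * p["Lr"])
    Tr = p["Lr"] / p["Rr"]

    # voltage model (reference): integrate stator flux, derive rotor flux
    state["psi_s"] += (v_s - p["Rs"] * i_s) * dt
    psi_r_v = (p["Lr"] / p["Lm"]) * (state["psi_s"] - sigma * p["Ls"] * i_s)

    # current model (adaptive): depends on the estimated speed
    w = state["w_hat"]
    dpsi = (p["Lm"] / Tr) * i_s - (1.0 / Tr - 1j * w) * state["psi_r_i"]
    state["psi_r_i"] += dpsi * dt

    # adaptation: align the two flux estimates through a PI law
    eps = np.imag(psi_r_v * np.conj(state["psi_r_i"]))
    state["int_eps"] += eps * dt
    state["w_hat"] = p["Kp"] * eps + p["Ki"] * state["int_eps"]
    return state

# Example initialization with placeholder machine parameters and gains
params = dict(Rs=1.2, Rr=1.0, Ls=0.15, Lr=0.15, Lm=0.14, Kp=50.0, Ki=2000.0)
state = dict(psi_s=0j, psi_r_i=0j, w_hat=0.0, int_eps=0.0)
```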
Satheesha, T. Y.; Prasad, M. N. Giri; Dhruve, Kashyap D.
2017-01-01
Melanoma mortality rates are the highest amongst skin cancer patients. Melanoma is life threatening when it grows beyond the dermis of the skin; hence, depth is an important factor in diagnosing melanoma. This paper introduces a non-invasive computerized dermoscopy system that considers the estimated depth of skin lesions for diagnosis. A 3-D skin lesion reconstruction technique using the estimated depth obtained from regular dermoscopic images is presented. On the basis of the 3-D reconstruction, depth and 3-D shape features are extracted. In addition to 3-D features, regular color, texture, and 2-D shape features are also extracted; feature extraction is critical to achieving accurate results. Apart from melanoma and in-situ melanoma, the proposed system is designed to diagnose basal cell carcinoma, blue nevus, dermatofibroma, haemangioma, seborrhoeic keratosis, and normal mole lesions. For experimental evaluation, the PH2, ISIC: Melanoma Project, and ATLAS dermoscopy data sets are considered. Different feature set combinations are considered and their performance is evaluated. Significant performance improvement is reported after the inclusion of estimated depth and 3-D features. Good classification scores of sensitivity = 96% and specificity = 97% on the PH2 data set, and sensitivity = 98% and specificity = 99% on the ATLAS data set, are achieved. Experiments conducted to estimate tumor depth from the 3-D lesion reconstruction are also presented. The experimental results show that the proposed computerized dermoscopy system is efficient and can be used to diagnose varied skin lesion dermoscopy images. PMID:28512610
NASA Astrophysics Data System (ADS)
Gill, G.; Sakrani, T.; Cheng, W.; Zhou, J.
2017-09-01
Many studies have utilized the spatial correlations among traffic crash data to develop crash prediction models with the aim of investigating influential factors or predicting crash counts at different sites. Spatial correlation has been accounted for through different forms of weight matrices, which improves the estimation performance of the models, but the weight matrices have rarely been compared in terms of prediction accuracy for crash counts. This study compares two different approaches for modelling the spatial correlations among crash data at the macro level (county). Multivariate full Bayesian crash prediction models were developed using Decay-50 (distance-based) and Queen-1 (adjacency-based) weight matrices for simultaneous estimation of crash counts for four different modes: vehicle, motorcycle, bike, and pedestrian. The goodness-of-fit and several criteria for crash count prediction accuracy revealed the superiority of Decay-50 over Queen-1. Decay-50 differs from Queen-1 in its selection of neighbors and its more robust spatial weight structure, which provides the flexibility to accommodate spatially correlated crash data. The consistently better prediction accuracy of Decay-50 further bolstered its superiority. Although the data collection effort needed to gather centroid distances among counties for Decay-50 may appear to be a downside, the model has a significant edge in fitting the crash data without losing the computational simplicity of estimating crash counts.
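A sketch of the two weighting schemes compared above: an adjacency-based first-order (Queen-1 style) matrix and a distance-decay matrix that only connects counties whose centroids lie within a cutoff. Taking the cutoff as 50 distance units and a 1/d decay is an assumption about what "Decay-50" denotes; both matrices are row-standardized, as is common practice.

```python
import numpy as np

def queen1_weights(adjacency):
    """Row-standardized first-order adjacency (Queen-1 style) weight matrix.
    'adjacency' is a boolean matrix, True where counties share a border."""
    W = adjacency.astype(float)
    np.fill_diagonal(W, 0.0)
    rs = W.sum(axis=1, keepdims=True)
    return np.divide(W, rs, out=np.zeros_like(W), where=rs > 0)

def decay_weights(centroids, cutoff=50.0):
    """Row-standardized distance-decay weights: counties whose centroids lie
    within 'cutoff' get weight 1/distance, others get 0. The cutoff value and
    the 1/d decay form are assumptions for illustration."""
    c = np.asarray(centroids, float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        W = np.where((d > 0) & (d <= cutoff), 1.0 / d, 0.0)
    rs = W.sum(axis=1, keepdims=True)
    return np.divide(W, rs, out=np.zeros_like(W), where=rs > 0)
```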