NASA Astrophysics Data System (ADS)
Kwon, Ki-Won; Cho, Yongsoo
This letter presents a simple joint estimation method for residual frequency offset (RFO) and sampling frequency offset (SFO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.
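A minimal sketch of the pilot-phase idea behind such joint estimators (function and variable names are hypothetical, and the guard-interval normalization factor is omitted); the letter's CP-subset selection rule itself is not reproduced here:

```python
import numpy as np

def estimate_rfo_sfo(Y_prev, Y_curr, cp_bins):
    """Sketch: joint RFO/SFO estimate from the phase rotation of continual
    pilots between two consecutive OFDM symbols. RFO adds a common phase
    increment; SFO adds an increment proportional to the subcarrier index,
    so a straight-line fit over the pilot indices separates the two."""
    dphi = np.angle(Y_curr[cp_bins] * np.conj(Y_prev[cp_bins]))
    slope, intercept = np.polyfit(cp_bins, dphi, 1)   # line fit over index k
    return intercept / (2 * np.pi), slope / (2 * np.pi)  # (rfo, sfo), normalized
```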
Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi
2016-02-01
Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimated excretion by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employed linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with an increased number of spot urine samples at 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
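A minimal sketch of the 'simple mean method' arithmetic described above, under the assumption that units are kept consistent (all names and the example values are hypothetical):

```python
def simple_mean_method(spot_na, spot_cr, predicted_cr24):
    """Average the spot Na/creatinine ratios (one per available sample)
    and scale by predicted 24-h creatinine excretion; spot_na and spot_cr
    must share a volume basis, and spot_cr must match predicted_cr24's units."""
    ratios = [na / cr for na, cr in zip(spot_na, spot_cr)]
    return sum(ratios) / len(ratios) * predicted_cr24

# e.g., three spot samples from one participant (hypothetical values)
print(simple_mean_method([120, 95, 140], [8.0, 6.5, 9.0], 10.5))  # mmol/day
```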
Method for estimating the morphological significance of simple forms of crystals from X-ray data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treivus, E. B., E-mail: sbobr1@bk.ru
2010-09-15
When developing V.I. Mikheev and I.I. Shafranovskii's method for estimating the morphological significance of faces of different simple forms from X-ray reflection intensities, a way to approximately evaluate the morphological significance of simple forms on crystals from the structure amplitudes of the corresponding atomic planes is proposed. The potential for this approach is demonstrated by the examples of marcasite and zircon.
NASA Technical Reports Server (NTRS)
Campbell, John P; Mckinney, Marion O
1952-01-01
A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.
Simplified Life-Cycle Cost Estimation
NASA Technical Reports Server (NTRS)
Remer, D. S.; Lorden, G.; Eisenberger, I.
1983-01-01
Simple method for life-cycle cost (LCC) estimation avoids pitfalls inherent in formulations requiring separate estimates of inflation and interest rates. Method's validity depends on the observation that interest and inflation rates closely track each other.
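A sketch of the observation the method rests on, with hypothetical function names: if interest and inflation largely cancel, the present-value machinery collapses to a plain sum of constant-dollar costs.

```python
def lcc_constant_dollar(annual_costs):
    # interest ~ inflation implies a real discount rate near zero, so the
    # life-cycle cost reduces to a sum of today's-dollar annual costs
    return sum(annual_costs)

def lcc_nominal(annual_costs, interest, inflation):
    # the conventional formulation the simplified method avoids: two
    # separately estimated rates whose errors compound
    return sum(c * (1 + inflation)**t / (1 + interest)**t
               for t, c in enumerate(annual_costs))
```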
Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros
2014-01-01
When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601
Bootstrap Methods: A Very Leisurely Look.
ERIC Educational Resources Information Center
Hinkle, Dennis E.; Winstead, Wayland H.
The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
Evaporation estimation of rift valley lakes: comparison of models.
Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe
2009-01-01
Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the World. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. A remote sensing approach can also be applied to large areas where meteorological data are not available and field-scale data collection is costly, time consuming and difficult. The applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a surface energy balance approach using remote sensing was studied in the Rift Valley region of Ethiopia. The Simple Method and remote sensing-based lake evaporation estimates were compared to the Penman, Energy balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate good correspondence of the model outputs to those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large scale applications to understand the spatial variation of the latent heat flux.
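A sketch of the Abtew Simple Method equation as commonly cited (ET = K·Rs/λ); the coefficient and units below are the usual published ones, not necessarily those calibrated in this study:

```python
def abtew_simple_et(rs, k=0.53, latent_heat=2.45):
    """Abtew 'Simple Method': ET = K * Rs / lambda, with Rs incoming solar
    radiation (MJ m-2 day-1), lambda the latent heat of vaporization
    (MJ kg-1), and K dimensionless (~0.53 in the original work);
    the result is in mm/day."""
    return k * rs / latent_heat

print(abtew_simple_et(22.0))  # ~4.8 mm/day for a clear tropical-latitude day
```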
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
Nelson, Jonathan M.; Kinzel, Paul J.; Schmeeckle, Mark Walter; McDonald, Richard R.; Minear, Justin T.
2016-01-01
Noncontact methods for measuring water-surface elevation and velocity in laboratory flumes and rivers are presented with examples. Water-surface elevations are measured using an array of acoustic transducers in the laboratory and using laser scanning in field situations. Water-surface velocities are based on using particle image velocimetry or other machine vision techniques on infrared video of the water surface. Using spatial and temporal averaging, results from these methods provide information that can be used to develop estimates of discharge for flows over known bathymetry. Making such estimates requires relating water-surface velocities to vertically averaged velocities; the methods here use standard relations. To examine where these relations break down, laboratory data for flows over simple bumps of three amplitudes are evaluated. As anticipated, discharges determined from surface information can have large errors where nonhydrostatic effects are large. In addition to investigating and characterizing this potential error in estimating discharge, a simple method for correction of the issue is presented. With a simple correction based on bed gradient along the flow direction, remotely sensed estimates of discharge appear to be viable.
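A sketch of the standard surface-to-mean velocity conversion such discharge estimates rely on (the velocity index of about 0.85 is a customary logarithmic-profile value, and the station-wise summation is schematic):

```python
import numpy as np

def discharge_from_surface(v_surface, depth, width, alpha=0.85):
    """Convert remotely sensed surface velocities to discharge with the
    velocity-index relation v_mean ~ alpha * v_surface; each element is one
    lateral station of known bathymetry. As noted above, this relation
    degrades where nonhydrostatic effects (e.g., flow over bumps) are large."""
    v_mean = alpha * np.asarray(v_surface, float)
    return np.sum(v_mean * np.asarray(depth, float) * np.asarray(width, float))
```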
Tracking of electrochemical impedance of batteries
NASA Astrophysics Data System (ADS)
Piret, H.; Granjon, P.; Guillet, N.; Cattin, V.
2016-04-01
This paper presents an evolutionary battery impedance estimation method, which can be easily embedded in vehicles or nomad devices. The proposed method not only allows an accurate frequency impedance estimation, but also a tracking of its temporal evolution, contrary to classical electrochemical impedance spectroscopy methods. Taking into account constraints of cost and complexity, we propose to use the existing electronics of current control to perform a frequency evolutionary estimation of the electrochemical impedance. The developed method uses a simple wideband input signal, and relies on a recursive local average of Fourier transforms. The averaging is controlled by a single parameter, managing a trade-off between tracking and estimation performance. This normalized parameter allows the behavior of the proposed estimator to be adapted correctly to the variations of the impedance. The advantage of the proposed method is twofold: the method is easy to embed into a simple electronic circuit, and the battery impedance estimator is evolutionary. The ability of the method to monitor the impedance over time is demonstrated on a simulator, and on a real lithium-ion battery, on which a repeatability study is carried out. The experiments reveal good tracking results, and estimation performance as accurate as the usual laboratory approaches.
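A sketch of a recursive local average of Fourier transforms controlled by a single forgetting factor; the spectral-ratio form and names here are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

class ImpedanceTracker:
    """Z(f) as the ratio of recursively averaged cross- and auto-spectra of
    voltage and current frames; one forgetting factor trades tracking speed
    against estimation variance."""
    def __init__(self, n_fft, alpha=0.95):
        self.alpha = alpha                       # the single tuning parameter
        self.S_vi = np.zeros(n_fft // 2 + 1, complex)
        self.S_ii = np.zeros(n_fft // 2 + 1)

    def update(self, v_frame, i_frame):
        V, I = np.fft.rfft(v_frame), np.fft.rfft(i_frame)
        self.S_vi = self.alpha * self.S_vi + (1 - self.alpha) * V * np.conj(I)
        self.S_ii = self.alpha * self.S_ii + (1 - self.alpha) * np.abs(I)**2
        return self.S_vi / np.maximum(self.S_ii, 1e-12)  # current Z(f) estimate
```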
Methods for estimating 2D cloud size distributions from 1D observations
Romps, David M.; Vogelmann, Andrew M.
2017-08-04
The two-dimensional (2D) size distribution of clouds in the horizontal plane plays a central role in the calculation of cloud cover, cloud radiative forcing, convective entrainment rates, and the likelihood of precipitation. Here, a simple method is proposed for calculating the area-weighted mean cloud size and for approximating the 2D size distribution from the 1D cloud chord lengths measured by aircraft and vertically pointing lidar and radar. This simple method (which is exact for square clouds) compares favorably against the inverse Abel transform (which is exact for circular clouds) in the context of theoretical size distributions. Both methods also perform well when used to predict the size distribution of real clouds from a Landsat scene. When applied to a large number of Landsat scenes, the simple method is able to accurately estimate the mean cloud size. Finally, as a demonstration, the methods are applied to aircraft measurements of shallow cumuli during the RACORO campaign, which then allow for an estimate of the true area-weighted mean cloud size.
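A sketch of the square-cloud logic (a transect hits a cloud with probability proportional to its size, and each chord then equals the side), which yields a simple chord-to-size correction; the published estimator may differ by geometric factors:

```python
import numpy as np

def area_weighted_mean_size(chords):
    # For square clouds every chord equals the side s and a transect
    # intersects a cloud with probability proportional to s, so the
    # area-weighted mean size reduces to sum(l^2)/sum(l) over chords.
    chords = np.asarray(chords, float)
    return chords.dot(chords) / chords.sum()

def size_distribution(chords, bins):
    # undo the length-proportional sampling bias: n_size(s) ~ n_chord(s)/s
    hist, edges = np.histogram(chords, bins=bins)
    centers = 0.5 * (edges[1:] + edges[:-1])
    n = hist / centers
    return centers, n / n.sum()      # per-bin relative size distribution
```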
Estimating Measures of Pass-Fail Reliability from Parallel Half-Tests.
ERIC Educational Resources Information Center
Woodruff, David J.; Sawyer, Richard L.
Two methods for estimating measures of pass-fail reliability are derived, by which both theta and kappa may be estimated from a single test administration. The methods are computationally simple, and both are based on the Spearman-Brown formula for estimating stepped-up reliability. The non-distributional…
A simple method for estimating frequency response corrections for eddy covariance systems
W. J. Massman
2000-01-01
A simple analytical formula is developed for estimating the frequency attenuation of eddy covariance fluxes due to sensor response, path-length averaging, sensor separation, signal processing, and flux averaging periods. Although it is an approximation based on flat terrain cospectra, this analytical formula should have broader applicability than just flat-terrain...
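Massman's formula itself is not reproduced here; as an assumption-labeled stand-in, the sketch below shows a closely related one-line first-order attenuation estimate (after Horst, 1997), with the commonly quoted neutral/unstable constants:

```python
import numpy as np

def flux_attenuation(tau, u, z, n_m=0.085, alpha=7/8):
    """Ratio of measured to true flux for a sensor with equivalent
    first-order time constant tau (s), mean wind speed u (m/s) and
    measurement height z (m); n_m and alpha are the usual neutral/unstable
    values for flat-terrain cospectra."""
    return 1.0 / (1.0 + (2 * np.pi * n_m * tau * u / z)**alpha)

# corrected_flux = measured_flux / flux_attenuation(tau, u, z)
```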
NASA Astrophysics Data System (ADS)
Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.
2018-03-01
Methods to estimate the strain-life curve, divided into three categories (simple approximations, artificial neural network-based approaches, and continuum damage mechanics models), were examined, and their accuracy was assessed in strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations are the easiest for practical use, with their applicability having already been verified for a broad range of materials.
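One widely used member of the 'simple approximations' family is Manson's universal slopes equation, which needs only monotonic tensile properties; a sketch (whether this exact variant is among those tested here is an assumption):

```python
import numpy as np

def universal_slopes(n_cycles, s_u, e_mod, ra):
    """Manson's universal slopes estimate of total strain range:
    s_u ultimate tensile strength and e_mod Young's modulus (same units),
    ra reduction of area as a fraction (0-1)."""
    eps_f = np.log(1.0 / (1.0 - ra))                 # true fracture ductility
    return (3.5 * (s_u / e_mod) * n_cycles**-0.12
            + eps_f**0.6 * n_cycles**-0.6)
```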
Gbotosho, Grace Olusola; Okuboyejo, Titilope; Happi, Christian Tientcha; Sowunmi, Akintunde
2014-01-01
A simple method to estimate antimalarial drug-related fall in hematocrit (FIH) after treatment of Plasmodium falciparum infections in the field is described. The method involves numeric estimation of the relative difference in hematocrit at baseline (pretreatment) and the first 1 or 2 days after treatment began as numerator and the corresponding relative difference in parasitemia as the denominator, and expressing it per 1000 parasites cleared from peripheral blood. Using the method showed that FIH/1000 parasites cleared from peripheral blood (cpb) at 24 or 48 hours were similar in artemether-lumefantrine- and artesunate-amodiaquine-treated children (0.09; 95% confidence interval, 0.052-0.138 vs 0.10; 95% confidence interval, 0.069-0.139%; P = 0.75). FIH/1000 parasites cpb in patients with higher parasitemias were significantly (P < 0.0001) and five- to 10-fold lower than in patients with lower parasitemias, suggesting conservation of hematocrit or red cells in patients with higher parasitemias treated with artesunate-amodiaquine or artemether-lumefantrine. FIH/1000 parasites cpb were similar in anemic and nonanemic children. Estimation of FIH/1000 parasites cpb is simple, allows estimation of relatively conserved hematocrit during treatment, and can be used in both observational studies and clinical trials involving antimalarial drugs.
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
Simple method for quick estimation of aquifer hydrogeological parameters
NASA Astrophysics Data System (ADS)
Ma, C.; Li, Y. Y.
2017-08-01
Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters, and that it can reliably identify them from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
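The paper's own fitted Theis function is not reproduced here; as an illustration of turning a pumping test into a linear regression, a sketch of the classical Cooper-Jacob straight-line method:

```python
import numpy as np

def cooper_jacob(t, s, Q, r):
    """Late-time drawdown is linear in log10(t): the slope (drawdown per
    log cycle) gives transmissivity T, and the zero-drawdown intercept
    time t0 gives storativity S. Q pumping rate, r observation distance."""
    slope, intercept = np.polyfit(np.log10(t), s, 1)
    T = 2.3 * Q / (4 * np.pi * slope)
    t0 = 10 ** (-intercept / slope)        # time at which s extrapolates to 0
    S = 2.25 * T * t0 / r**2
    return T, S
```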
A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement
NASA Astrophysics Data System (ADS)
Sun, Hui; Liu, Ji-Gou
2018-07-01
This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor. It is used with a Kalman filter to retrieve the displacement. Without the complicated and onerous calculation process of the general C estimation method, a final equation is obtained. Thus, the estimation of C only involves a few simple calculations. It successfully retrieves the sinusoidal and aleatory displacement by means of simulated self-mixing signals in both weak and moderate feedback regimes. To deal with the errors resulting from noise and estimation bias of C and to further improve the retrieval precision, a Kalman filter is employed following the general phase unwrapping method. The simulation and experiment results show that the retrieved displacement using the C obtained with the proposed method is comparable to the joint estimation of C and α. Besides, the Kalman filter can significantly decrease measurement errors, especially the error caused by incorrectly locating the peak and valley positions of the signal.
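A minimal scalar Kalman filter of the kind that could smooth the unwrapped displacement (the process and measurement variances are hypothetical tuning values, not those of the paper):

```python
import numpy as np

def kalman_filter(z, q=1e-4, r=1e-2):
    """Random-walk state observed through additive noise:
    x_k = x_{k-1} + w (var q), z_k = x_k + v (var r)."""
    x, p, out = z[0], 1.0, []
    for zk in z:
        p = p + q                   # predict
        k = p / (p + r)             # Kalman gain
        x = x + k * (zk - x)        # correct with the new sample
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)
```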
Simple Form of MMSE Estimator for Super-Gaussian Prior Densities
NASA Astrophysics Data System (ADS)
Kittisuwan, Pichid
2015-04-01
The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE). For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimation with a Taylor series. We show that the proposed estimator also leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. The experimental results show that the proposed estimator approximates the original MMSE nonlinearity reasonably well.
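For reference, the exact MMSE nonlinearity that such formulas approximate can be computed by brute-force numerical integration of the posterior mean; a sketch with a Laplacian prior as an example of a super-Gaussian density:

```python
import numpy as np

def mmse_shrink(y, sigma_n, prior_pdf, grid=np.linspace(-10, 10, 2001)):
    """E[x|y] on a uniform grid: likelihood * prior, normalized."""
    lik = np.exp(-(y - grid)**2 / (2 * sigma_n**2))
    w = lik * prior_pdf(grid)
    return (grid * w).sum() / w.sum()

laplace = lambda x: 0.5 * np.exp(-np.abs(x))   # a common sparsity prior
print(mmse_shrink(2.0, 1.0, laplace))          # shrunk estimate of x given y=2
```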
Watson, K.; Hummer-Miller, S.
1981-01-01
A method based solely on remote sensing data has been developed to estimate those meteorological effects which are required for thermal-inertia mapping. It assumes that the atmospheric fluxes are spatially invariant and that the solar, sky, and sensible heat fluxes can be approximated by a simple mathematical form. Coefficients are determined by a least-squares method, fitting observational data to our thermal model. A comparison between field measurements and the model-derived flux shows the type of agreement which can be achieved. An analysis of the limitations of the method is also provided.
A Simple Estimation Method for Aggregate Government Outsourcing
ERIC Educational Resources Information Center
Minicucci, Stephen; Donahue, John D.
2004-01-01
The scholarly and popular debate on the delegation to the private sector of governmental tasks rests on an inadequate empirical foundation, as no systematic data are collected on direct versus indirect service delivery. We offer a simple method for approximating levels of service outsourcing, based on relatively straightforward combinations of and…
A simple finite element method for linear hyperbolic problems
Mu, Lin; Ye, Xiu
2017-09-14
Here, we introduce a simple finite element method for solving first order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of polygons/polyhedra of arbitrary shape. An error estimate is established. Extensive numerical examples demonstrate the robustness and flexibility of the method.
NASA Astrophysics Data System (ADS)
Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.
2014-11-01
We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
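A sketch of the core Monte Carlo ingredients: simulated power-law (red-noise) light curves and the null distribution of peak cross-correlations between unrelated pairs; the normalization, interpolation and windowing refinements of the paper are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def power_law_lightcurve(n, beta):
    # Gaussian noise with P(f) ~ f^-beta, built in the Fourier domain
    f = np.fft.rfftfreq(n)[1:]
    spec = f**(-beta / 2) * (rng.standard_normal(f.size)
                             + 1j * rng.standard_normal(f.size))
    x = np.fft.irfft(np.concatenate(([0.0], spec)), n)
    return (x - x.mean()) / x.std()

def peak_ccf(a, b):
    return np.abs(np.correlate(a, b, 'full')).max() / a.size

# steep PSDs routinely produce high peak cross-correlations between
# *unrelated* light curves, hence the Monte Carlo significance assessment
null = [peak_ccf(power_law_lightcurve(512, 2.0),
                 power_law_lightcurve(512, 2.0)) for _ in range(1000)]
print(np.percentile(null, [50, 95, 99]))
```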
Essa, Khalid S
2014-01-01
A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values.
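A sketch of the f(q) = 0 idea using the normalized anomaly N(x) = g(x)/g(0) = (1 + (x/z)^2)^(-q), which covers the sphere (q = 1.5), the infinitely long horizontal cylinder (q = 1.0) and the semi-infinite vertical cylinder (q = 0.5); the paper's least-squares machinery is replaced here by a two-point root solve:

```python
import numpy as np
from scipy.optimize import brentq

def shape_factor(x1, N1, x2, N2):
    """Two normalized readings N_i = g(x_i)/g(0) give two equations for the
    shape factor q and the depth z; eliminating q leaves one equation in z."""
    f = lambda z: (np.log(N1) * np.log1p((x2 / z)**2)
                   - np.log(N2) * np.log1p((x1 / z)**2))
    z = brentq(f, 1e-3, 1e4)
    q = -np.log(N1) / np.log1p((x1 / z)**2)
    return q, z

# sphere at depth 10 (arbitrary units): recovers q = 1.5, z = 10
print(shape_factor(5.0, 1.25**-1.5, 15.0, 3.25**-1.5))
```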
Identification of site frequencies from building records
Celebi, M.
2003-01-01
A simple procedure to identify site frequencies using earthquake response records from roofs and basements of buildings is presented. For this purpose, data from five different buildings are analyzed using only spectral analysis techniques. Additional data such as free-field records in close proximity to the buildings and site characterization data are also used to estimate site frequencies and thereby to provide convincing evidence and confirmation of the site frequencies inferred from the building records. Furthermore, a simple code formula is used to calculate site frequencies and compare them with the site frequencies identified from records. Results show that the simple procedure is effective in identifying site frequencies and provides relatively reliable estimates of site frequencies when compared with other methods. Therefore the simple procedure for estimating site frequencies using earthquake records can be useful in adding to the database of site frequencies. Such databases can be used to better estimate site frequencies of those sites with similar geological structures.
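A sketch of the spectral-ratio step such a procedure relies on (the function names, Welch parameters and pairing of records are illustrative assumptions):

```python
import numpy as np
from scipy.signal import welch, csd

def spectral_ratio_peak(top, bottom, fs, nperseg=1024):
    """Amplitude of the transfer function between two records (e.g.,
    roof over basement for a building frequency, or basement over
    free-field for a site frequency); the dominant peak is read off."""
    f, p_bb = welch(bottom, fs, nperseg=nperseg)
    _, p_tb = csd(top, bottom, fs, nperseg=nperseg)
    ratio = np.abs(p_tb) / p_bb
    return f[np.argmax(ratio[1:]) + 1]     # skip the zero-frequency bin
```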
Hemanth Kumar, A K; Sudha, V; Swaminathan, Soumya; Ramachandran, Geetha
2010-10-01
Simple and reliable methods to estimate drugs in pharmaceutical products are needed. In most cases, antiretroviral drug estimations are performed using an HPLC method, requiring expensive equipment and trained technicians. A relatively simple and accurate alternative for estimating antiretroviral drugs in pharmaceutical preparations is the spectrophotometric method, which is cheap and simple to use compared to HPLC. We undertook this study to standardise methods for estimation of nevirapine (NVP), lamivudine (3TC) and stavudine (d4T) in single tablets/capsules by HPLC and spectrophotometry and to compare the content of these drugs determined by both these methods. Twenty tablets/capsules of NVP, 3TC and d4T each were analysed for their drug content by HPLC and spectrophotometric methods. Suitably diluted drug solutions were run on HPLC fitted with a C18 column using UV detection at ambient temperature. The absorbance of the diluted drug solutions was read in a spectrophotometer at 300, 285 and 270 nm for NVP, 3TC and d4T respectively. Pure powders of the drugs were used to prepare calibration standards of known drug concentrations, which were set up with each assay. The inter-day variation (%) of standards for NVP, 3TC and d4T ranged from 2.5 to 6.7, 2.1 to 7.7 and 6.2 to 7.7, respectively by HPLC. The corresponding values by the spectrophotometric method were 2.7 to 4.7, 4.2 to 7.2 and 3.8 to 6.0. The per cent variation between the HPLC and spectrophotometric methods ranged from 0.45 to 4.49 per cent, 0 to 4.98 per cent and 0.35 to 8.73 per cent for NVP, 3TC and d4T, respectively. The contents of NVP, 3TC and d4T in the tablets estimated by HPLC and spectrophotometric methods were similar, and the variation in the amount of these drugs estimated by HPLC and spectrophotometric methods was below 10 per cent. This suggests that the spectrophotometric method is as accurate as the HPLC method for estimation of NVP, 3TC and d4T in tablets/capsules. Hence laboratories that do not have HPLC equipment can also undertake these drug estimations using a spectrophotometer.
Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.
2013-01-01
Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303
Estimation of descriptive statistics for multiply censored water quality data
Helsel, Dennis R.; Cohn, Timothy A.
1988-01-01
This paper extends the work of Gilliom and Helsel (1986) on procedures for estimating descriptive statistics of water quality data that contain “less than” observations. Previously, procedures were evaluated when only one detection limit was present. Here we investigate the performance of estimators for data that have multiple detection limits. Probability plotting and maximum likelihood methods perform substantially better than simple substitution procedures now commonly in use. Therefore simple substitution procedures (e.g., substitution of the detection limit) should be avoided. Probability plotting methods are more robust than maximum likelihood methods to misspecification of the parent distribution and their use should be encouraged in the typical situation where the parent distribution is unknown. When utilized correctly, less than values frequently contain nearly as much information for estimating population moments and quantiles as would the same observations had the detection limit been below them.
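A sketch of the maximum likelihood approach for multiply left-censored lognormal data, the class of estimator recommended here over simple substitution (function names are hypothetical):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def lognormal_mle_censored(values, detected):
    """MLE with multiple detection limits: detected values contribute
    density terms, censored values contribute P(X < DL) terms. 'values'
    holds the measurement when detected and the detection limit otherwise;
    'detected' is a boolean array."""
    x = np.log(np.asarray(values, float))
    detected = np.asarray(detected, bool)

    def nll(p):
        mu, s = p[0], abs(p[1]) + 1e-9
        return -(norm.logpdf(x[detected], mu, s).sum()
                 + norm.logcdf(x[~detected], mu, s).sum())

    res = minimize(nll, [x.mean(), x.std() + 0.1], method='Nelder-Mead')
    mu, s = res.x[0], abs(res.x[1])
    return np.exp(mu + s**2 / 2)           # estimated population mean
```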
Piecewise SALT sampling for estimating suspended sediment yields
Robert B. Thomas
1989-01-01
A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...
Sampling estimators of total mill receipts for use in timber product output studies
John P. Brown; Richard G. Oderwald
2012-01-01
Data from the 2001 timber product output study for Georgia was explored to determine new methods for stratifying mills and finding suitable sampling estimators. Estimators for roundwood receipts totals comprised several types: simple random sample, ratio, stratified sample, and combined ratio. Two stratification methods were examined: the Dalenius-Hodges (DH) square...
NASA Astrophysics Data System (ADS)
Borodachev, S. M.
2016-06-01
The simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
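A sketch of the resulting RLS recursion, written as the Kalman-style gain/innovation/covariance update:

```python
import numpy as np

def rls_step(theta, P, x, y, lam=1.0):
    """One RLS update with forgetting factor lam: a Kalman measurement
    update for a constant state observed through a changing regressor x."""
    Px = P @ x
    k = Px / (lam + x @ Px)                 # gain vector
    theta = theta + k * (y - x @ theta)     # innovation correction
    P = (P - np.outer(k, Px)) / lam         # covariance update
    return theta, P
```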
NASA Astrophysics Data System (ADS)
Chris, Leong; Yoshiyuki, Yokoo
2017-04-01
Islands, many of them in developing countries, have poor hydrological research data, which contributes to stress on hydrological resources through unmonitored human influence and neglect. As island studies are relatively young, there is a need to understand these stresses and influences through building-block research specifically targeting islands. The flow duration curve (FDC) is a simple start-up hydrological tool that can be used in initial studies of islands. This study disaggregates the FDC into three sections (top, middle and bottom), and in each section runoff is estimated with simple hydrological models. The study is based on the Hawaiian Islands, toward estimating runoff in ungauged island catchments in the humid tropics. Runoff estimation in the top and middle sections uses the Curve Number (CN) method and the Regime Curve (RC), respectively. The bottom section is presented as a separate study from this one. The results showed that for the majority of the catchments the RC can be used for estimation in the middle section of the FDC. They also showed that the CN method had to be calibrated in order to make stable estimates. This study identifies simple methodologies that can be useful for making runoff estimates in ungauged island catchments.
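A sketch of the standard SCS Curve Number event-runoff equation used for the top section (metric form; the CN value in the example is hypothetical):

```python
def scs_runoff_mm(p_mm, cn):
    """SCS-CN direct runoff from event rainfall p_mm: S is the potential
    retention and Ia = 0.2*S the initial abstraction."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia)**2 / (p_mm - ia + s)

print(scs_runoff_mm(100.0, 75))  # ~41 mm of direct runoff for CN = 75
```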
Estimation of postmortem interval through albumin in CSF by simple dye binding method.
Parmar, Ankita K; Menon, Shobhana K
2015-12-01
Estimation of postmortem interval is a very important question in some medicolegal investigations, and its precise estimation requires an accurate method. Bromocresol green (BCG) is a simple dye-binding method widely used in routine practice, and its application in forensic practice may bring revolutionary changes. In this study, cerebrospinal fluid was aspirated by cisternal puncture in 100 autopsies, and the concentration of albumin was studied with respect to postmortem interval. After death, albumin present in CSF undergoes changes: by 72 h after death the albumin concentration had fallen to 0.012 mM, and the decrease was linear from 2 h to 72 h. An important relationship was found between albumin concentration and postmortem interval, with an error of ±1-4 h. The study concludes that CSF albumin can be a useful and significant parameter in estimation of postmortem interval.
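A sketch of the calibrate-and-invert arithmetic such a linear relation implies (hypothetical names; the slope and intercept would come from a calibration series like the autopsy data above):

```python
import numpy as np

def fit_pmi_model(pmi_hours, albumin_mm):
    # linear decrease of CSF albumin with time since death
    slope, intercept = np.polyfit(pmi_hours, albumin_mm, 1)
    return slope, intercept

def estimate_pmi(albumin_mm, slope, intercept):
    # invert the calibration line to predict the postmortem interval
    return (albumin_mm - intercept) / slope
```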
NASA Astrophysics Data System (ADS)
Zhu, Meng-Hua; Liu, Liang-Gang; You, Zhong; Xu, Ao-Ao
2009-03-01
In this paper, a heuristic approach based on Slavic's peak searching method has been employed to estimate the width of peak regions for background removing. Synthetic and experimental data are used to test this method. With the estimated peak regions using the proposed method in the whole spectrum, we find it is simple and effective enough to be used together with the Statistics-sensitive Nonlinear Iterative Peak-Clipping method.
Simple estimate of critical volume
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1980-01-01
Method for estimating critical molar volume of materials is faster and simpler than previous procedures. Formula sums no more than 18 different contributions from components of chemical structure of material, and is as accurate (within 3 percent) as older more complicated models. Method should expedite many thermodynamic design calculations.
Estimation of vegetation cover at subpixel resolution using LANDSAT data
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Eagleson, Peter S.
1986-01-01
The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.
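A sketch of the two-band, data-space-limits idea: fractional cover from a vegetation index with bare-soil and full-canopy endpoints taken from the limits of the data when no ground truth is available (the percentile choice below is a hypothetical stand-in for the paper's limit-finding step):

```python
import numpy as np

def fractional_cover(vi, vi_soil=None, vi_veg=None):
    """Linear-mixing estimate of percent canopy cover from a vegetation
    index; endpoints default to the empirical limits of the data space."""
    vi = np.asarray(vi, float)
    if vi_soil is None:
        vi_soil = np.percentile(vi, 2)    # bare-soil end of the data space
    if vi_veg is None:
        vi_veg = np.percentile(vi, 98)    # full-canopy end
    return np.clip((vi - vi_soil) / (vi_veg - vi_soil), 0.0, 1.0)
```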
Coeli M. Hoover; Mark J. Ducey; R. Andy Colter; Mariko Yamasaki
2018-01-01
There is growing interest in estimating and mapping biomass and carbon content of forests across large landscapes. LiDAR-based inventory methods are increasingly common and have been successfully implemented in multiple forest types. Asner et al. (2011) developed a simple universal forest carbon estimation method for tropical forests that reduces the amount of required...
VET Program Completion Rates: An Evaluation of the Current Method. Occasional Paper
ERIC Educational Resources Information Center
National Centre for Vocational Education Research (NCVER), 2016
2016-01-01
This work asks one simple question: "how reliable is the method used by the National Centre for Vocational Education Research (NCVER) to estimate projected rates of VET program completion?" In other words, how well do early projections align with actual completion rates some years later? Completion rates are simple to calculate with a…
Use of eddy-covariance methods to "calibrate" simple estimators of evapotranspiration
Sumner, David M.; Geurink, Jeffrey S.; Swancar, Amy
2017-01-01
Direct measurement of actual evapotranspiration (ET) provides quantification of this large component of the hydrologic budget, but typically requires long periods of record and large instrumentation and labor costs. Simple surrogate methods of estimating ET, if "calibrated" to direct measurements of ET, provide a reliable means to quantify ET. Eddy-covariance measurements of ET were made for 12 years (2004-2015) at an unimproved bahiagrass (Paspalum notatum) pasture in Florida. These measurements were compared to annual rainfall derived from rain gage data and monthly potential ET (PET) obtained from a long-term (since 1995) U.S. Geological Survey (USGS) statewide, 2-kilometer, daily PET product. The annual proportion of ET to rainfall indicates a strong correlation (r2=0.86) to annual rainfall; the ratio increases linearly with decreasing rainfall. Monthly ET rates correlated closely (r2=0.84) to the USGS PET product. The results indicate that simple surrogate methods of estimating actual ET show positive potential in the humid Florida climate given the ready availability of historical rainfall and PET.

Age estimation standards for a Western Australian population using the coronal pulp cavity index.
Karkhanis, Shalmira; Mack, Peter; Franklin, Daniel
2013-09-10
Age estimation is a vital aspect in creating a biological profile and aids investigators by narrowing down potentially matching identities from the available pool. In addition to routine casework, in the present global political scenario, age estimation in living individuals is required in cases of refugees, asylum seekers, human trafficking and to ascertain age of criminal responsibility. Thus robust methods that are simple, non-invasive and ethically viable are required. The aim of the present study is, therefore, to test the reliability and applicability of the coronal pulp cavity index method, for the purpose of developing age estimation standards for an adult Western Australian population. A total of 450 orthopantomograms (220 females and 230 males) of Australian individuals were analyzed. Crown and coronal pulp chamber heights were measured in the mandibular left and right premolars, and the first and second molars. These measurements were then used to calculate the tooth coronal index. Data was analyzed using paired sample t-tests to assess bilateral asymmetry followed by simple linear and multiple regressions to develop age estimation models. The most accurate age estimation based on simple linear regression model was with mandibular right first molar (SEE ±8.271 years). Multiple regression models improved age prediction accuracy considerably and the most accurate model was with bilateral first and second molars (SEE ±6.692 years). This study represents the first investigation of this method in a Western Australian population and our results indicate that the method is suitable for forensic application.
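A sketch of the index and the regression step (hypothetical names; the published models and their SEE values are the ones quoted above):

```python
import numpy as np

def tooth_coronal_index(pulp_height, crown_height):
    # TCI: coronal pulp cavity height as a percentage of crown height;
    # secondary dentine deposition lowers it with increasing age
    return 100.0 * pulp_height / crown_height

def fit_age_model(tci, age):
    # simple linear regression age-estimation model from a reference sample
    slope, intercept = np.polyfit(tci, age, 1)
    return lambda t: slope * t + intercept
```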
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
NASA Technical Reports Server (NTRS)
Emrich, Bill
2006-01-01
A simple method of estimating vehicle parameters appropriate for interplanetary travel can provide a useful tool for evaluating the suitability of particular propulsion systems to various space missions. Although detailed mission analyses for interplanetary travel can be quite complex, it is possible to derive fairly simple correlations which will provide reasonable trip time estimates to the planets. In the present work, it is assumed that a constant thrust propulsion system propels a spacecraft on a round trip mission having equidistant outbound and inbound legs in which the spacecraft accelerates during the first portion of each leg of the journey and decelerates during the last portion of each leg of the journey. Comparisons are made with numerical calculations from low thrust trajectory codes to estimate the range of applicability of the simplified correlations.
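A sketch of the accelerate-then-decelerate correlation for one leg flown at constant thrust, with mass held constant (a simplification; real systems burn propellant):

```python
import math

def round_trip_time_days(leg_distance_m, thrust_n, mass_kg):
    """Each leg: accelerate over the first half, decelerate over the
    second, so d/2 = a*t^2/2 per half and t_leg = 2*sqrt(d/a); the round
    trip is two equal legs."""
    a = thrust_n / mass_kg
    t_leg = 2.0 * math.sqrt(leg_distance_m / a)
    return 2.0 * t_leg / 86400.0

# e.g., Mars near opposition (~7.8e10 m) at about one milli-g of thrust
print(round_trip_time_days(7.8e10, 10.0, 10000.0))  # ~410 days round trip
```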
NASA Astrophysics Data System (ADS)
Magdy, Nancy; Ayad, Miriam F.
2015-02-01
Two simple, accurate, precise, sensitive and economic spectrophotometric methods were developed for the simultaneous determination of Simvastatin and Ezetimibe in fixed dose combination products without prior separation. The first method depends on a new chemometrics-assisted ratio spectra derivative method using moving window polynomial least square fitting method (Savitzky-Golay filters). The second method is based on a simple modification for the ratio subtraction method. The suggested methods were validated according to USP guidelines and can be applied for routine quality control testing.
Simple models for estimating local removals of timber in the northeast
David N. Larsen; David A. Gansner
1975-01-01
Provides a practical method of estimating subregional removals of timber and demonstrates its application to a typical problem. Stepwise multiple regression analysis is used to develop equations for estimating removals of softwood, hardwood, and all timber from selected characteristics of socioeconomic structure.
A Simple Geometric Method of Estimating the Error in Using Vieta's Product for [pi
ERIC Educational Resources Information Center
Osler, T. J.
2007-01-01
Vieta's famous product using factors that are nested radicals is the oldest infinite product as well as the first non-iterative method for finding [pi]. In this paper a simple geometric construction intimately related to this product is described. The construction provides the same approximations to [pi] as are given by partial products from…
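A sketch of Vieta's partial products and their error, which shrinks by roughly a factor of four per additional term:

```python
import math

def vieta_pi(n_terms):
    # nested radicals: a_1 = sqrt(2), a_{k+1} = sqrt(2 + a_k),
    # and pi ~ 2 / prod(a_k / 2)
    a, prod = 0.0, 1.0
    for _ in range(n_terms):
        a = math.sqrt(2.0 + a)
        prod *= a / 2.0
    return 2.0 / prod

for n in (2, 5, 10):
    print(n, vieta_pi(n), abs(vieta_pi(n) - math.pi))
```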
ERIC Educational Resources Information Center
Martin, Michael J.
2004-01-01
With new and inexpensive computer-based methods, measuring the speed of light and the Earth's radius--historically difficult endeavors--can be simple enough to be tackled by high school and college students working in labs that have limited budgets. In this article, the author describes two methods of estimating the Earth's radius using two…
Estimating Stability Class in the Field
Leonidas G. Lavdas
1997-01-01
A simple and easily remembered method is described for estimating cloud ceiling height in the field. Estimating ceiling height provides the means to estimate stability class, a parameter used to help determine Dispersion Index and Low Visibility Occurrence Risk Index, indices used as smoke management aids. Stability class is also used as an input to VSMOKE, an...
Objective estimates based on experimental data and initial and final knowledge
NASA Technical Reports Server (NTRS)
Rosenbaum, B. M.
1972-01-01
An extension of the method of Jaynes, whereby least biased probability estimates are obtained, permits such estimates to be made which account for experimental data on hand as well as prior and posterior knowledge. These estimates can be made for both discrete and continuous sample spaces. The method allows a simple interpretation of Laplace's two rules: the principle of insufficient reason and the rule of succession. Several examples are analyzed by way of illustration.
NASA Astrophysics Data System (ADS)
Asfahani, J.; Tlas, M.
2015-10-01
An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models such as cylinders and spheres has been proposed in this paper. This proposed method is based on both the deconvolution technique and the simplex algorithm for linear optimization to most effectively estimate the model parameters, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different white Gaussian random noise levels to demonstrate the capability and reliability of the method. The results acquired show that the estimated parameter values derived by this proposed method are close to the assumed true parameter values. The validity of this method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from real field data.
A rapid method for estimating polychlorinated biphenyl (PCB) concentrations in contaminated soils and sediments has been developed by coupling static subcritical water extraction with solid-phase microextraction (SPME). Soil, water, and internal standards are placed in a seale...
A simple method for estimating the size of nuclei on fractal surfaces
NASA Astrophysics Data System (ADS)
Zeng, Qiang
2017-10-01
Determining the size of nuclei on complex surfaces remains a big challenge in aspects of biological, material and chemical engineering. Here the author reports a simple method to estimate the size of nuclei in contact with complex (fractal) surfaces. The approach is based on the assumptions of contact-area proportionality for determining nucleation density and of scaling congruence between nuclei and surfaces for identifying contact regimes. Three different regimes govern the equations for estimating the nucleation site density. Nuclei large enough eliminate the effect of the fractal structure, whereas nuclei small enough make the nucleation site density independent of the fractal parameters. Only when nuclei match the fractal scales is the nucleation site density associated with both the fractal parameters and the size of the nuclei in a coupled way. The method was validated against experimental data reported in the literature. It may provide an effective way to estimate the size of nuclei on fractal surfaces, through which a number of promising applications in related fields can be envisioned.
Bhalla, Kavi; Harrison, James E
2016-04-01
Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex. However, as commonly implemented, the methods include complex modelling and estimation. To provide a simple and open-source software tool that allows estimation of incidence-DALYs due to injury, given data on incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel. All calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need to only paste a table of population and injury case data organised by age, sex and external cause of injury into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALYs estimates calculated in this way and in other ways, and by shared experience of its use.
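A sketch of the incidence-DALY arithmetic such a spreadsheet tool implements per age/sex/cause cell (no discounting or age weighting here; all numbers hypothetical):

```python
def dalys(deaths, life_exp_remaining, cases, disability_weight, duration_yrs):
    """Incidence-based DALYs: years of life lost (deaths times remaining
    standard life expectancy) plus years lived with disability (cases
    times disability weight times average duration)."""
    yll = deaths * life_exp_remaining
    yld = cases * disability_weight * duration_yrs
    return yll + yld

# hypothetical cell: 10 deaths and 500 nonfatal cases of a given injury
print(dalys(10, 40.0, 500, 0.1, 2.0))  # 400 YLL + 100 YLD = 500 DALYs
```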
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors in calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for estimating calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed the process to be optimized while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows calibrator uncertainty to be estimated for the optimization of various value-assignment processes, with a reduced number of measurements and lower reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
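A minimal sketch of Monte Carlo propagation through a value-transfer chain of this kind is shown below; the two-step transfer structure and all step uncertainties except the 3.7% reference CV are assumptions, not the authors' process model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ref_value = 1.0                                       # CRM470 assigned value (arbitrary units)
ref = rng.normal(ref_value, 0.037 * ref_value, n)     # 3.7% reference uncertainty
transfer1 = ref * rng.normal(1.0, 0.004, n)           # first value-transfer measurement
transfer2 = transfer1 * rng.normal(1.0, 0.004, n)     # second value-transfer measurement

total_cv = transfer2.std() / transfer2.mean()
added_cv = np.sqrt(total_cv**2 - 0.037**2)            # component added by the transfer
print(f"total CV {total_cv:.3%}, added by value transfer {added_cv:.3%}")
```

With these assumed step uncertainties the added component comes out well under the reference material's own 3.7%, the qualitative pattern the abstract reports.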
Qualitative methods in quantum theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Migdal, A.B.
The author feels that the solution of most problems in theoretical physics begins with the application of qualitative methods - dimensional estimates and estimates made from simple models, the investigation of limiting cases, the use of the analytic properties of physical quantities, etc. This book proceeds in this spirit, rather than in a formal, mathematical way with no traces of the sweat involved in the original work left to show. The chapters are entitled Dimensional and model approximations, Various types of perturbation theory, The quasi-classical approximation, Analytic properties of physical quantities, Methods in the many-body problem, and Qualitative methods in quantum field theory. Each chapter begins with a detailed introduction, in which the physical meaning of the results obtained in that chapter is explained in a simple way. 61 figures. (RWR)
Marques, João P; Gener, Isabelle; Ayrault, Philippe; Lopes, José M; Ribeiro, F Ramôa; Guisnet, Michel
2004-10-21
A simple method based on the characterization (composition, Brønsted and Lewis acidities) of acid-treated HBEA zeolites was developed for estimating the concentrations of framework, extraframework and defect Al species.
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
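A toy sketch of the conditional clade idea follows, under assumed rooted binary trees encoded as nested tuples: tree probabilities are estimated as products of conditional clade probabilities learned from a sample, rather than raw sample relative frequencies. This omits many details of the actual algorithm and software.

```python
from collections import Counter

def taxa(t):
    # Set of leaf names under a node (leaves are strings).
    return frozenset([t]) if isinstance(t, str) else taxa(t[0]) | taxa(t[1])

def splits(t, out):
    # Collect (parent clade, child split) pairs for every internal node.
    if isinstance(t, str):
        return
    out.append((taxa(t), frozenset({taxa(t[0]), taxa(t[1])})))
    splits(t[0], out)
    splits(t[1], out)

# a tiny "posterior sample" of rooted trees on four taxa
sample = [
    (('A', 'B'), ('C', 'D')),
    (('A', 'B'), ('C', 'D')),
    (('A', 'C'), ('B', 'D')),
]
parent_counts, split_counts = Counter(), Counter()
for tr in sample:
    pairs = []
    splits(tr, pairs)
    for parent, split in pairs:
        parent_counts[parent] += 1
        split_counts[(parent, split)] += 1

def ccd_probability(tr):
    # Estimated P(tree) as a product of conditional clade probabilities.
    pairs = []
    splits(tr, pairs)
    p = 1.0
    for parent, split in pairs:
        p *= split_counts[(parent, split)] / parent_counts[parent]
    return p

print(ccd_probability((('A', 'B'), ('C', 'D'))))  # 2/3 for this toy sample
```

The payoff appears for trees never sampled whole: any tree whose clades all occur somewhere in the sample still receives a nonzero, well-calibrated probability.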
A practical model for pressure probe system response estimation (with review of existing models)
NASA Astrophysics Data System (ADS)
Hall, B. F.; Povey, T.
2018-04-01
The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
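The paper's model itself is not reproduced in the abstract; as a generic first-order check for a tube-and-cavity probe system (not the authors' method), one can estimate the Helmholtz resonance of the transducer cavity driven through the line, as in this sketch, where all geometry values are assumptions.

```python
import math

c = 343.0                      # speed of sound in air, m/s
A = math.pi * (0.25e-3) ** 2   # tube cross-section for a 0.5 mm bore, m^2
L = 0.30                       # tube length, m (end corrections neglected)
V = 50e-9                      # transducer cavity volume, m^3

# Helmholtz resonance of the cavity-plus-neck system; the usable probe
# bandwidth is typically well below this frequency.
f_helmholtz = c / (2 * math.pi) * math.sqrt(A / (V * L))
print(f"{f_helmholtz:.0f} Hz")
```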
Estimation of critical behavior from the density of states in classical statistical models
NASA Astrophysics Data System (ADS)
Malakis, A.; Peratzakis, A.; Fytas, N. G.
2004-12-01
We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) to large systems for the estimation of critical behavior. The method, presented in an algorithmic form, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. We illustrate that the critical part of the energy space can be predicted with high accuracy and that, by using this restricted part, simulations can be extended to larger systems and the accuracy of critical parameters improved. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully to estimate the scaling behavior of the specific heat of both square and simple cubic Ising lattices, and the proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via multirange Wang-Landau sampling.
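Once a density of states g(E) is available from Wang-Landau sampling over the (restricted) energy range, canonical averages such as the specific heat follow directly. The sketch below illustrates this step with a stand-in log g(E); it is not the paper's lattice data.

```python
import numpy as np

E = np.linspace(-2.0, 0.0, 400)        # energy grid (illustrative, per-spin units)
log_g = -(E + 1.0) ** 2 * 50 + 200     # stand-in for Wang-Landau log density of states

def specific_heat(T):
    w = log_g - E / T                  # log of g(E) * exp(-E/T)
    w -= w.max()                       # stabilize the exponentials
    p = np.exp(w)
    p /= p.sum()                       # canonical energy distribution at T
    E1, E2 = (p * E).sum(), (p * E**2).sum()
    return (E2 - E1**2) / T**2         # fluctuation formula for C(T)

for T in (1.5, 2.269, 3.0):            # 2.269 is near the 2D Ising critical point
    print(T, specific_heat(T))
```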
NASA Astrophysics Data System (ADS)
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to the compression-force waveforms, were measured at the same time as the acceleration waveforms. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated, and an estimated compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. To validate the accuracy of the calculated waveform, displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform, and intraclass correlation coefficients (ICCs) between the real displacement from the laser system and the estimated displacement waveforms were calculated. The estimated error of the compression depth was within 2 mm (<1 standard deviation), and all ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, thus provides an accurate compression depth (estimation error < 2 mm).
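A minimal sketch of the described calculation on synthetic signals: the scalar weight factor is chosen so that the second derivative of the magnetic waveform best matches the measured acceleration (here by least squares, an assumption), and the displacement estimate is the weighted magnetic waveform.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
depth_true = 0.025 * (1 - np.cos(2 * np.pi * 2 * t))  # 5 cm peak compressions at 2 Hz
m = depth_true / 0.7                                  # magnetic waveform, unknown scale
a = np.gradient(np.gradient(depth_true, t), t)        # accelerometer signal

d2m = np.gradient(np.gradient(m, t), t)               # second derivative of magnetic waveform
w = np.dot(d2m, a) / np.dot(d2m, d2m)                 # least-squares weight factor
depth_est = w * m                                     # estimated displacement waveform
print("max error (m):", np.max(np.abs(depth_est - depth_true)))
```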
A Short Tutorial on Inertial Navigation System and Global Positioning System Integration
NASA Technical Reports Server (NTRS)
Smalling, Kyle M.; Eure, Kenneth W.
2015-01-01
The purpose of this document is to describe a simple method of integrating Inertial Navigation System (INS) information with Global Positioning System (GPS) information for an improved estimate of vehicle attitude and position. A simple two-dimensional (2D) case is considered. The attitude estimates are derived from sensor data and used in the estimation of vehicle position and velocity through dead reckoning within the INS. The INS estimates are updated with GPS estimates using a Kalman filter. This tutorial is intended for the novice user, with a focus on bringing the reader from raw sensor measurements to an integrated position and attitude estimate. An application is given using a remotely controlled ground vehicle operating in an assumed 2D environment. The theory is developed first, followed by an illustrative example.
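In the spirit of the tutorial, a minimal 2D dead-reckoning-plus-GPS Kalman update might look as follows; the state layout, matrices and noise levels are illustrative assumptions, not the document's worked example.

```python
import numpy as np

dt = 0.1
# state x = [px, py, vx, vy]; constant-velocity kinematics with accel input
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
B = np.array([[0.5 * dt**2, 0], [0, 0.5 * dt**2], [dt, 0], [0, dt]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])   # GPS measures position only
Q, R = np.eye(4) * 1e-4, np.eye(2) * 4.0     # process / GPS measurement noise
x, P = np.zeros(4), np.eye(4)

def ins_predict(x, P, accel):
    # Dead reckoning: propagate state with accelerometer input (nav frame).
    x = F @ x + B @ accel
    return x, F @ P @ F.T + Q

def gps_update(x, P, z):
    # Kalman correction with a GPS position fix.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

x, P = ins_predict(x, P, np.array([0.2, 0.0]))   # one INS step
x, P = gps_update(x, P, np.array([0.05, 0.01]))  # one GPS fix (m)
print(x)
```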
NASA Astrophysics Data System (ADS)
Butler, S. L.
2017-12-01
The electrical resistivity method is now highly developed, with 2D and even 3D surveys routinely performed and fast inversion software available. However, rules of thumb, based on simple mathematical formulas, for important quantities like depth of investigation, horizontal position and resolution have not previously been available; they would be useful for survey planning, preliminary interpretation and general education about the method. In this contribution, I will show that the sensitivity function of the resistivity method for a homogeneous half-space can be analyzed in terms of its first and second moments, which yield simple mathematical formulas. The first moment gives the sensitivity-weighted center of an apparent resistivity measurement, with the vertical component being an estimate of the depth of investigation. I will show that this depth-of-investigation estimate works at least as well as previous estimates based on the peak and median of the depth sensitivity function, which must be calculated numerically for a general four-electrode array. The vertical and horizontal first moments can also be used as pseudopositions when plotting 1D, 2D and 3D pseudosections; the appropriate horizontal plotting point for a pseudosection was not previously obvious for nonsymmetric arrays. The second moments of the sensitivity function give estimates of the spatial extent of the region contributing to an apparent resistivity measurement and hence are measures of resolution. These also have simple mathematical formulas.
Mixed Estimation for a Forest Survey Sample Design
Francis A. Roesch
1999-01-01
Three methods of estimating the current state of forest attributes over small areas for the USDA Forest Service Southern Research Station's annual forest sampling design are compared. The three methods were (I) simple moving average, (II) single imputation of plot data that had been updated by externally developed models, and (III) local application of a global...
A SIMPLE, EFFICIENT SOLUTION OF FLUX-PROFILE RELATIONSHIPS IN THE ATMOSPHERIC SURFACE LAYER
This note describes a simple scheme for analytical estimation of the surface layer similarity functions from state variables. What distinguishes this note from the many previous papers on this topic is that this method is specifically targeted for numerical models where simplici...
A simple method to predict regional fish abundance: an example in the McKenzie River Basin, Oregon
D.J. McGarvey; J.M. Johnston
2011-01-01
Regional assessments of fisheries resources are increasingly called for, but tools with which to perform them are limited. We present a simple method that can be used to estimate regional carrying capacity and apply it to the McKenzie River Basin, Oregon. First, we use a macroecological model to predict trout densities within small, medium, and large streams in the...
Building Simple Hidden Markov Models. Classroom Notes
ERIC Educational Resources Information Center
Ching, Wai-Ki; Ng, Michael K.
2004-01-01
Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.
Safety in the Chemical Laboratory: Laboratory Air Quality: Part I. A Concentration Model.
ERIC Educational Resources Information Center
Butcher, Samuel S.; And Others
1985-01-01
Offers a simple model for estimating vapor concentrations in instructional laboratories. Three methods are described for measuring ventilation rates, and the results of measurements in six laboratories are presented. The model should provide a simple screening tool for evaluating worst-case personal exposures. (JN)
Estimating forestland area change from inventory data
Paul Van Deusen; Francis Roesch; Thomas Wigley
2013-01-01
Simple methods for estimating the proportion of land changing from forest to nonforest are developed. Variance estimators are derived to facilitate significance tests. A power analysis indicates that 400 inventory plots are required to reliably detect small changes in net or gross forest loss. This is an important result because forest certification programs may...
A Simple Method to Estimate Photosynthetic Radiation Use Efficiency of Canopies
ROSATI, A.; METCALF, S. G.; LAMPINEN, B. D.
2004-01-01
• Background and Aims Photosynthetic radiation use efficiency (PhRUE) over the course of a day has been shown to be constant for leaves throughout a general canopy where nitrogen content (and thus photosynthetic properties) of leaves is distributed in relation to the light gradient. It has been suggested that this daily PhRUE can be calculated simply from the photosynthetic properties of a leaf at the top of the canopy and from the PAR incident on the canopy, which can be obtained from weather‐station data. The objective of this study was to investigate whether this simple method allows estimation of PhRUE of different crops and with different daily incident PAR, and also during the growing season. • Methods The PhRUE calculated with this simple method was compared with that calculated with a more detailed model, for different days in May, June and July in California, on almond (Prunus dulcis) and walnut (Juglans regia) trees. Daily net photosynthesis of 50 individual leaves was calculated as the daylight integral of the instantaneous photosynthesis. The latter was estimated for each leaf from its photosynthetic response to PAR and from the PAR incident on the leaf during the day. • Key Results Daily photosynthesis of individual leaves of both species was linearly related to the daily PAR incident on the leaves (which implies constant PhRUE throughout the canopy), but the slope (i.e. the PhRUE) differed between the species, over the growing season due to changes in photosynthetic properties of the leaves, and with differences in daily incident PAR. When PhRUE was estimated from the photosynthetic light response curve of a leaf at the top of the canopy and from the incident radiation above the canopy, obtained from weather‐station data, the values were within 5 % of those calculated with the more detailed model, except in five out of 34 cases. • Conclusions The simple method of estimating PhRUE is valuable as it simplifies calculation of canopy photosynthesis to a multiplication between the PAR intercepted by the canopy, which can be obtained with remote sensing, and the PhRUE calculated from incident PAR, obtained from standard weather‐station data, and from the photosynthetic properties of leaves at the top of the canopy. The latter properties are the sole crop parameters needed. While being simple, this method describes the differences in PhRUE related to crop, season, nutrient status and daily incident PAR. PMID:15044212
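A sketch of the simple method's logic: integrate an assumed top-leaf light-response curve over a diurnal PAR course and divide by the daily PAR integral to obtain PhRUE. The rectangular-hyperbola response and all parameter values are assumptions, not the paper's fitted curves.

```python
import numpy as np

alpha, pmax = 0.05, 25.0            # quantum yield, light-saturated rate (umol m-2 s-1)

def photosynthesis(par):
    # Rectangular hyperbola light-response curve for the top-of-canopy leaf.
    return alpha * par * pmax / (alpha * par + pmax)

t = np.linspace(0, 14 * 3600, 1000)        # 14 h photoperiod, seconds
par = 2000 * np.sin(np.pi * t / t[-1])     # idealized clear-day incident PAR course

daily_a = np.trapz(photosynthesis(par), t)   # daily leaf photosynthesis
daily_par = np.trapz(par, t)                 # daily incident PAR
print("PhRUE:", daily_a / daily_par)         # mol CO2 fixed per mol incident photons
```

Canopy photosynthesis then follows, as the abstract notes, from multiplying this PhRUE by the PAR intercepted by the canopy.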
Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H
2017-09-01
Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.
Tillman, F.D.; Callegary, J.B.; Nagler, P.L.; Glenn, E.P.
2012-01-01
Groundwater is a vital water resource in the arid to semi-arid southwestern United States. Accurate accounting of inflows to and outflows from the groundwater system is necessary to effectively manage this shared resource, including the important outflow component of groundwater discharge by vegetation. A simple method for estimating basin-scale groundwater discharge by vegetation is presented that uses remote sensing data from satellites, geographic information systems (GIS) land cover and stream location information, and a regression equation developed within the Southern Arizona study area relating the Enhanced Vegetation Index from the MODIS sensors on the Terra satellite to measured evapotranspiration. Results computed for 16-day composited satellite passes over the study area during the 2000 through 2007 time period demonstrate a sinusoidal pattern of annual groundwater discharge by vegetation with median values ranging from around 0.3 mm per day in the cooler winter months to around 1.5 mm per day during summer. Maximum estimated annual volume of groundwater discharge by vegetation was between 1.4 and 1.9 billion m3 per year with an annual average of 1.6 billion m3. A simplified accounting of the contribution of precipitation to vegetation greenness was developed whereby monthly precipitation data were subtracted from computed vegetation discharge values, resulting in estimates of minimum groundwater discharge by vegetation. Basin-scale estimates of minimum and maximum groundwater discharge by vegetation produced by this simple method are useful bounding values for groundwater budgets and groundwater flow models, and the method may be applicable to other areas with similar vegetation types.
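A schematic of the accounting described, with hypothetical regression coefficients (the study's fitted EVI-ET relation is not reproduced here):

```python
import numpy as np

a, b = 2.2, -0.25                              # hypothetical ET = a*EVI + b (mm/day)
evi = np.array([0.18, 0.24, 0.35, 0.42])       # 16-day composite EVI along a reach
et = np.clip(a * evi + b, 0, None)             # vegetation discharge estimate, mm/day

precip_mm_day = 0.20                           # monthly precipitation as a daily rate
gw_min = np.clip(et - precip_mm_day, 0, None)  # minimum groundwater-supplied component
print("ET:", et, "min groundwater discharge:", gw_min)
```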
Light intensity related to stand density in mature stands of the western white pine type
C. A. Wellner
1948-01-01
Where tolerance of forest trees or subordinate vegetation is a factor in management, the forester needs a simple field method of estimating or forecasting light intensities in forest stands. The following article describes a method developed for estimating light intensity beneath the canopy in western white pine forests, which may have application in other forest types.
Experimental Estimating Deflection of a Simple Beam Bridge Model Using Grating Eddy Current Sensors
Lü, Chunfeng; Liu, Weiwen; Zhang, Yongjie; Zhao, Hui
2012-01-01
A novel three-point method using a grating eddy current absolute position sensor (GECS) for bridge deflection estimation is proposed in this paper. Real spatial positions of the measuring points along the span axis are directly used as relative reference points of each other rather than using any other auxiliary static reference points for measuring devices in a conventional method. Every three adjacent measuring points are defined as a measuring unit and a straight connecting bar with a GECS fixed on the center section of it links the two endpoints. In each measuring unit, the displacement of the mid-measuring point relative to the connecting bar measured by the GECS is defined as the relative deflection. Absolute deflections of each measuring point can be calculated from the relative deflections of all the measuring units directly without any correcting approaches. Principles of the three-point method and displacement measurement of the GECS are introduced in detail. Both static and dynamic experiments have been carried out on a simple beam bridge model, which demonstrate that the three-point deflection estimation method using the GECS is effective and offers a reliable way for bridge deflection estimation, especially for long-term monitoring. PMID:23112583
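The unit geometry implies r_i = d_i − (d_{i−1} + d_{i+1})/2 for each mid-point reading. One consistent way to recover absolute deflections, assuming the span ends are fixed (an assumption; the paper states the recovery is direct), is a small linear solve:

```python
import numpy as np

d_true = np.array([0.0, 3.0, 5.0, 6.0, 5.0, 3.0, 0.0])  # mm, synthetic deflection line
r = d_true[1:-1] - 0.5 * (d_true[:-2] + d_true[2:])     # GECS relative readings per unit

n = r.size
# Tridiagonal system: d_i - 0.5*(d_{i-1} + d_{i+1}) = r_i, with d_0 = d_N = 0.
A = np.eye(n) - 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
d_est = np.linalg.solve(A, r)
print(np.round(d_est, 6))   # recovers the interior absolute deflections
```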
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum, which is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate its high performance and computational efficiency.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging
Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.
2014-01-01
Purpose Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
A simple method for processing data with least square method
NASA Astrophysics Data System (ADS)
Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning
2017-08-01
The least squares method is widely used in data processing and error estimation. It has become an essential technique for parameter estimation, data processing, regression analysis and experimental data fitting, and a criterion tool for statistical inference. In measurement data analysis, data following complex rules are usually treated according to the least squares principle, i.e., matrices are used to obtain the final estimate and to improve its accuracy. In this paper, a new way of carrying out the least squares solution is presented; it is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of the method is demonstrated with a concrete example.
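As a concrete instance of the kind of computation discussed, the sketch below fits a straight line by solving the normal equations in matrix form; the data are made up.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])     # measured data

X = np.column_stack([x, np.ones_like(x)])   # design matrix for y = a*x + b
beta = np.linalg.solve(X.T @ X, X.T @ y)    # normal equations: (X'X) beta = X'y
print("slope, intercept:", beta)
```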
NASA Astrophysics Data System (ADS)
Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato
2017-07-01
An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
NASA Astrophysics Data System (ADS)
Chopra, Shruti; Motwani, Sanjay K.; Ahmad, Farhan J.; Khar, Roop K.
2007-11-01
Simple, accurate, reproducible, selective, sensitive and cost-effective UV-spectrophotometric methods were developed and validated for the estimation of trigonelline in bulk and pharmaceutical formulations. Trigonelline was estimated at 265 nm in deionised water and at 264 nm in phosphate buffer (pH 4.5). Beer's law was obeyed in the concentration ranges of 1-20 μg mL⁻¹ (r² = 0.9999) in deionised water and 1-24 μg mL⁻¹ (r² = 0.9999) in the phosphate buffer medium. The apparent molar absorptivity and Sandell's sensitivity coefficient were found to be 4.04 × 10³ L mol⁻¹ cm⁻¹ and 0.0422 μg cm⁻² per 0.001 absorbance unit in deionised water, and 3.05 × 10³ L mol⁻¹ cm⁻¹ and 0.0567 μg cm⁻² per 0.001 absorbance unit in phosphate buffer, respectively. These methods were tested and validated for various parameters according to ICH guidelines. The detection and quantitation limits were found to be 0.12 and 0.37 μg mL⁻¹ in deionised water and 0.13 and 0.40 μg mL⁻¹ in phosphate buffer, respectively. The proposed methods were successfully applied to the determination of trigonelline in pharmaceutical formulations (vaginal tablets and bioadhesive vaginal gels). The results demonstrated that the procedure is accurate, precise, specific and reproducible (percent relative standard deviation <2%), while being simple and less time consuming, and hence can be suitably applied for the estimation of trigonelline in different dosage forms and dissolution studies.
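The quantitation step implied by Beer's law (A = εlc) can be sketched as a linear calibration; the absorbance values below are illustrative, not the paper's calibration data.

```python
import numpy as np

conc = np.array([1.0, 5.0, 10.0, 15.0, 20.0])            # standards, ug/mL
absorb = np.array([0.042, 0.211, 0.424, 0.636, 0.845])   # absorbance at 265 nm (made up)

slope, intercept = np.polyfit(conc, absorb, 1)           # calibration line A = m*c + b
unknown_abs = 0.50
print("estimated conc:", (unknown_abs - intercept) / slope, "ug/mL")
```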
An Inexpensive and Simple Method to Demonstrate Soil Water and Nutrient Flow
ERIC Educational Resources Information Center
Nichols, K. A.; Samson-Liebig, S.
2011-01-01
Soil quality, soil health, and soil sustainability are concepts that are being widely used but are difficult to define and illustrate, especially to a non-technical audience. The objectives of this manuscript were to develop simple and inexpensive methodologies to both qualitatively and quantitatively estimate water infiltration rates (IR),…
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
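For concreteness, a two-class Fisher direction with a brute-force leave-one-out error count is sketched below on synthetic data; note the paper's contribution is precisely to avoid this brute-force refitting with computationally efficient expressions.

```python
import numpy as np

rng = np.random.default_rng(2)
X1 = rng.normal([0, 0], 1.0, (50, 2))
X2 = rng.normal([2, 2], 1.0, (50, 2))
X = np.vstack([X1, X2])
y = np.array([0] * 50 + [1] * 50)

def fisher_fit(X, y):
    # Fisher direction w = Sw^{-1} (mu2 - mu1), with a midpoint threshold
    # (the paper derives the optimal threshold instead).
    m1, m2 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
    w = np.linalg.solve(Sw, m2 - m1)
    thresh = w @ (m1 + m2) / 2
    return w, thresh

errors = 0
for i in range(len(y)):                  # leave-one-out, refitting each time
    mask = np.arange(len(y)) != i
    w, th = fisher_fit(X[mask], y[mask])
    errors += int((X[i] @ w > th) != y[i])
print("LOO error rate:", errors / len(y))
```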
Hatta, Tomoko; Fujinaga, Yasunari; Kadoya, Masumi; Ueda, Hitoshi; Murayama, Hiroaki; Kurozumi, Masahiro; Ueda, Kazuhiko; Komatsu, Michiharu; Nagaya, Tadanobu; Joshita, Satoru; Kodama, Ryo; Tanaka, Eiji; Uehara, Tsuyoshi; Sano, Kenji; Tanaka, Naoki
2010-12-01
To assess the degree of hepatic fat content, simple and noninvasive methods with high objectivity and reproducibility are required. Magnetic resonance imaging (MRI) is one such candidate, although its accuracy remains unclear. We aimed to validate an MRI method for quantifying hepatic fat content by calibrating MRI reading with a phantom and comparing MRI measurements in human subjects with estimates of liver fat content in liver biopsy specimens. The MRI method was performed by a combination of MRI calibration using a phantom and double-echo chemical shift gradient-echo sequence (double-echo fast low-angle shot sequence) that has been widely used on a 1.5-T scanner. Liver fat content in patients with nonalcoholic fatty liver disease (NAFLD, n = 26) was derived from a calibration curve generated by scanning the phantom. Liver fat was also estimated by optical image analysis. The correlation between the MRI measurements and liver histology findings was examined prospectively. Magnetic resonance imaging measurements showed a strong correlation with liver fat content estimated from the results of light microscopic examination (correlation coefficient 0.91, P < 0.001) regardless of the degree of hepatic steatosis. Moreover, the severity of lobular inflammation or fibrosis did not influence the MRI measurements. This MRI method is simple and noninvasive, has excellent ability to quantify hepatic fat content even in NAFLD patients with mild steatosis or advanced fibrosis, and can be performed easily without special devices.
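For orientation, the standard two-point (in-phase/opposed-phase) signal model behind double-echo chemical-shift sequences of this type estimates the fat signal fraction as

$$\mathrm{FF} = \frac{S_{\mathrm{IP}} - S_{\mathrm{OP}}}{2\,S_{\mathrm{IP}}},$$

where $S_{\mathrm{IP}}$ and $S_{\mathrm{OP}}$ are the in-phase and opposed-phase signal intensities. Whether this exact expression or the phantom-calibrated variant described in the study was used is an assumption of this note.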
ERIC Educational Resources Information Center
Hart, Vincent G.
1981-01-01
Two examples are given of ways traffic engineers estimate traffic flow. The first, the Floating Car Method, involves some basic ideas and the notion of relative velocity. The second, Maximum Traffic Flow, involves simple applications of calculus. The material provides insight into specialized applications of mathematics. (MP)
Estimating Lake Volume from Limited Data: A Simple GIS Approach
Lake volume provides key information for estimating residence time or modeling pollutants. Methods for calculating lake volume have relied on dated technologies (e.g. planimeters) or used potentially inaccurate assumptions (e.g. volume of a frustum of a cone). Modern GIS provid...
INFERRING THE ECCENTRICITY DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogg, David W.; Bovy, Jo; Myers, Adam D., E-mail: david.hogg@nyu.ed
2010-12-20
Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision - other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts - so long as the measurements have been communicated as a likelihood function or a posterior sampling.
Data and methodological problems in establishing state gasoline-conservation targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, D.L.; Walton, G.H.
The Emergency Energy Conservation Act of 1979 gives the President the authority to set gasoline-conservation targets for states in the event of a supply shortage. This paper examines data and methodological problems associated with setting state gasoline-conservation targets. The target-setting method currently used is examined and found to have some flaws. Ways of correcting these deficiencies through the use of Box-Jenkins time-series analysis are investigated. A successful estimation of Box-Jenkins models for all states included the estimation of the magnitude of the supply shortages of 1979 in each state and a preliminary estimation of state short-run price elasticities, which were found to vary about a median value of -0.16. The time-series models identified were very simple in structure and lent support to the simple consumption growth model assumed by the current target method. The authors conclude that the flaws in the current method can be remedied either by replacing the current procedures with time-series models or by using the models in conjunction with minor modifications of the current method.
Estimation of S/G ratio in woods using 1064 nm FT-Raman spectroscopy
Umesh P. Agarwal; Sally A. Ralph; Dharshana Padmakshan; Sarah Liu; Steven D. Karlen; Cliff Foster; John Ralph
2015-01-01
Two simple methods based on the 370 cm⁻¹ Raman band intensity were developed for estimation of the syringyl-to-guaiacyl (S/G) ratio in woods. The methods, in principle, are representative of the whole cell wall lignin and not just the portion of lignin that gets cleaved to release monomers, for example, during certain S/G chemical analyses. As such,...
A robust measure of HIV-1 population turnover within chronically infected individuals.
Achaz, G; Palmer, S; Kearney, M; Maldarelli, F; Mellors, J W; Coffin, J M; Wakeley, J
2004-10-01
A simple nonparametric test for population structure was applied to temporally spaced samples of HIV-1 sequences from the gag-pol region within two chronically infected individuals. The results show that temporal structure can be detected for samples separated by about 22 months or more. The performance of the method, which was originally proposed to detect geographic structure, was tested for temporally spaced samples using neutral coalescent simulations. Simulations showed that the method is robust to variation in sample sizes and mutation rates and to the presence/absence of recombination, and that the power to detect temporal structure is high. By comparing levels of temporal structure in simulations to the levels observed in real data, we estimate the effective intra-individual population size of HIV-1 to be between 10³ and 10⁴ viruses, which is in agreement with some previous estimates. Using this estimate and a simple measure of sequence diversity, we estimate an effective neutral mutation rate of about 5 × 10⁻⁶ per site per generation in the gag-pol region. The definition and interpretation of estimates of such "effective" population parameters are discussed.
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean approach suffers from the numerical problem of a low convergence rate: a simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, with multiple MCMC runs at different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case in which four alternative models are postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric-mean method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
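For reference, the standard thermodynamic-integration (path sampling) identity that such multi-temperature MCMC schemes exploit writes the log marginal likelihood as an integral over the heating coefficient $\beta$,

$$\ln m(\mathbf{y}) = \int_0^1 \mathbb{E}_{\beta}\left[\ln L(\mathbf{y}\mid\theta)\right]\, d\beta,$$

where the expectation at each $\beta$ is taken under the power posterior $p_\beta(\theta\mid\mathbf{y}) \propto L(\mathbf{y}\mid\theta)^{\beta}\, p(\theta)$, and the integral is approximated by quadrature across the runs. Whether the study uses this exact estimator or a variant is not stated in the abstract.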
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji
This paper presents a new unified analysis of the estimation errors of model-matching extended-back-EMF estimation methods for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are broadly applicable. As an example of this applicability, a new trajectory-oriented vector control method is proposed that directly realizes a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The analytically derived coordinate orientation rule is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use model-matching extended-back-EMF estimation methods.
Estimation of color filter array data from JPEG images for improved demosaicking
NASA Astrophysics Data System (ADS)
Feng, Wei; Reeves, Stanley J.
2006-02-01
On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
Yasaitis, Laura C; Arcaya, Mariana C; Subramanian, S V
2015-09-01
Creating local population health measures from administrative data would be useful for health policy and public health monitoring purposes. While a wide range of options for estimating such rates exists, from simple spatial smoothers to model-based methods, there are relatively few side-by-side comparisons, especially not with real-world data. In this paper, we compare methods for creating local estimates of acute myocardial infarction rates from Medicare claims data. A Bayesian Markov chain Monte Carlo estimator that incorporated spatial and local random effects performed best, followed by a method-of-moments spatial empirical Bayes estimator. As the former is more complicated and time-consuming, spatial linear empirical Bayes methods may represent a good alternative for non-specialist investigators.
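A minimal sketch of method-of-moments empirical Bayes shrinkage of small-area rates (a simple global-shrinkage variant; the paper's spatial estimators also borrow strength from neighbors), with made-up counts:

```python
import numpy as np

events = np.array([2, 40, 7, 0, 10])          # cases per area (illustrative)
pop = np.array([400, 2500, 900, 150, 5200])   # populations
rate = events / pop

gmean = events.sum() / pop.sum()              # global mean rate
# Method-of-moments estimate of the between-area variance component.
between = max(np.average((rate - gmean) ** 2, weights=pop) - gmean / pop.mean(), 0)
w = between / (between + gmean / pop)         # reliability weight per area
eb = w * rate + (1 - w) * gmean               # shrunken (empirical Bayes) rates
print(np.round(eb * 1000, 2), "per 1000")
```

Small areas (low population) get pulled strongly toward the global mean, while large areas keep rates close to their observed values.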
NASA Astrophysics Data System (ADS)
Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart
2016-11-01
The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Monte Carlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows a "worst-case" transfer condition to be reproduced, is based on dedicated SG pixel structures and is particularly suitable for comparing transfer efficiency performance across different pixel geometries.
On the tsunami wave-submerged breakwater interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filianoti, P.; Piscopo, R.
The tsunami wave loads on a submerged rigid breakwater are inertial. This result arises from the simple calculation method proposed here, and it is confirmed by comparison with results obtained by other researchers. The method is based on an estimate of the speed drop of the tsunami wave passing over the breakwater. The calculation is rigorous for a sinusoidal wave interacting with a rigid submerged obstacle, in the framework of linear wave theory. This new approach gives a useful and simple tool for estimating tsunami loads on submerged breakwaters. An unexpected novelty came out of a worked example: assuming the same wave height, storm waves are more dangerous than tsunami waves for the safety against sliding of submerged breakwaters.
Calculation of the time resolution of the J-PET tomograph using kernel density estimation
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
2017-06-01
In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
Qian, Song S; Lyons, Regan E
2006-10-01
We present a Bayesian approach for characterizing background contaminant concentration distributions using data from sites that may have been contaminated. Our method, focused on estimation, resolves several technical problems of the existing methods sanctioned by the U.S. Environmental Protection Agency (USEPA) (a hypothesis testing based method), resulting in a simple and quick procedure for estimating background contaminant concentrations. The proposed Bayesian method is applied to two data sets from a federal facility regulated under the Resource Conservation and Restoration Act. The results are compared to background distributions identified using existing methods recommended by the USEPA. The two data sets represent low and moderate levels of censorship in the data. Although an unbiased estimator is elusive, we show that the proposed Bayesian estimation method will have a smaller bias than the EPA recommended method.
David A. Tallmon; Dave Gregovich; Robin S. Waples; C. Scott Baker; Jennifer Jackson; Barbara L. Taylor; Eric Archer; Karen K. Martien; Fred W. Allendorf; Michael K. Schwartz
2010-01-01
The utility of microsatellite markers for inferring population size and trend has not been rigorously examined, even though these markers are commonly used to monitor the demography of natural populations. We assessed the ability of a linkage disequilibrium estimator of effective population size (Ne) and a simple capture-recapture estimator of abundance (N) to quantify...
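The abstract does not reproduce the estimator; for reference, a widely used form of the linkage disequilibrium method (following Hill's approximation, and possibly differing from the variant assessed here in its bias corrections) inverts $E[\hat{r}^2] \approx 1/(3N_e) + 1/S$ for a sample of $S$ individuals:

$$\hat{N}_e = \frac{1}{3\left(\hat{r}^2 - 1/S\right)},$$

where $\hat{r}^2$ is the mean squared allele-frequency correlation over pairs of loci.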
Comparison of volume estimation methods for pancreatic islet cells
NASA Astrophysics Data System (ADS)
Dvořák, Jiří; Švihlík, Jan; Habart, David; Kybic, Jan
2016-03-01
In this contribution we study different methods of automatic volume estimation for pancreatic islets, which can be used in the quality control step prior to islet transplantation. The total islet volume is an important criterion in quality control. The individual islet volume distribution is also of interest: it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The inputs to the volume estimation methods are segmented images of individual islets; the segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have spherical or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
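A sketch of the shape-based estimators compared: convert each segmented islet's 2D area to an equivalent diameter, then to a volume under a sphere or ellipsoid assumption. The areas and the ellipsoid aspect ratio below are made up.

```python
import numpy as np

areas = np.array([5200.0, 11800.0, 30100.0])   # segmented islet areas, um^2
d_eq = 2 * np.sqrt(areas / np.pi)              # equivalent circular diameters, um

v_sphere = (np.pi / 6) * d_eq ** 3             # sphere assumption, um^3

major = d_eq * 1.2                             # hypothetical major axes
v_ellipsoid = (np.pi / 6) * major * d_eq ** 2  # ellipsoid: (pi/6) * d1 * d2 * d3
print(v_sphere, v_ellipsoid)
```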
Coins and Costs: A Simple and Rapid Assessment of Basic Financial Knowledge
ERIC Educational Resources Information Center
Willner, Paul; Bailey, Rebecca; Dymond, Simon; Parry, Rhonwen
2011-01-01
Introduction: We describe a simple and rapid screening test for basic financial knowledge that is suitable for administration to people with mild intellectual disabilities. Method: The Coins and Costs test asks respondents to name coins, and to estimate prices of objects ranging between 1 British Pound (an ice cream) and 100K British Pounds (a…
Multi-atlas segmentation enables robust multi-contrast MRI spleen segmentation for splenomegaly
NASA Astrophysics Data System (ADS)
Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L.; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.
2017-02-01
Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE) approach. To further control the outliers, semi-automated craniocaudal length based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior in a fashion to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by the L-SIMPLE (≈1 min manual effort per scan), which achieved 0.9713 Pearson correlation compared with the manual segmentation. The results demonstrated that the multi-atlas segmentation is able to achieve accurate spleen segmentation from the multi-contrast splenomegaly MRI scans.
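A toy sketch of SIMPLE-style iterative atlas selection on synthetic label maps: fuse the candidate segmentations, score each against the fusion, drop poor performers, and repeat until stable. Registration, the fusion rule details, and the L-SIMPLE length prior are omitted.

```python
import numpy as np

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def simple_select(atlas_labels, alpha=1.0, max_iter=10):
    keep = list(range(len(atlas_labels)))
    for _ in range(max_iter):
        # Majority-vote fusion of the currently kept atlases.
        fusion = np.mean([atlas_labels[i] for i in keep], axis=0) >= 0.5
        scores = np.array([dice(atlas_labels[i], fusion) for i in keep])
        thresh = scores.mean() - alpha * scores.std()   # performance cutoff
        new_keep = [i for i, s in zip(keep, scores) if s >= thresh]
        if new_keep == keep or len(new_keep) < 2:
            break
        keep = new_keep
    return keep, fusion

rng = np.random.default_rng(3)
truth = np.zeros((32, 32), bool)
truth[8:24, 8:24] = True
# Three decent atlases plus one outlier (40% label noise).
atlases = [np.logical_xor(truth, rng.random(truth.shape) < p)
           for p in (0.02, 0.03, 0.05, 0.4)]
keep, fusion = simple_select(atlases)
print("kept atlases:", keep)   # the noisy outlier is rejected
```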
NASA Astrophysics Data System (ADS)
Akita, T.; Takaki, R.; Shima, E.
2012-04-01
An adaptive estimation method for a spacecraft thermal mathematical model is presented. The method is based on the ensemble Kalman filter, which can effectively handle the nonlinearities contained in the thermal model. The state space equations of the thermal mathematical model are derived, where both temperature and uncertain thermal characteristic parameters are considered as the state variables. In the method, the thermal characteristic parameters are automatically estimated as the outputs of the filtered state variables, whereas, in the usual thermal model correlation, they are manually identified by experienced engineers using a trial-and-error approach. A numerical experiment on a simple small satellite is provided to verify the effectiveness of the presented method.
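To make the augmented-state idea concrete, the following minimal sketch (an illustrative toy, not the authors' implementation) runs an ensemble Kalman filter on a one-node thermal model in which an unknown conductance G is appended to the state and inferred from noisy temperature measurements; the capacitance, noise levels, and ensemble size are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative one-node thermal model: C * dT/dt = -G*(T - T_env) + Q
C, T_env, Q, dt = 500.0, 250.0, 40.0, 10.0
G_true = 1.5                             # unknown to the filter

def step(T, G):
    """Explicit Euler step of the nonlinear thermal model."""
    return T + dt / C * (-G * (T - T_env) + Q)

# Ensemble Kalman filter with augmented state x = [T, G]
N = 100                                  # ensemble size
ens = np.column_stack([
    rng.normal(300.0, 5.0, N),           # initial temperature guesses
    rng.normal(1.0, 0.5, N),             # initial conductance guesses
])
R = 0.5 ** 2                             # measurement noise variance

T_truth = 300.0
for k in range(200):
    T_truth = step(T_truth, G_true)
    y = T_truth + rng.normal(0.0, np.sqrt(R))       # noisy measurement

    # Forecast: propagate members; parameter follows a small random walk
    ens[:, 0] = step(ens[:, 0], ens[:, 1])
    ens[:, 1] += rng.normal(0.0, 0.005, N)
    ens[:, 1] = np.clip(ens[:, 1], 0.05, None)      # keep conductance physical

    # Analysis: Kalman update built from ensemble statistics
    dx = ens - ens.mean(axis=0)
    P_xy = dx.T @ dx[:, 0] / (N - 1)                # cov(state, predicted obs)
    P_yy = dx[:, 0] @ dx[:, 0] / (N - 1) + R
    K = P_xy / P_yy                                 # Kalman gain
    innov = y + rng.normal(0.0, np.sqrt(R), N) - ens[:, 0]  # perturbed obs
    ens += np.outer(innov, K)

print(f"estimated G = {ens[:, 1].mean():.3f} (true {G_true})")
```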
Robustifying blind image deblurring methods by simple filters
NASA Astrophysics Data System (ADS)
Liu, Yan; Zeng, Xiangrong; Huangpeng, Qizi; Fan, Jun; Zhou, Jinglun; Feng, Jing
2016-07-01
The state-of-the-art blind image deblurring (BID) methods are sensitive to noise, and most of them can deal with only small levels of Gaussian noise. In this paper, we use simple filters to present a robust BID framework which is able to robustify existing BID methods against high-level Gaussian noise and/or non-Gaussian noise. Experiments on images in the presence of Gaussian noise, impulse noise (salt-and-pepper noise and random-valued noise) and mixed Gaussian-impulse noise, and on a real-world blurry and noisy image, show that the proposed method estimates sharper kernels and better images, faster than other methods.
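The abstract does not specify which filters are used; the hedged sketch below illustrates the general robustification pattern with assumed median and Gaussian prefilters: estimate the blur kernel from a filtered copy of the image (kernel estimation being the noise-sensitive step), then deconvolve with that kernel. The `estimate_kernel` and `deconvolve` callables are hypothetical stand-ins for any existing BID method.

```python
from scipy.ndimage import median_filter, gaussian_filter

def robustified_bid(blurry_noisy, estimate_kernel, deconvolve):
    """Wrap an existing blind-deblurring pipeline with simple prefilters.

    estimate_kernel(image) -> kernel and deconvolve(image, kernel) -> image
    stand in for any off-the-shelf BID method (hypothetical interfaces).
    """
    # 1. Suppress impulse (salt-and-pepper / random-valued) noise.
    prefiltered = median_filter(blurry_noisy, size=3)
    # 2. Tame residual high-level Gaussian noise before kernel estimation.
    prefiltered = gaussian_filter(prefiltered, sigma=1.0)
    # 3. Estimate the blur kernel from the filtered image; the kernel
    #    estimate is far less noise-sensitive this way.
    kernel = estimate_kernel(prefiltered)
    # 4. Run the (non-blind) deconvolution with the estimated kernel.
    return deconvolve(blurry_noisy, kernel)
```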
Estimation method of finger tapping dynamics using simple magnetic detection system
NASA Astrophysics Data System (ADS)
Kandori, Akihiko; Sano, Yuko; Miyashita, Tsuyoshi; Okada, Yoshihisa; Irokawa, Masataka; Shima, Keisuke; Tsuji, Toshio; Yokoe, Masaru; Sakoda, Saburo
2010-05-01
We have developed a simple estimation method for a finger tapping dynamics model to investigate muscle resistance and stiffness during tapping movement in normal subjects. We measured finger tapping movements of 207 normal subjects using a magnetic finger tapping detection system. Each subject tapped two fingers in time with a metronome at 1, 2, 3, 4, and 5 Hz. The velocity and acceleration values for both the closing and opening tapping data were used to estimate a finger tapping dynamics model. Using the frequency response of the ratio of acceleration to velocity of the mechanical impedance parameters, we estimated the resistance (friction coefficient) and compliance (stiffness). We found two dynamics models, for the maximum open position and the tap position. In the maximum open position, the extensor muscle resistance was twice as high as the flexor muscle resistance, and males had a higher spring constant. In the tap position, the flexor muscle resistance was much higher than the extensor muscle resistance. This indicates that the tapping dynamics in the maximum open position are controlled by the balance of extensor and flexor muscle friction resistances and the flexor stiffness, and that the flexor friction resistance is the main component in the tap position. It can be concluded that our estimation method makes it possible to understand the tapping dynamics.
Estimation method of finger tapping dynamics using simple magnetic detection system.
Kandori, Akihiko; Sano, Yuko; Miyashita, Tsuyoshi; Okada, Yoshihisa; Irokawa, Masataka; Shima, Keisuke; Tsuji, Toshio; Yokoe, Masaru; Sakoda, Saburo
2010-05-01
We have developed a simple estimation method for a finger tapping dynamics model to investigate muscle resistance and stiffness during tapping movement in normal subjects. We measured finger tapping movements of 207 normal subjects using a magnetic finger tapping detection system. Each subject tapped two fingers in time with a metronome at 1, 2, 3, 4, and 5 Hz. The velocity and acceleration values for both the closing and opening tapping data were used to estimate a finger tapping dynamics model. Using the frequency response of the ratio of acceleration to velocity of the mechanical impedance parameters, we estimated the resistance (friction coefficient) and compliance (stiffness). We found two dynamics models, for the maximum open position and the tap position. In the maximum open position, the extensor muscle resistance was twice as high as the flexor muscle resistance, and males had a higher spring constant. In the tap position, the flexor muscle resistance was much higher than the extensor muscle resistance. This indicates that the tapping dynamics in the maximum open position are controlled by the balance of extensor and flexor muscle friction resistances and the flexor stiffness, and that the flexor friction resistance is the main component in the tap position. It can be concluded that our estimation method makes it possible to understand the tapping dynamics.
Improving Estimation of Ground Casualty Risk From Reentering Space Objects
NASA Technical Reports Server (NTRS)
Ostrom, Chris L.
2017-01-01
A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update of the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or on a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses, first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and, second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
Improving Estimation of Ground Casualty Risk from Reentering Space Objects
NASA Technical Reports Server (NTRS)
Ostrom, C.
2017-01-01
A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update of the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or on a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses, first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination, and second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
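For the spherical-Earth case, the analytical latitude-residence calculation described above has a well-known closed form: for a circular orbit, sin(lat) = sin(i)·sin(u), with the argument of latitude u uniform in time. A minimal sketch under that assumption (the papers' numerical method additionally handles oblateness, which is not reproduced here):

```python
import numpy as np

def band_fraction(inc_deg, lat_lo_deg, lat_hi_deg):
    """Fraction of time a circular orbit of inclination inc_deg spends
    with |latitude| in [lat_lo_deg, lat_hi_deg], spherical Earth only.

    For a circular orbit sin(lat) = sin(i)*sin(u) with u uniform in time,
    so the cumulative time fraction with |lat| <= phi is
    (2/pi) * arcsin(sin(phi) / sin(i)).
    """
    i = np.radians(inc_deg)
    def cdf(phi_deg):
        phi = np.radians(min(phi_deg, inc_deg))   # latitude never exceeds i
        return (2.0 / np.pi) * np.arcsin(np.sin(phi) / np.sin(i))
    return cdf(lat_hi_deg) - cdf(lat_lo_deg)

# Example: a 51.6 deg inclination object; residence density peaks near i
print(band_fraction(51.6, 0, 10))      # equatorial band, modest fraction
print(band_fraction(51.6, 50, 51.6))   # narrow band near the inclination
```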
Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio
Koltun, G.F.
2003-01-01
Regional equations for estimating 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood-peak discharges at ungaged sites on rural, unregulated streams in Ohio were developed by means of ordinary and generalized least-squares (GLS) regression techniques. One-variable, simple equations and three-variable, full-model equations were developed on the basis of selected basin characteristics and flood-frequency estimates determined for 305 streamflow-gaging stations in Ohio and adjacent states. The average standard errors of prediction ranged from about 39 to 49 percent for the simple equations, and from about 34 to 41 percent for the full-model equations. Flood-frequency estimates determined by means of log-Pearson Type III analyses are reported along with weighted flood-frequency estimates, computed as a function of the log-Pearson Type III estimates and the regression estimates. Values of explanatory variables used in the regression models were determined from digital spatial data sets by means of a geographic information system (GIS), with the exception of drainage area, which was determined by digitizing the area within basin boundaries manually delineated on topographic maps. Use of GIS-based explanatory variables represents a major departure in methodology from that described in previous reports on estimating flood-frequency characteristics of Ohio streams. Examples are presented illustrating application of the regression equations to ungaged sites on ungaged and gaged streams. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site on the same stream. A region-of-influence method, which employs a computer program to estimate flood-frequency characteristics for ungaged sites based on data from gaged sites with similar characteristics, was also tested and compared to the GLS full-model equations. For all recurrence intervals, the GLS full-model equations had superior prediction accuracy relative to the simple equations and therefore are recommended for use.
NASA Astrophysics Data System (ADS)
Rinsoz, Thomas; Duquenne, Philippe; Greff-Mirguet, Guylaine; Oppliger, Anne
Traditional culture-dependent methods to quantify and identify airborne microorganisms are limited by factors such as short-duration sampling times and the inability to count non-culturable or non-viable bacteria. Consequently, the quantitative assessment of bioaerosols is often underestimated. Use of the real-time quantitative polymerase chain reaction (Q-PCR) to quantify bacteria in environmental samples presents an alternative method, which should overcome this problem. The aim of this study was to evaluate the performance of a real-time Q-PCR assay as a simple and reliable way to quantify the airborne bacterial load within poultry houses and sewage treatment plants, in comparison with epifluorescence microscopy and culture-dependent methods. The estimates of bacterial load that we obtained from real-time Q-PCR and epifluorescence methods are comparable; however, our analysis of sewage treatment plants indicates these methods give values 270-290-fold greater than those obtained by the "impaction on nutrient agar" method. The culture-dependent method of air impaction on nutrient agar was also inadequate in poultry houses, as was the impinger-culture method, which gave a bacterial load estimate 32-fold lower than that obtained by Q-PCR. Real-time quantitative PCR thus proves to be a reliable, discerning, and simple method that could be used to estimate the airborne bacterial load in a broad variety of other environments expected to carry high numbers of airborne bacteria.
Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide
2014-06-01
The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result confirms that the constructed mathematical model satisfactorily agrees with the time-series datasets of seven metabolite concentrations.
A Method of the UMTS-FDD Network Design Based on Universal Load Characteristics
NASA Astrophysics Data System (ADS)
Gajewski, Slawomir
In this paper, an original method for UMTS radio network design is presented. The method is based on a simple way of estimating the capacity-coverage trade-off for the WCDMA/FDD radio interface. This trade-off is estimated by using universal load characteristics and normalized coverage characteristics. The characteristics are useful for any propagation environment as well as for any service performance requirements. The practical applications of these characteristics to radio network planning and maintenance are described.
Numerical approach for ECT by using boundary element method with Laplace transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enokizono, M.; Todaka, T.; Shibao, K.
1997-03-01
This paper presents an inverse analysis using the BEM with Laplace transform. The method is applied to a simple problem in eddy current testing (ECT). Some crack shapes in a conductive specimen are estimated from distributions of the transient eddy current on its sensing surface and of the magnetic flux density in the liftoff space. Because the transient behavior includes information on various frequency components, the method is applicable to the shape estimation of comparatively small cracks.
Simulation methods to estimate design power: an overview for applied research
2011-01-01
Background: Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods: We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results: We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions: Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447
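A minimal sketch of the simulation approach for the simple (individual-randomized) case: draw many synthetic trials under an assumed effect size, test each, and report the rejection fraction as the power. All numbers below are illustrative, not the article's examples.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

def simulated_power(n_per_arm, effect, sd=1.0, alpha=0.05, n_sim=2000):
    """Monte Carlo power: fraction of simulated trials rejecting H0."""
    rejections = 0
    for _ in range(n_sim):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        rejections += ttest_ind(treated, control, equal_var=False).pvalue < alpha
    return rejections / n_sim

# Illustrative: power to detect a 0.15 SD shift in a growth outcome
for n in (200, 400, 800):
    print(n, simulated_power(n, effect=0.15))
```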
Estimating the R-curve from residual strength data
NASA Technical Reports Server (NTRS)
Orange, T. W.
1985-01-01
A method is presented for estimating the crack-extension resistance curve (R-curve) from residual-strength (maximum load against original crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information can be used to estimate the failure loads of more complicated structures of the same material and thickness. The fundamentals of the R-curve concept are reviewed first. Then the analytical basis for the estimation method is presented. The estimation method has been verified in two ways. Data from the literature (involving several materials and different types of specimens) are used to show that the estimated R-curve is in good agreement with the measured R-curve. A recent predictive blind round-robin program offers a more crucial test. When the actual failure loads are disclosed, the predictions are found to be in good agreement.
NASA Astrophysics Data System (ADS)
Shimada, M.; Sato, C.; Hoshi, Y.; Yamada, Y.
2009-08-01
Our newly developed method using spatially and time-resolved reflectances can easily estimate the absorption coefficients of each layer in a two-layered medium if the thickness of the upper layer and the reduced scattering coefficients of the two layers are known a priori. We experimentally validated this method using phantoms and examined its possibility of estimating the absorption coefficients of the tissues in human heads. In the case of a homogeneous plastic phantom (polyacetal block), the absorption coefficient estimated by our method agreed well with that obtained by a conventional method. Also, in the case of two-layered phantoms, our method successfully estimated the absorption coefficients of the two layers. Furthermore, the absorption coefficients of the extracerebral and cerebral tissue inside human foreheads were estimated under the assumption that the human heads were two-layered media. It was found that the absorption coefficients of the cerebral tissues were larger than those of the extracerebral tissues.
Lepora, Nathan F; Blomeley, Craig P; Hoyland, Darren; Bracci, Enrico; Overton, Paul G; Gurney, Kevin
2011-11-01
The study of active and passive neuronal dynamics usually relies on a sophisticated array of electrophysiological, staining and pharmacological techniques. We describe here a simple complementary method that recovers many findings of these more complex methods but relies only on a basic patch-clamp recording approach. Somatic short and long current pulses were applied in vitro to striatal medium spiny (MS) and fast spiking (FS) neurons from juvenile rats. The passive dynamics were quantified by fitting two-compartment models to the short current pulse data. Lumped conductances for the active dynamics were then found by compensating this fitted passive dynamics within the current-voltage relationship from the long current pulse data. These estimated passive and active properties were consistent with previous more complex estimations of the neuron properties, supporting the approach. Relationships within the MS and FS neuron types were also evident, including a gradation of MS neuron properties consistent with recent findings about D1 and D2 dopamine receptor expression. Application of the method to simulated neuron data supported the hypothesis that it gives reasonable estimates of membrane properties and gross morphology. Therefore detailed information about the biophysics can be gained from this simple approach, which is useful for both classification of neuron type and biophysical modelling. Furthermore, because these methods rely upon no manipulations to the cell other than patch clamping, they are ideally suited to in vivo electrophysiology.
On-line estimation and compensation of measurement delay in GPS/SINS integration
NASA Astrophysics Data System (ADS)
Yang, Tao; Wang, Wei
2008-10-01
The chief aim of this paper is to propose a simple on-line estimation and compensation method for GPS/SINS measurement delay. The causes of time delay in GPS/SINS integration are analyzed in this paper. New Kalman filter state equations augmented by the measurement delay, and modified measurement equations, are derived. Based on an open-loop Kalman filter, several simulations are run, the results of which show that with the proposed method, the estimation and compensation error of the measurement delay is below 0.1 s.
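The paper's derivation is not reproduced here; the following scalar sketch illustrates one common way to augment a Kalman filter state with an unknown measurement delay tau, using the first-order approximation z ≈ p(t) - tau·v(t). The measurement is then nonlinear in the augmented state, so an extended Kalman filter update is used, and the delay only becomes observable when the velocity varies; all values are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, tau_true = 0.1, 0.35            # assumed true measurement latency [s]
Rk = 0.05 ** 2                      # measurement noise variance

# EKF state: [position, velocity, tau]; the delay is modeled as constant.
x = np.array([0.0, 0.0, 0.0])
P = np.diag([1.0, 1.0, 0.25])
Qk = np.diag([1e-4, 5e-3, 1e-6])
F = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

p_true = 0.0
for k in range(2000):
    t = k * dt
    v_true = 2.0 + np.sin(0.2 * t)  # varying speed makes tau observable
    p_true += v_true * dt
    # Delayed position measurement, first order in tau: z ~ p - tau*v
    z = p_true - tau_true * v_true + rng.normal(0.0, np.sqrt(Rk))

    x = F @ x                        # predict
    P = F @ P @ F.T + Qk

    p_, v_, tau_ = x                 # update with h(x) = p - tau*v
    H = np.array([1.0, -tau_, -v_])  # Jacobian of h wrt [p, v, tau]
    S = H @ P @ H + Rk
    K = P @ H / S
    x = x + K * (z - (p_ - tau_ * v_))
    P = (np.eye(3) - np.outer(K, H)) @ P

print(f"estimated delay: {x[2]:.3f} s (true {tau_true} s)")
```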
Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen
2014-01-27
A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions is needed, e.g., a ground tile. Some common and simple image processing techniques, such as background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method does not need expensive high-resolution webcams or complicated pattern recognition methods, but just a few simple estimation formulas. From the experimental results, the proposed robot localization method is reliable and effective in an indoor environment.
Validation of a partial coherence interferometry method for estimating retinal shape
Verkicharla, Pavan K.; Suheimat, Marwan; Pope, James M.; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L.; Atchison, David A.
2015-01-01
To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stages 1 and 2 involved a basic model eye without and with surface ray deviation, respectively, and Stage 3 used a model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertainty of MRI (12-14%), the results of the study indicate good agreement between MRI and all three stages of PCI modeling, with <4% and <7% differences in retinal shapes along the horizontal and vertical meridians, respectively. Stages 2 and 3 gave slightly different retinal co-ordinates than Stage 1, and we recommend the intermediate Stage 2 as providing a simple and valid method of determining retinal shape from PCI data. PMID:26417496
Validation of a partial coherence interferometry method for estimating retinal shape.
Verkicharla, Pavan K; Suheimat, Marwan; Pope, James M; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L; Atchison, David A
2015-09-01
To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stages 1 and 2 involved a basic model eye without and with surface ray deviation, respectively, and Stage 3 used a model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertainty of MRI (12-14%), the results of the study indicate good agreement between MRI and all three stages of PCI modeling, with <4% and <7% differences in retinal shapes along the horizontal and vertical meridians, respectively. Stages 2 and 3 gave slightly different retinal co-ordinates than Stage 1, and we recommend the intermediate Stage 2 as providing a simple and valid method of determining retinal shape from PCI data.
Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen
2014-01-01
A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions is needed, e.g., a ground tile. Some common and simple image processing techniques, such as background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method does not need expensive high-resolution webcams or complicated pattern recognition methods, but just a few simple estimation formulas. From the experimental results, the proposed robot localization method is reliable and effective in an indoor environment. PMID:24473282
Oxygen transfer rate estimation in oxidation ditches from clean water measurements.
Abusam, A; Keesman, K J; Meinema, K; Van Straten, G
2001-06-01
Standard methods for the determination of oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method to be used in the estimation of oxygen transfer rate in oxidation ditches from clean water measurements. The new method uses a loop-of-CSTRs model, which can be easily incorporated within control algorithms, for modelling oxidation ditches. Further, this method assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLaVA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. From application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).
Non-invasive body temperature measurement of wild chimpanzees using fecal temperature decline.
Jensen, Siv Aina; Mundry, Roger; Nunn, Charles L; Boesch, Christophe; Leendertz, Fabian H
2009-04-01
New methods are required to increase our understanding of pathologic processes in wild mammals. We developed a noninvasive field method to estimate the body temperature of wild living chimpanzees habituated to humans, based on statistically fitting the temperature decline of feces after defecation. The method was established with the use of control measures of human rectal temperature and subsequent changes in fecal temperature over time. The method was then applied to temperature data collected from wild chimpanzee feces. In humans, we found good correspondence between the temperature estimated by the method and the actual rectal temperature that was measured (maximum deviation 0.22 °C). The method was successfully applied, and the average estimated temperature of the chimpanzees was 37.2 °C. This simple-to-use field method reliably estimates the body temperature of wild chimpanzees and probably also other large mammals.
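A hedged sketch of the core idea: fit a Newtonian cooling curve to fecal temperatures recorded after defecation and read off the intercept as the body-temperature estimate. The measurement values below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def cooling(t, T0, Ta, k):
    """Newtonian cooling: temperature declining from T0 toward ambient Ta."""
    return Ta + (T0 - Ta) * np.exp(-k * t)

# Hypothetical field readings: minutes after defecation, temperature (C)
t_min = np.array([1, 2, 3, 5, 7, 10, 15])
T_obs = np.array([36.1, 35.4, 34.8, 33.7, 32.9, 31.9, 30.6])

(T0, Ta, k), _ = curve_fit(cooling, t_min, T_obs, p0=(37.0, 25.0, 0.1))
print(f"estimated body temperature at t = 0: {T0:.1f} C")
```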
NASA Astrophysics Data System (ADS)
Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.
2005-12-01
The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe’s developments of Lighthill’s aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also an excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1988-01-01
Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.
Kim, Bongseok; Kim, Sangdong; Lee, Jonghun
2018-01-01
We propose a novel discrete Fourier transform (DFT)-based direction of arrival (DOA) estimation by a virtual array extension using simple multiplications for frequency modulated continuous wave (FMCW) radar. DFT-based DOA estimation is usually employed in radar systems because it provides the advantage of low complexity for real-time signal processing. In order to enhance the resolution of DOA estimation or to decrease the missing detection probability, it is essential to have a considerable number of channel signals. However, due to constraints of space and cost, it is not easy to increase the number of channel signals. In order to address this issue, we increase the number of effective channel signals by generating virtual channel signals using simple multiplications of the given channel signals. The increase in channel signals allows the proposed scheme to detect DOA more accurately than the conventional scheme while using the same number of channel signals. Simulation results show that the proposed scheme achieves improved DOA estimation compared to the conventional DFT-based method. Furthermore, the effectiveness of the proposed scheme in a practical environment is verified through the experiment. PMID:29758016
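A toy illustration of the mechanics only (single dominant target, near-noiseless; the paper's exact scheme is not reproduced): for one source on a half-wavelength array, multiplying channel signals adds their phases, so products of the given channels act as virtual channels farther along the array, which then feed the usual zero-padded spatial FFT. Noise cross-terms in the products are ignored here.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4                       # physical channels (half-wavelength ULA)
theta = np.radians(17.0)    # true direction of arrival

n = np.arange(N)
x = np.exp(1j * np.pi * n * np.sin(theta))        # single-target snapshot
x += 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Virtual array extension: for one dominant target, multiplying channels
# adds their phases, e.g. x[2]*x[3]/x[0] behaves like a channel at n = 5.
v = np.empty(2 * N - 1, complex)
v[:N] = x
for k in range(N, 2 * N - 1):
    v[k] = x[k - (N - 1)] * x[N - 1] / x[0]       # phase k*pi*sin(theta)

# DFT-based DOA: peak of a zero-padded spatial FFT
for name, sig in (("physical (4 ch)", x), ("virtual (7 ch)", v)):
    spec = np.abs(np.fft.fft(sig, 4096))
    f = np.fft.fftfreq(4096)[np.argmax(spec)]     # cycles per element
    est = np.degrees(np.arcsin(2 * f))            # pi*sin(theta) = 2*pi*f
    print(f"{name}: DOA estimate = {est:.2f} deg")
```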
Optimal Bandwidth for Multitaper Spectrum Estimation
Haley, Charlotte L.; Anitescu, Mihai
2017-07-04
A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE for the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
A simple method for estimating gross carbon budgets for vegetation in forest ecosystems.
Ryan, Michael G.
1991-01-01
Gross carbon budgets for vegetation in forest ecosystems are difficult to construct because of problems in scaling flux measurements made on small samples over short periods of time and in determining belowground carbon allocation. Recently, empirical relationships have been developed to estimate total belowground carbon allocation from litterfall, and maintenance respiration from tissue nitrogen content. I outline a method for estimating gross carbon budgets using these empirical relationships together with data readily available from ecosystem studies (aboveground wood and canopy production, aboveground wood and canopy biomass, litterfall, and tissue nitrogen contents). Estimates generated with this method are compared with annual carbon fixation estimates from the Forest-BGC model for a lodgepole pine (Pinus contorta Dougl.) and a Pacific silver fir (Abies amabilis Dougl.) chronosequence.
On Some Confidence Intervals for Estimating the Mean of a Skewed Population
ERIC Educational Resources Information Center
Shi, W.; Kibria, B. M. Golam
2007-01-01
A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…
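The note's exact adjustment formulas are not reproduced in this abstract; the sketch below shows only the general shape of such intervals under stated assumptions: a t-style interval centered on the sample median, with spread taken either from the usual standard deviation ("Median t") or from the normal-consistent MAD ("Mad t").

```python
import numpy as np
from scipy import stats

def median_t_ci(x, alpha=0.05):
    """'Median t'-style interval: Student's t interval centered on the
    median (a sketch of the idea, not the note's exact formulas)."""
    x = np.asarray(x, float)
    n = len(x)
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    half = t * x.std(ddof=1) / np.sqrt(n)
    med = np.median(x)
    return med - half, med + half

def mad_t_ci(x, alpha=0.05):
    """'Mad t'-style interval: same center, MAD-based (robust) spread."""
    x = np.asarray(x, float)
    n = len(x)
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    med = np.median(x)
    sigma_hat = 1.4826 * np.median(np.abs(x - med))  # normal-consistent MAD
    half = t * sigma_hat / np.sqrt(n)
    return med - half, med + half

skewed = np.random.default_rng(4).lognormal(0.0, 1.0, 50)
print(median_t_ci(skewed))
print(mad_t_ci(skewed))
```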
Estimating annual bole biomass production using uncertainty analysis
Travis J. Woolley; Mark E. Harmon; Kari B. O' Connell
2007-01-01
Two common sampling methodologies coupled with a simple statistical model were evaluated to determine the accuracy and precision of annual bole biomass production (BBP) and inter-annual variability estimates using this type of approach. We performed an uncertainty analysis using Monte Carlo methods in conjunction with radial growth core data from trees in three Douglas...
A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...
Mortality estimation from carcass searches using the R-package carcass – a tutorial
This article is a tutorial for the R-package carcass. It starts with a short overview of common methods used to estimate mortality based on carcass searches. Then, it guides step by step through a simple example. First, the proportion of animals that fall into the search area is ...
Detection of osmotic damages in GRP boat hulls
NASA Astrophysics Data System (ADS)
Krstulović-Opara, L.; Domazet, Ž.; Garafulić, E.
2013-09-01
Infrared thermography, as a tool of non-destructive testing, is a method enabling visualization and estimation of structural anomalies and differences in a structure's topography. In the presented paper, the problem of osmotic damage in submerged glass-reinforced polymer structures is addressed. The osmotic damage can be detected by simple humidity gauging, but testing methods for its proper evaluation and estimation are restricted and hardly applicable. In this paper it is demonstrated that infrared thermography, based on estimation of heat wave propagation, can be used. Three methods are addressed: pulsed thermography, the Fast Fourier Transform, and the continuous Morlet wavelet. Additional image processing based on a gradient approach is applied to all addressed methods. It is shown that the continuous Morlet wavelet is the most appropriate method for detection of osmotic damage.
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji; Sano, Kousuke
This paper presents a new unified analysis of the estimation errors of model-matching phase-estimation methods such as rotor-flux state-observers, back EMF state-observers, and back EMF disturbance-observers, for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using any of the model-matching phase-estimation methods.
An empirical Bayes approach to analyzing recurring animal surveys
Johnson, D.H.
1989-01-01
Recurring estimates of the size of animal populations are often required by biologists or wildlife managers. Because of cost or other constraints, estimates frequently lack the accuracy desired but cannot readily be improved by additional sampling. This report proposes a statistical method employing empirical Bayes (EB) estimators as alternatives to those customarily used to estimate population size, and evaluates them by a subsampling experiment on waterfowl surveys. EB estimates, especially a simple limited-translation version, were more accurate and provided shorter confidence intervals with greater coverage probabilities than customary estimates.
Program CONTRAST--A general program for the analysis of several survival or recovery rate estimates
Hines, J.E.; Sauer, J.R.
1989-01-01
This manual describes the use of program CONTRAST, which implements a generalized procedure for the comparison of several rate estimates. This method can be used to test both simple and composite hypotheses about rate estimates, and we discuss its application to multiple comparisons of survival rate estimates. Several examples of the use of program CONTRAST are presented. Program CONTRAST will run on IBM-compatible computers, and requires estimates of the rates to be tested, along with associated variance and covariance estimates.
Quantum State Tomography via Linear Regression Estimation
Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan
2013-01-01
A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d^4), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
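A minimal qubit (d = 2) sketch of the LRE recipe: expand the state in the Pauli basis, turn measured expectation values into a linear regression problem, and solve by least squares. For Pauli measurements the design matrix is the identity, so the algebra is trivial; the lstsq form is kept only to show the structure, and the positivity correction a full implementation would need is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

# Pauli basis; any qubit state is rho = (I + r . sigma) / 2
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
paulis = [sx, sy, sz]

r_true = np.array([0.3, -0.4, 0.7])              # true Bloch vector
rho = (np.eye(2) + sum(r * P for r, P in zip(r_true, paulis))) / 2

# Simulate N shots per Pauli; outcome +/-1 with p(+1) = (1 + <P>) / 2
N = 10_000
y = np.empty(3)
for i, P in enumerate(paulis):
    exp_val = np.real(np.trace(rho @ P))
    plus = rng.binomial(N, (1 + exp_val) / 2)
    y[i] = (2 * plus - N) / N                    # empirical expectation

# Linear regression model y = X r + noise; for Pauli measurements X = I
X = np.eye(3)
r_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rho_hat = (np.eye(2) + sum(r * P for r, P in zip(r_hat, paulis))) / 2
print("estimated Bloch vector:", np.round(r_hat, 3))
```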
Estimating p-n Diode Bulk Parameters, Bandgap Energy and Absolute Zero by a Simple Experiment
ERIC Educational Resources Information Center
Ocaya, R. O.; Dejene, F. B.
2007-01-01
This paper presents a straightforward but interesting experimental method for p-n diode characterization. The method differs substantially from many approaches in diode characterization by offering much tighter control over the temperature and current variables. The method allows the determination of important diode constants such as temperature…
Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A
2013-08-26
We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method of parameter estimation for DFBP based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) achieved with this method has negligible penalty compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
Simulation methods to estimate design power: an overview for applied research.
Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E
2011-06-20
Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
Monostatic Radar Cross Section Estimation of Missile Shaped Object Using Physical Optics Method
NASA Astrophysics Data System (ADS)
Sasi Bhushana Rao, G.; Nambari, Swathi; Kota, Srikanth; Ranga Rao, K. S.
2017-08-01
Stealth technology manages many signatures of a target, among which most radar systems use the radar cross section (RCS) for discriminating targets and classifying them with regard to stealth. During a war, a target's RCS has to be very small to make the target invisible to enemy radar. In this study, the radar cross section of perfectly conducting objects like a cylinder, a truncated cone (frustum) and a circular flat plate is estimated with respect to parameters like size, frequency and aspect angle. Due to the difficulties in exactly predicting the RCS, approximate methods become the alternative. The majority of approximate methods are valid in the optical region, and each has its own strengths and weaknesses. Therefore, the analysis given in this study is purely based on far-field monostatic RCS measurements in the optical region. Computation is done using the Physical Optics (PO) method for determining the RCS of simple models. In this study, the RCS of not only simple models but also missile-shaped and rocket-shaped models, obtained by cascading the simple objects, has been computed with backscatter using Matlab simulation. Rectangular plots are obtained for RCS in dBsm versus aspect angle for simple and missile-shaped objects using Matlab simulation. The treatment of RCS in this study is based on narrowband analysis.
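As a concrete instance of the optical-region computation, the classic physical-optics result for a perfectly conducting rectangular flat plate gives broadside RCS 4πA²/λ² with a sinc-squared falloff in aspect angle. A minimal sketch (principal-plane scan only; plate dimensions and frequency are illustrative, not the study's models):

```python
import numpy as np

def po_rcs_plate(a, b, freq_hz, theta_rad):
    """Physical-optics monostatic RCS of a perfectly conducting a-by-b
    rectangular flat plate, scanned in the principal plane along side a."""
    lam = 3e8 / freq_hz
    k = 2 * np.pi / lam
    u = k * a * np.sin(theta_rad)
    pattern = np.sinc(u / np.pi)              # sin(u)/u, safe at u = 0
    return 4 * np.pi * (a * b) ** 2 / lam ** 2 * (pattern * np.cos(theta_rad)) ** 2

theta = np.radians(np.linspace(-60, 60, 241))
sigma = po_rcs_plate(0.3, 0.2, 10e9, theta)   # 30 cm x 20 cm plate at X-band
print("broadside RCS =", 10 * np.log10(sigma.max()), "dBsm")
```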
NASA Astrophysics Data System (ADS)
Frassinetti, L.; Olofsson, K. E. J.; Fridström, R.; Setiadi, A. C.; Brunsell, P. R.; Volpe, F. A.; Drake, J.
2013-08-01
A new method for estimating the wall diffusion time of non-axisymmetric fields is developed, based on rotating external fields and on the measurement of the wall frequency response, and is tested in EXTRAP T2R. The method allows the experimental estimation of the wall diffusion time for each Fourier harmonic and of the toroidal asymmetries in wall diffusion. The method intrinsically considers the effects of three-dimensional structures and of the shell gaps. Far from the gaps, experimental results are in good agreement with the diffusion time estimated with a simple cylindrical model that assumes a homogeneous wall. The method is also applied with non-standard configurations of the coil array, in order to mimic tokamak-relevant settings with partial wall coverage and active coils of large toroidal extent. The comparison with the full-coverage results shows good agreement if the effects of the relevant sidebands are considered.
Measurement of Antenna Bore-Sight Gain
NASA Technical Reports Server (NTRS)
Fortinberry, Jarrod; Shumpert, Thomas
2016-01-01
The absolute or free-field gain of a simple antenna can be approximated using standard antenna theory formulae or for a more accurate prediction, numerical methods may be employed to solve for antenna parameters including gain. Both of these methods will result in relatively reasonable estimates but in practice antenna gain is usually verified and documented via measurements and calibration. In this paper, a relatively simple and low-cost, yet effective means of determining the bore-sight free-field gain of a VHF/UHF antenna is proposed by using the Brewster angle relationship.
NASA Technical Reports Server (NTRS)
Thanedar, B. D.
1972-01-01
A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method responds to the need for a numerical method for bounded media, to supplement analytical methods of solution that are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created, from which an algorithm was developed for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium which is homogeneous and is enclosed by either rectangular or curved boundaries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response; and the 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large data base and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
Perceived Difficulty of a Motor-Skill Task as a Function of Training.
ERIC Educational Resources Information Center
Bratfisch, Oswald; And Others
A simple device called a "wire labyrinth" was used in an experiment involving learning of a two-hand motor task. The Ss were asked, after completing each of 7 successive trials, to give their estimates of perceived (subjective) difficulty of the task. For this purpose, the psychophysical method of magnitude estimation was used. Time was…
A First Approach to Global Runoff Simulation using Satellite Rainfall Estimation
NASA Technical Reports Server (NTRS)
Hong, Yang; Adler, Robert F.; Hossain, Faisal; Curtis, Scott; Huffman, George J.
2007-01-01
Many hydrological models have been introduced in the hydrological literature to predict runoff but few of these have become common planning or decision-making tools, either because the data requirements are substantial or because the modeling processes are too complicated for operational application. On the other hand, progress in regional or global rainfall-runoff simulation has been constrained by the difficulty of measuring spatiotemporal variability of the primary causative factor, i.e. rainfall fluxes, continuously over space and time. Building on progress in remote sensing technology, researchers have improved the accuracy, coverage, and resolution of rainfall estimates by combining imagery from infrared, passive microwave, and space-borne radar sensors. Motivated by the recent increasing availability of global remote sensing data for estimating precipitation and describing land surface characteristics, this note reports a ballpark assessment of quasi-global runoff computed by incorporating satellite rainfall data and other remote sensing products in a relatively simple rainfall-runoff simulation approach: the Natural Resources Conservation Service (NRCS) runoff Curve Number (CN) method. Using an Antecedent Precipitation Index (API) as a proxy of antecedent moisture conditions, this note estimates time-varying NRCS-CN values determined by the 5-day normalized API. Driven by multi-year (1998-2006) Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis, quasi-global runoff was retrospectively simulated with the NRCS-CN method and compared to Global Runoff Data Centre data at global and catchment scales. Results demonstrated the potential for using this simple method when diagnosing runoff values from satellite rainfall for the globe and for medium to large river basins. This work was done with the simple NRCS-CN method as a first-cut approach to understanding the challenges that lie ahead in advancing the satellite-based inference of global runoff. We expect that the successes and limitations revealed in this study will lay the basis for applying more advanced methods to capture the dynamic variability of the global hydrologic process for monitoring global runoff in real time. The essential ingredient in this work is the use of global satellite-based rainfall estimation.
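For reference, the NRCS-CN water balance used above is Q = (P - 0.2S)² / (P + 0.8S) with retention S = 25400/CN - 254 in millimetres. The sketch below implements that equation plus an assumed API-driven interpolation toward the standard wet-condition (AMC III) curve number; the note's actual 5-day normalized API mapping is not reproduced here.

```python
def scs_runoff_mm(P_mm, CN):
    """NRCS Curve Number runoff (depths in mm): Q = (P-0.2S)^2 / (P+0.8S)."""
    S = 25400.0 / CN - 254.0            # potential maximum retention (mm)
    Ia = 0.2 * S                        # initial abstraction
    return 0.0 if P_mm <= Ia else (P_mm - Ia) ** 2 / (P_mm + 0.8 * S)

def adjust_cn(CN_dry, api_norm):
    """Assumed wetness adjustment: blend toward the standard AMC III CN
    as the normalized antecedent precipitation index rises from 0 to 1."""
    CN_wet = CN_dry / (0.4036 + 0.0059 * CN_dry)   # standard AMC III formula
    return CN_dry + (CN_wet - CN_dry) * min(max(api_norm, 0.0), 1.0)

# Example: 50 mm storm on a CN 75 catchment under fairly wet antecedents
print(scs_runoff_mm(50.0, adjust_cn(75.0, api_norm=0.8)))
```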
Using computational modeling of river flow with remotely sensed data to infer channel bathymetry
Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.
2012-01-01
As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically-averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: first, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally-distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even errors in water-surface elevation and velocity that are very small degrade depth estimates and cannot be recovered. Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.
Caught Ya! A School-Based Practical Activity to Evaluate the Capture-Mark-Release-Recapture Method
ERIC Educational Resources Information Center
Kingsnorth, Crawford; Cruickshank, Chae; Paterson, David; Diston, Stephen
2017-01-01
The capture-mark-release-recapture method provides a simple way to estimate population size. However, when used as part of ecological sampling, this method does not easily allow an opportunity to evaluate the accuracy of the calculation because the actual population size is unknown. Here, we describe a method that can be used to measure the…
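For context, the estimator being evaluated is the Lincoln-Petersen index N = MC/R (M marked, C captured on the second visit, R recaptured), often paired with Chapman's small-sample correction. A minimal sketch with invented classroom numbers:

```python
def lincoln_petersen(marked, captured, recaptured):
    """Classic capture-mark-release-recapture estimate: N = M*C/R."""
    if recaptured == 0:
        raise ValueError("no recaptures: population size is unbounded")
    return marked * captured / recaptured

def chapman(marked, captured, recaptured):
    """Chapman's bias-corrected variant, usable even when R = 0."""
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

# Classroom example: mark 50 'animals', recapture 40, find 8 marked
print(lincoln_petersen(50, 40, 8))   # 250.0
print(chapman(50, 40, 8))            # ~231.3
```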
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Cheng, Wei-Chung; Suresh, Nitin; Hua, Hong
2016-05-01
With improved diagnostic capabilities and complex optical designs, endoscopic technologies are advancing. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature-identification-related diagnosis. Therefore, a quantitative and simple distortion evaluation method is imperative for both the endoscopic industry and medical device regulatory agencies. However, no such method is available yet. While image correction techniques are rather mature, they heavily depend on computational power to process multidimensional image data based on complex mathematical models, i.e., they are difficult to understand. Some commonly used distortion evaluation methods, such as the picture height distortion (DPH) or radial distortion (DRAD), are either too simple to accurately describe the distortion or subject to the error of deriving a reference image. We developed the basic local magnification (ML) method to evaluate endoscope distortion. Based on this method, we also developed ways to calculate DPH and DRAD. The method overcomes the aforementioned limitations, has clear physical meaning in the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can help bring endoscopic technology to market and could potentially be adopted in an international endoscope standard.
Discretization of Continuous Time Discrete Scale Invariant Processes: Estimation and Spectra
NASA Astrophysics Data System (ADS)
Rezakhah, Saeid; Maleki, Yasaman
2016-07-01
By imposing a flexible sampling scheme, we provide a discretization of continuous-time discrete scale invariant (DSI) processes, which is a subsidiary discrete-time DSI process. Then, by introducing a simple random measure, we provide a second continuous-time DSI process which properly approximates the first one. This enables us to establish a bilateral relation between the covariance functions of the subsidiary process and the new continuous-time process. The time-varying spectral representation of such a continuous-time DSI process is characterized, and its spectrum is estimated. Also, a new method for estimating the time-dependent Hurst parameter of such processes is provided, which gives a more accurate estimate. The performance of this estimation method is studied via simulation. Finally, the method is applied to real data of the S&P 500 and Dow Jones indices for some specific periods.
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating the time delay margin for model-reference adaptive control of systems with almost-linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in the form of an input time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound on the time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate, and yet not too conservative, time delay margin estimate.
Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning
2016-12-09
Strong demand for accurate non-cooperative target measurement has arisen recently for assembly and capture tasks. Spherical objects are among the most common targets in these applications; however, the performance of traditional vision-based reconstruction methods is limited in practice when handling poorly textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework for estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D positions of the laser spots on the target surface and refine the results via an optimization scheme. The experimental results show that the proposed calibration method obtains a fine calibration result, comparable to state-of-the-art LRF-based methods, and that our calibrated system can estimate the geometric parameters with high accuracy in real time.
Prentice, Ross L; Zhao, Shanshan
2018-01-01
The Dabrowska (Ann Stat 16:1475-1489, 1988) product integral representation of the multivariate survivor function is extended, leading to a nonparametric survivor function estimator for an arbitrary number of failure time variates that has a simple recursive formula for its calculation. Empirical process methods are used to sketch proofs for this estimator's strong consistency and weak convergence properties. Summary measures of pairwise and higher-order dependencies are also defined and nonparametrically estimated. Simulation evaluation is given for the special case of three failure time variates.
A simple model for strong ground motions and response spectra
Safak, Erdal; Mueller, Charles; Boatwright, John
1988-01-01
A simple model for the description of strong ground motions is introduced. The model shows that response spectra can be estimated by using only four parameters of the ground motion: the RMS acceleration, the effective duration, and two corner frequencies that characterize the effective frequency band of the motion. The model is windowed band-limited white noise, and is developed by studying the properties of two functions, cumulative squared acceleration in the time domain, and cumulative squared amplitude spectrum in the frequency domain. Applying the methods of random vibration theory, the model leads to a simple analytical expression for the response spectra. The accuracy of the model is checked by using the ground motion recordings from the aftershock sequences of two different earthquakes and simulated accelerograms. The results show that the model gives a satisfactory estimate of the response spectra.
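The abstract does not give the closed-form expression, but the random-vibration step it describes usually amounts to multiplying the RMS level by a peak factor that grows slowly with the number of response cycles. A sketch using the standard Davenport-type peak factor, with all input values hypothetical:

```python
import numpy as np

EULER_GAMMA = 0.5772  # Euler-Mascheroni constant

def peak_factor(f_central, duration):
    """Random-vibration-theory peak factor for a stationary Gaussian
    process: sqrt(2 ln N) + gamma / sqrt(2 ln N), with N ~ f * T cycles."""
    n = max(f_central * duration, 2.0)
    root = np.sqrt(2.0 * np.log(n))
    return root + EULER_GAMMA / root

def expected_peak(rms_accel, f_central, duration):
    """Expected peak response: peak factor times the RMS level."""
    return peak_factor(f_central, duration) * rms_accel

# Hypothetical motion: 0.5 m/s^2 RMS, 8 Hz effective frequency, 10 s duration.
print(expected_peak(0.5, 8.0, 10.0))   # ~1.6 m/s^2
```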
Simulating initial attack with two fire containment models
Romain M. Mees
1985-01-01
Given a variable rate of fireline construction and an elliptical fire growth model, two methods for estimating the required number of resources, time to containment, and the resulting fire area were compared. Five examples illustrate some of the computational differences between the simple and the complex methods. The equations for the two methods can be used and...
ERIC Educational Resources Information Center
Hayes, Andrew F.; Preacher, Kristopher J.
2010-01-01
Most treatments of indirect effects and mediation in the statistical methods literature and the corresponding methods used by behavioral scientists have assumed linear relationships between variables in the causal system. Here we describe and extend a method first introduced by Stolzenberg (1980) for estimating indirect effects in models of…
A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation
Wang, Jinfeng; Li, Hong; Fang, Zhichao
2014-01-01
We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square-integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L^2-norm for the scalar unknown u and a priori error estimates in the (L^2)^2-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H^1-norm for the scalar unknown u. Finally, we present some numerical results to illustrate the efficiency of the new method. PMID:24701153
Adaptive noise canceling of electrocardiogram artifacts in single channel electroencephalogram.
Cho, Sung Pil; Song, Mi Hye; Park, Young Cheol; Choi, Ho Seon; Lee, Kyoung Joung
2007-01-01
A new method for estimating and eliminating electrocardiogram (ECG) artifacts from single-channel scalp electroencephalogram (EEG) is proposed. The proposed method consists of emphasizing the QRS complex in the EEG using a least squares acceleration (LSA) filter, generating a pulse synchronized with the R-peak, and estimating and eliminating the ECG artifacts using an adaptive filter. The performance of the proposed method was evaluated using simulated and real EEG recordings. We found that the ECG artifacts were successfully estimated and eliminated, in comparison with conventional multi-channel techniques, namely independent component analysis (ICA) and the ensemble average (EA) method. From this we conclude that the proposed method is useful for detecting and eliminating ECG artifacts from single-channel EEG and is simple to use in ambulatory/portable EEG monitoring systems.
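The abstract does not specify the adaptive algorithm, so the sketch below uses the common LMS update, with the R-peak-synchronized pulse train as the reference input; signal names are hypothetical.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=0.01):
    """Remove an artifact correlated with `reference` from `primary`.

    primary   : contaminated single-channel EEG samples
    reference : artifact-correlated input (e.g. R-peak-synchronized pulses)
    Returns the cleaned signal, i.e. the adaptive filter's error sequence.
    """
    primary = np.asarray(primary, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        e = primary[n] - w @ x              # EEG minus current artifact estimate
        w += 2.0 * mu * e * x               # LMS weight update
        cleaned[n] = e
    return cleaned
```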
A simple method to estimate threshold friction velocity of wind erosion in the field
USDA-ARS?s Scientific Manuscript database
Nearly all wind erosion models require the specification of threshold friction velocity (TFV). Yet determining TFV of wind erosion in field conditions is difficult as it depends on both soil characteristics and distribution of vegetation or other roughness elements. While several reliable methods ha...
Detection of fragments from internal insects in wheat samples using a laboratory entoleter
USDA-ARS?s Scientific Manuscript database
A simple, rapid method was developed for estimating the number of insect fragments in flour caused by hidden, internal-feeding insects in whole grains during storage. The method uses a small mechanical rotary device (entoleter), which accelerates whole wheat kernels to high speeds and projects them...
Spectrophotometric estimation of ambroxol and cetirizine hydrochloride from tablet dosage form.
Gowekar, N M; Pande, V V; Kasture, A V; Tekade, A R; Chandorkar, J G
2007-07-01
Fixed-dose combination tablets containing ambroxol HCl and cetirizine HCl are used clinically as a mucolytic and an antiallergic. Several spectrophotometric and HPLC methods have been reported for the simultaneous estimation of these drugs with other drugs. The drugs, individually and in mixture, obey Beer's law over the concentration ranges 1.2-4.4 microg/mL for cetirizine HCl and 15-52 microg/mL for ambroxol HCl at all five sampling wavelengths (correlation coefficients well above 0.995). The mean recoveries from tablets by the standard addition method were 100.18% (+/-2.4) and 100.66% (+/-2.31). The present work reports simple, accurate and precise spectrophotometric methods for their simultaneous estimation from the tablet dosage form.
Titrimetric and photometric methods for determination of hypochlorite in commercial bleaches.
Jonnalagadda, Sreekanth B; Gengan, Prabhashini
2010-01-01
Two methods, a simple titration method and a photometric method, are developed for the determination of hypochlorite, based on its reaction with hydrogen peroxide and titration of the residual peroxide with acidic permanganate. In the titration method, the residual hydrogen peroxide is titrated with standard permanganate solution to estimate the hypochlorite concentration. The photometric method is devised to measure the concentration of permanganate remaining after the reaction with residual hydrogen peroxide. It employs four calibration-curve ranges to enable accurate determination of hypochlorite. The new photometric method measures hypochlorite in the range 1.90 x 10^-3 to 1.90 x 10^-2 M, with high accuracy and low variance. The concentrations of hypochlorite in diverse commercial bleach samples, and in seawater enriched with hypochlorite, were estimated using the proposed methods and compared with the arsenite method. The statistical analysis validates the superiority of the proposed methods.
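The abstract gives the working ranges but not the back-calculation; under the usual stoichiometry (one mole of H2O2 consumed per mole of OCl-, and 5/2 moles of H2O2 per mole of MnO4- in the permanganate titration), the titrimetric variant reduces to a few lines. All quantities below are hypothetical:

```python
def hypochlorite_molarity(v_sample_mL, n_h2o2_added_mol, v_kmno4_mL, c_kmno4_M):
    """Back-calculate [OCl-] from a residual-peroxide permanganate titration.

    Assumed stoichiometry (not stated in the abstract):
      OCl- + H2O2 -> Cl- + O2 + H2O                      (1:1)
      2 MnO4- + 5 H2O2 + 6 H+ -> 2 Mn2+ + 5 O2 + 8 H2O   (5/2 H2O2 per MnO4-)
    """
    n_mno4 = c_kmno4_M * v_kmno4_mL / 1000.0
    n_h2o2_residual = 2.5 * n_mno4
    n_ocl = n_h2o2_added_mol - n_h2o2_residual
    return n_ocl / (v_sample_mL / 1000.0)

# Hypothetical run: 10 mL bleach aliquot, 2.0 mmol H2O2 added, and 8.0 mL
# of 0.02 M KMnO4 consumed by the residual peroxide.
print(hypochlorite_molarity(10.0, 2.0e-3, 8.0, 0.02))   # ~0.16 M
```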
Vehicle Lateral State Estimation Based on Measured Tyre Forces
Tuononen, Ari J.
2009-01-01
Future active safety systems need more accurate information about the state of vehicles. This article proposes a method to evaluate the lateral state of a vehicle based on measured tyre forces. The tyre forces of two tyres are estimated from optically measured tyre carcass deflections and transmitted wirelessly to the vehicle body. The two remaining tyres are so-called virtual tyre sensors, the forces of which are calculated from the real tyre sensor estimates. The Kalman filter estimator for lateral vehicle state based on measured tyre forces is presented, together with a simple method to define adaptive measurement error covariance depending on the driving condition of the vehicle. The estimated yaw rate and lateral velocity are compared with the validation sensor measurements. PMID:22291535
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
To address the trade-off between time-domain least squares (LS) channel estimation performance and practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is proposed. This approach transforms the MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the proposed method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and approaches optimal performance.
Vadas, P A; Good, L W; Moore, P A; Widman, N
2009-01-01
Nonpoint-source pollution of fresh waters by P is a concern because it contributes to accelerated eutrophication. Given the state of the science concerning agricultural P transport, a simple tool to quantify annual, field-scale P loss is a realistic goal. We developed new methods to predict annual dissolved P loss in runoff from surface-applied manures and fertilizers and validated the methods with data from 21 published field studies. We incorporated these manure and fertilizer P runoff loss methods into an annual, field-scale P loss quantification tool that estimates dissolved and particulate P loss in runoff from soil, manure, fertilizer, and eroded sediment. We validated the P loss tool using independent data from 28 studies that monitored P loss in runoff from a variety of agricultural land uses for at least 1 yr. Results demonstrated (i) that our new methods to estimate P loss from surface manure and fertilizer are an improvement over methods used in existing Indexes, and (ii) that it was possible to reliably quantify annual dissolved, sediment, and total P loss in runoff using relatively simple methods and readily available inputs. Thus, a P loss quantification tool that does not require greater degrees of complexity or input data than existing P Indexes could accurately predict P loss across a variety of management and fertilization practices, soil types, climates, and geographic locations. However, estimates of runoff and erosion are still needed that are accurate to a level appropriate for the intended use of the quantification tool.
Comparison of mode estimation methods and application in molecular clock analysis
NASA Technical Reports Server (NTRS)
Hedges, S. Blair; Shah, Prachi
2003-01-01
BACKGROUND: Distributions of time estimates in molecular clock studies are sometimes skewed or contain outliers. In those cases, the mode is a better estimator of the overall time of divergence than the mean or median. However, different methods are available for estimating the mode. We compared these methods in simulations to determine their strengths and weaknesses and further assessed their performance when applied to real data sets from a molecular clock study. RESULTS: We found that the half-range mode and robust parametric mode methods have a lower bias than other mode methods under a diversity of conditions. However, the half-range mode suffers from a relatively high variance and the robust parametric mode is more susceptible to bias by outliers. We determined that bootstrapping reduces the variance of both mode estimators. Application of the different methods to real data sets yielded results that were concordant with the simulations. CONCLUSION: Because the half-range mode is a simple and fast method, and produced less bias overall in our simulations, we recommend the bootstrapped version of it as a general-purpose mode estimator and suggest a bootstrap method for obtaining the standard error and 95% confidence interval of the mode.
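A minimal sketch of the recommended estimator, the bootstrapped half-range mode, assuming the usual recursive definition (repeatedly keep the densest half-width window); implementation details beyond the abstract are assumptions:

```python
import numpy as np

def half_range_mode(x):
    """Half-range mode: repeatedly shrink to the half-width window that
    contains the most observations, then average the survivors."""
    x = np.sort(np.asarray(x, dtype=float))
    while len(x) > 2:
        width = (x[-1] - x[0]) / 2.0
        if width == 0:
            break
        counts = [np.searchsorted(x, xi + width, side='right') - i
                  for i, xi in enumerate(x)]      # points in each window
        i = int(np.argmax(counts))
        x = x[i:i + counts[i]]
    return float(x.mean())

def bootstrapped_mode(x, n_boot=2000, seed=0):
    """Bootstrap the HRM: returns (mean mode, standard error, 95% CI)."""
    rng = np.random.default_rng(seed)
    modes = np.array([half_range_mode(rng.choice(x, size=len(x)))
                      for _ in range(n_boot)])
    return modes.mean(), modes.std(ddof=1), np.percentile(modes, [2.5, 97.5])
```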
Transient Stability Output Margin Estimation Based on Energy Function Method
NASA Astrophysics Data System (ADS)
Miwa, Natsuki; Tanaka, Kazuyuki
In this paper, a new method for estimating the critical generation margin (CGM) in power systems is proposed from the viewpoint of transient stability diagnosis. The proposed method can directly compute the stability-limit output for a given contingency based on the transient energy function (TEF) method. Since the CGM can be obtained directly from the limit output using estimated P-θ curves and is easy to understand, it is more useful than the conventional critical clearing time (CCT) of the energy function method. The proposed method can also estimate a negative CGM, which means the system is unstable at the present load profile; a negative CGM can be directly utilized as a generator output restriction. The accuracy and fast solution capability of the proposed method are verified by applying it to a simple 3-machine model and the IEEJ EAST 10-machine standard model. Furthermore, a useful application to severity ranking of transient stability for a large number of contingency cases is discussed using the CGM.
Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation.
Pillai, S; Singhvi, I
2008-09-01
Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium from a combined capsule dosage form have been developed. The first method involves the formation and solving of simultaneous equations using 265.2 nm and 290.8 nm as the two wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium 253.6 nm and 275.2 nm. The developed HPLC method is a reverse-phase chromatographic method using a Phenomenex C18 column and acetonitrile:phosphate buffer (35:65 v/v, pH 7.0) as the mobile phase. All developed methods obey Beer's law in the concentration ranges employed. The results of the analysis were validated statistically and by recovery studies.
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
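A minimal sketch of the Monte Carlo step for a non-sampled quantification, in the spirit of the foodborne-illness example; the input distributions below are hypothetical placeholders for judgmental uncertainty about each systematic error source:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical systematic-error sources, each given a judgmental distribution:
case_count   = rng.triangular(800, 1000, 1500, N)  # reported case counts
capture_rate = rng.uniform(0.5, 0.9, N)            # fraction of cases detected
population   = rng.normal(2.5e6, 1.0e5, N)         # denominator uncertainty

incidence = case_count / capture_rate / population * 1e5  # per 100,000

lo, med, hi = np.percentile(incidence, [2.5, 50, 97.5])
print(f"incidence per 100k: {med:.1f} (95% uncertainty interval {lo:.1f}-{hi:.1f})")
```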
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach.
Pandey, S; Chadha, V K; Laxminarayan, R; Arinaminpathy, N
2017-04-01
There is an urgent need for improved estimates of the burden of tuberculosis (TB). Our aim was to develop a new quantitative method based on mathematical modelling and to demonstrate its application to TB in India. We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates of annual infections per smear-positive TB case against previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. The model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8-156.3). The results show differences between urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently in use.
Two methods for parameter estimation using multiple-trait models and beef cattle field data.
Bertrand, J K; Kriese, L A
1990-08-01
Two methods are presented for estimating variances and covariances from beef cattle field data using multiple-trait sire models. Both methods require that the first trait have no missing records and that the contemporary groups for the second trait be subsets of the contemporary groups for the first trait; however, the second trait may have missing records. One method uses pseudo expectations involving quadratics composed of the solutions and the right-hand sides of the mixed model equations. The other method is an extension of Henderson's Simple Method to the multiple trait case. Neither of these methods requires any inversions of large matrices in the computation of the parameters; therefore, both methods can handle very large sets of data. Four simulated data sets were generated to evaluate the methods. In general, both methods estimated genetic correlations and heritabilities that were close to the Restricted Maximum Likelihood estimates and the true data set values, even when selection within contemporary groups was practiced. The estimates of residual correlations by both methods, however, were biased by selection. These two methods can be useful in estimating variances and covariances from multiple-trait models in large populations that have undergone a minimal amount of selection within contemporary groups.
Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.
Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen
2015-05-01
Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Hyekyun
Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-squares estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior projections over more than 60° appear to be necessary for reliable estimations. The mean 3D RMSE during beam delivery after the simple linear model had been established with prior 90° projection data was 0.42 mm for VMAT and 0.45 mm for IMRT. Conclusions: The proposed method does not require any internal/external correlation or statistical modeling to estimate the target trajectory and can be used both for retrospective image-guided radiotherapy with CBCT projection images and for real-time target position monitoring for respiratory gating or tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Vinita J.; Schaefer, Charles; Kahnhauser, Henry
The National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory was shut down in September 2014. Lead bricks used as radiological shadow shielding within the accelerator were exposed to stray radiation fields during normal operations. The FLUKA code, a fully integrated Monte Carlo simulation package for the interaction and transport of particles and nuclei in matter, was used to estimate induced radioactivity in this shielding and stainless steel beam pipe from known beam losses. The FLUKA output was processed using MICROSHIELD® to estimate on-contact exposure rates with individually exposed bricks to help design and optimize the radiological survey process. This entire process can be modeled using FLUKA, but use of MICROSHIELD® as a secondary method was chosen because of the project’s resource constraints. Due to the compressed schedule and lack of shielding configuration data, simple FLUKA models were developed in this paper. FLUKA activity estimates for stainless steel were compared with sampling data to validate results, which show that simple FLUKA models and irradiation geometries can be used to predict radioactivity inventories accurately in exposed materials. During decommissioning 0.1% of the lead bricks were found to have measurable levels of induced radioactivity. Finally, post-processing with MICROSHIELD® provides an acceptable secondary method of estimating residual exposure rates.
Registration using natural features for augmented reality systems.
Yuan, M L; Ong, S K; Nee, A Y C
2006-01-01
Registration is one of the most difficult problems in augmented reality (AR) systems. In this paper, a simple registration method using natural features based on the projective reconstruction technique is proposed. This method consists of two steps: embedding and rendering. Embedding involves specifying four points to build the world coordinate system on which a virtual object will be superimposed. In rendering, the Kanade-Lucas-Tomasi (KLT) feature tracker is used to track the natural feature correspondences in the live video. The tracked natural features are used to estimate the corresponding projective matrix in the image sequence. Next, the projective reconstruction technique is used to transfer the four specified points to compute the registration matrix for augmentation. This paper also proposes a robust method for estimating the projective matrix, where the tracked natural features are normalized (translation and scaling) and used as the input data. The estimated projective matrix is used as an initial estimate for a nonlinear optimization that minimizes the actual residual errors based on the Levenberg-Marquardt (LM) minimization method, making the results more robust and stable. The proposed registration method has three major advantages: 1) It is simple, as no predefined fiducials or markers are used for registration, for either indoor or outdoor AR applications. 2) It is robust, because it remains effective as long as at least six natural features are tracked during the entire augmentation, and the existence of the corresponding projective matrices in the live video is guaranteed. Meanwhile, the robust method for estimating the projective matrix can obtain stable results even when there are some outliers during the tracking process. 3) Virtual objects can still be superimposed on the specified areas, even if some parts of the areas are occluded during the entire process. Several indoor and outdoor experiments have been conducted to validate the performance of the proposed method.
An analysis of estimation of pulmonary blood flow by the single-breath method
NASA Technical Reports Server (NTRS)
Srinivasan, R.
1986-01-01
The single-breath method represents a simple noninvasive technique for the assessment of capillary blood flow across the lung. However, this method has not gained widespread acceptance, because its accuracy is still being questioned. A rigorous procedure is described for estimating pulmonary blood flow (PBF) using data obtained with the aid of the single-breath method. Attention is given to the minimization of data-processing errors in the presence of measurement errors and to questions regarding a correction for possible loss of CO2 in the lung tissue. It is pointed out that the estimations are based on the exact solution of the underlying differential equations which describe the dynamics of gas exchange in the lung. The reported study demonstrates the feasibility of obtaining highly reliable estimates of PBF from expiratory data in the presence of random measurement errors.
Quezada, Amado D; García-Guerra, Armando; Escobar, Leticia
2016-06-01
To assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and validated in a large population-based survey sample of Mexican preschool children. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.
Wall shear stress estimates in coronary artery constrictions
NASA Technical Reports Server (NTRS)
Back, L. H.; Crawford, D. W.
1992-01-01
Wall shear stress estimates from laminar boundary layer theory were found to agree fairly well with the magnitude of shear stress levels along coronary artery constrictions obtained from solutions of the Navier Stokes equations for both steady and pulsatile flow. The relatively simple method can be used for in vivo estimates of wall shear stress in constrictions by using a vessel shape function determined from a coronary angiogram, along with a knowledge of the flow rate.
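A sketch of the kind of estimate involved: the laminar flat-plate (Blasius) result gives wall shear stress from the edge velocity and the distance along the constriction. The haemodynamic numbers below are hypothetical, and the actual study couples this with an angiogram-derived vessel shape function:

```python
import math

def wall_shear_blasius(rho, mu, u_edge, x):
    """Laminar boundary-layer wall shear stress (Blasius flat plate):
    tau_w = 0.332 * rho * U^2 / sqrt(Re_x), with Re_x = rho * U * x / mu."""
    re_x = rho * u_edge * x / mu
    return 0.332 * rho * u_edge ** 2 / math.sqrt(re_x)

# Hypothetical values near a constriction throat: blood density 1050 kg/m^3,
# viscosity 3.5e-3 Pa.s, edge velocity 0.5 m/s, 5 mm from the entrance.
print(wall_shear_blasius(1050.0, 3.5e-3, 0.5, 5e-3))   # ~3.2 Pa
```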
On an image reconstruction method for ECT
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image obtained by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method based on deconvolution. In this method, the point spread function (PSF) and line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10 mm) with an artificial machined hole and a notch flaw were acquired by differential-coil-type sensors (produced by ZETEC Inc.). These data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the holes interfered, and the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been shown by the many results in which much finer images than the originals were reconstructed.
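The paper's exact deconvolution scheme is not given in the abstract; a common regularized choice for this kind of PSF-based restoration is FFT-domain Wiener deconvolution, sketched below with hypothetical names:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-2):
    """Restore a flaw image from blurred ECT data given a measured PSF.

    blurred : 2D array of measured ECT data
    psf     : point spread function measured from a machined hole, zero-padded
              to the same shape as `blurred` and centered at index [0, 0]
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + K) damps frequencies where the
    # response is weak, instead of dividing by near-zero values.
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * G))
```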
Estimating Starch Content in Roots of Deciduous Trees--A Visual Technique
Philip M. Wargo; Philip M. Wargo
1975-01-01
A visual technique for determining starch content in roots of forest trees, based onz iodine-staining of starch granules, was compared with a chemical method. Although the chemical method was more precise, roots could be sorted with the visual method into groups that are probably biologically important. The visual technique is simple and can be adapted for use in the...
Cauchemez, Simon; Epperson, Scott; Biggerstaff, Matthew; Swerdlow, David; Finelli, Lyn; Ferguson, Neil M
2013-01-01
Prior to emergence in human populations, zoonoses such as SARS cause occasional infections in humans exposed to reservoir species. The risk of widespread epidemics in humans can be assessed by monitoring the reproduction number R (the average number of persons infected by a human case). However, until now, estimating R has required detailed outbreak investigations of human clusters, for which resources and expertise are not always available. Additionally, existing methods do not correct for important selection and under-ascertainment biases. Here, we present simple estimation methods that overcome many of these limitations. Our approach is based on a parsimonious mathematical model of disease transmission and only requires data collected through routine surveillance and standard case investigations. We apply it to assess the transmissibility of swine-origin influenza A H3N2v-M virus in the US and Nipah virus in Malaysia and Bangladesh, and also present a non-zoonotic example (cholera in the Dominican Republic). Estimation is based on two simple summary statistics: the proportion infected by the natural reservoir among detected cases (G) and among the subset of the first detected cases in each cluster (F). If detection of a case does not affect detection of other cases from the same cluster, we find that R can be estimated by 1-G; otherwise R can be estimated by 1-F when the case detection rate is low. In more general cases, bounds on R can still be derived. We have developed a simple approach with limited data requirements that enables robust assessment of the risks posed by emerging zoonoses. We illustrate this by deriving transmissibility estimates for the H3N2v-M virus, an important step in evaluating the possible pandemic threat posed by this virus.
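The two headline estimators reduce to one-liners once routine surveillance has classified cases as reservoir-infected or not; a sketch with hypothetical counts:

```python
def r_from_all_cases(n_reservoir, n_total):
    """R = 1 - G, with G the reservoir-infected proportion among all detected
    cases (valid when detecting one case does not raise the detection
    probability of the rest of its cluster)."""
    return 1.0 - n_reservoir / n_total

def r_from_index_cases(n_reservoir_index, n_clusters):
    """R = 1 - F, with F the reservoir-infected proportion among the first
    detected case of each cluster (low detection-rate regime)."""
    return 1.0 - n_reservoir_index / n_clusters

# Hypothetical surveillance: 40 of 50 detected cases reservoir-infected.
print(r_from_all_cases(40, 50))   # R ~ 0.2, well below the epidemic threshold
```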
Salonen, K; Leisola, M; Eerikäinen, T
2009-01-01
Determination of metabolites from an anaerobic digester by acid-base titration is considered a superior method for many reasons. This paper describes a practical, at-line compatible multipoint titration method. The titration procedure was improved in speed and data quality. A simple and novel control algorithm for estimating a variable titrant dose was derived for this purpose. This non-linear, PI-controller-like algorithm does not require any preliminary information about the sample, and its performance is superior to that of traditional linear PI-controllers. In addition, a simplification representing polyprotic acids as a sum of multiple monoprotic acids is introduced, along with a mathematical error examination. A method for including the ionic strength effect with stepwise iteration is shown. The titration model is presented in matrix notation, enabling simple computation of all concentration estimates. All methods and algorithms are illustrated in the experimental part. A linear correlation better than 0.999 was obtained for both acetate and phosphate, used as model compounds, with slopes of 0.98 and 1.00 and average standard deviations of 0.6% and 0.8%, respectively. Furthermore, the insensitivity of the presented method to overlapping buffer capacity curves was shown.
Instrumental variables vs. grouping approach for reducing bias due to measurement error.
Batistatou, Evridiki; McNamee, Roseanne
2008-01-01
Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the 'group mean OLS method', in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the 'group mean OLS' estimator is equal to an IV estimator with the group mean used as IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator which is a simple function of group size, reliability and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the 'group mean' and 'CS' strategies. All methods are evaluated in terms of bias, precision and root mean square error via simulations and a dataset from occupational epidemiology. The 'group mean ranking method' does not offer much improvement over the 'group mean method'. Compared with the 'CS' method, the 'EVROS' method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology with or without replicate measurements. Our finding may also have implications for the use of aggregate variables in epidemiology to control for unmeasured confounding.
Berlin, R H; Janzon, B; Rybeck, B; Schantz, B; Seeman, T
1982-01-01
A standard methodology for estimating the energy transfer characteristics of small calibre bullets and other fast missiles is proposed, consisting of firings against targets made of soft soap. The target is evaluated by measuring the size of the permanent cavity remaining in it after the shot. The method is very simple to use and does not require access to any sophisticated measuring equipment. It can be applied under all circumstances, even under field conditions. Adequate methods of calibration to ensure good accuracy are suggested. The precision and limitations of the method are discussed.
Forecasting peaks of seasonal influenza epidemics.
Nsoesie, Elaine; Marathe, Madhav; Brownstein, John
2013-06-21
We present a framework for near real-time forecast of influenza epidemics using a simulation optimization approach. The method combines an individual-based model and a simple root finding optimization method for parameter estimation and forecasting. In this study, retrospective forecasts were generated for seasonal influenza epidemics using web-based estimates of influenza activity from Google Flu Trends for 2004-2005, 2007-2008 and 2012-2013 flu seasons. In some cases, the peak could be forecasted 5-6 weeks ahead. This study adds to existing resources for influenza forecasting and the proposed method can be used in conjunction with other approaches in an ensemble framework.
Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation
NASA Astrophysics Data System (ADS)
Bonin, Jennifer; Chambers, Don
2013-07-01
The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to more specifically localize mass changes into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple 'truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm^2 of water) gets the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr^-1 per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr^-1) and southeast (-24.2 and -27.9 cm yr^-1), with small mass gains (+1.4 to +7.7 cm yr^-1) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
NASA Astrophysics Data System (ADS)
Yang, A.; Yongtao, F.
2016-12-01
The effective elastic thickness (Te) is an important parameter that characterizes the long-term strength of the lithosphere, and it has great significance for understanding the mechanical properties and evolution of the lithosphere. In contrast with the many controversies regarding the elastic thickness of continental lithosphere, the Te of oceanic lithosphere is thought to depend simply on the age of the plate. However, recent studies show that there is no simple relationship between Te and age at the time of loading for both seamounts and subduction zones. Subsurface loading is very important and has a large influence on the estimate of Te for continental lithosphere, and many oceanic features such as subduction zones also carry considerable subsurface loads. We introduce a method to estimate the effective elastic thickness of oceanic lithosphere using a model that includes surface and subsurface loads, based on free-air gravity anomaly and bathymetric data together with a moving window admittance technique (MWAT). We use the multitaper spectral estimation method to calculate the power spectral density. Tests with synthetic subduction-zone-like bathymetry and gravity data show that Te can be recovered with an accuracy similar to that achieved in the continents, and that there is also a trade-off between spatial resolution and variance for different window sizes. We estimate Te of many subduction zones (Peru-Chile trench, Middle America trench, Caribbean trench, Kuril-Japan trench, Mariana trench, Tonga trench, Java trench, Ryukyu-Philippine trench) with an age range of 0-160 Myr to reassess the relationship between elastic thickness and the age of the lithosphere at the time of loading. The results do not show a simple relationship between Te and age.
High-speed autofocusing of a cell using diffraction pattern
NASA Astrophysics Data System (ADS)
Oku, Hiromasa; Ishikawa, Masatoshi; Theodorus; Hashimoto, Koichi
2006-05-01
This paper proposes a new autofocusing method for observing cells under a transmission illumination. The focusing method uses a quick and simple focus estimation technique termed “depth from diffraction,” which is based on a diffraction pattern in a defocused image of a biological specimen. Since this method can estimate the focal position of the specimen from only a single defocused image, it can easily realize high-speed autofocusing. To demonstrate the method, it was applied to continuous focus tracking of a swimming paramecium, in combination with two-dimensional position tracking. Three-dimensional tracking of the paramecium for 70 s was successfully demonstrated.
A method for direct measurement of the first-order mass moments of human body segments.
Fujii, Yusaku; Shimada, Kazuhito; Maru, Koichi; Ozawa, Junichi; Lu, Rong-Sheng
2010-01-01
We propose a simple and direct method for measuring the first-order mass moment of a human body segment. With the proposed method, the first-order mass moment of the body segment can be directly measured by using only one precision scale and one digital camera. In the dummy mass experiment, the relative standard uncertainty of a single set of measurements of the first-order mass moment is estimated to be 1.7%. The measured value will be useful as a reference for evaluating the uncertainty of the body segment inertial parameters (BSPs) estimated using an indirect method.
Bayesian Analysis of Longitudinal Data Using Growth Curve Models
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.
2007-01-01
Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…
NASA Technical Reports Server (NTRS)
Staubert, R.
1985-01-01
Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.
Moderation analysis with missing data in the predictors.
Zhang, Qian; Wang, Lijuan
2017-12-01
The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data can pose a challenge, mainly because the interaction term is a product of two or more variables and is thus a nonlinear function of the involved variables. In this study, we consider a simple MMR model in which the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is to find ways of estimating and testing the moderation effect when there are missing data in X. We mainly focus on cases where X is missing completely at random (MCAR) or missing at random (MAR). Three methods are compared: (a) normal-distribution-based maximum likelihood estimation (NML); (b) normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI can lead to biased estimates of moderation effects under the MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor, missingness depending on the moderator and/or auxiliary variables, and correctly specified distributions for the focal predictor. In addition, more robust BE methods are needed to address mis-specification of the focal predictor's distribution. An empirical example was used to illustrate the applications of the methods with a simple sensitivity analysis.
Ultimate Longitudinal Strength of Composite Ship Hulls
NASA Astrophysics Data System (ADS)
Zhang, Xiangming; Huang, Lingkai; Zhu, Libao; Tang, Yuhang; Wang, Anwen
2017-01-01
A simple analytical model to estimate the longitudinal strength of ship hulls in composite materials under buckling, material failure and ultimate collapse is presented in this paper. Ship hulls are regarded as assemblies of stiffened panels, which are idealized as groups of plate-stiffener combinations. The ultimate strain of the plate-stiffener combination under buckling or material failure is predicted with composite beam-column theory. The effects of initial imperfection of the ship hull and eccentricity of load are included. The corresponding longitudinal strengths of the ship hull are derived in a straightforward manner. A longitudinally framed ship hull made of symmetrically stacked unidirectional plies under sagging is analyzed. The results indicate that the present analytical results agree well with the FEM method. The initial deflection of the ship hull and the eccentricity of load can dramatically reduce the bending capacity of the ship hull. The proposed formulations provide a simple but useful tool for longitudinal strength estimation in practical design.
A simple formula for estimating Stark widths of neutral lines. [of stellar atmospheres
NASA Technical Reports Server (NTRS)
Freudenstein, S. A.; Cooper, J.
1978-01-01
A simple formula for the prediction of Stark widths of neutral lines similar to the semiempirical method of Griem (1968) for ion lines is presented. This formula is a simplification of the quantum-mechanical classical path impact theory and can be used for complicated atoms for which detailed calculations are not readily available, provided that the effective position of the closest interacting level is known. The expression does not require the use of a computer. The formula has been applied to a limited number of neutral lines of interest, and the width obtained is compared with the much more complete calculations of Bennett and Griem (1971). The agreement generally is well within 50% of the published value for the lines investigated. Comparisons with other formulas are also made. In addition, a simple estimate for the ion-broadening parameter is given.
An extension of the Saltykov method to quantify 3D grain size distributions in mylonites
NASA Astrophysics Data System (ADS)
Lopez-Sanchez, Marco A.; Llana-Fúnez, Sergio
2016-12-01
The estimation of 3D grain size distributions (GSDs) in mylonites is key to understanding the rheological properties of crystalline aggregates and to constraining dynamic recrystallization models. This paper investigates whether a common stereological method, the Saltykov method, is appropriate for the study of GSDs in mylonites. In addition, we present a new stereological method, named the two-step method, which estimates a lognormal probability density function describing the 3D GSD. Both methods are tested for reproducibility and accuracy using natural and synthetic data sets. The main conclusion is that both methods are accurate and simple enough to be systematically used in recrystallized aggregates with near-equant grains. The Saltykov method is particularly suitable for estimating the volume percentage of particular grain-size fractions with an absolute uncertainty of ±5 in the estimates. The two-step method is suitable for quantifying the shape of the actual 3D GSD in recrystallized rocks using a single value, the multiplicative standard deviation (MSD) parameter, and providing a precision in the estimate typically better than 5%. The novel method provides a MSD value in recrystallized quartz that differs from previous estimates based on apparent 2D GSDs, highlighting the inconvenience of using apparent GSDs for such tasks.
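A sketch of the second step only: once the Saltykov unfolding has produced a sample of 3D diameters, a lognormal is fitted and the MSD read off as e raised to the fitted sigma. The scipy-based implementation below is an illustration under that assumption, not the authors' code:

```python
import numpy as np
from scipy import stats

def lognormal_msd(unfolded_diameters):
    """Fit a lognormal to Saltykov-unfolded 3D grain diameters and return
    (median, MSD), where MSD = exp(sigma) is the multiplicative standard
    deviation characterizing the spread of the 3D GSD."""
    sigma, _, scale = stats.lognorm.fit(unfolded_diameters, floc=0)
    return scale, float(np.exp(sigma))   # median = scale = exp(mu)

# Hypothetical check: synthetic grains with median 30 um and MSD 1.5.
rng = np.random.default_rng(1)
d = rng.lognormal(mean=np.log(30.0), sigma=np.log(1.5), size=5000)
print(lognormal_msd(d))   # ~ (30.0, 1.5)
```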
The simple procedure for the fluxgate magnetometers calibration
NASA Astrophysics Data System (ADS)
Marusenkov, Andriy
2014-05-01
Fluxgate magnetometers are widely used in geophysical investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories as well as electromagnetic sounding of the Earth's crust conductivity. For these tasks the magnetometers have to be calibrated with an appropriate level of accuracy. As a particular case, ways to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers are considered in this work. The goal of the present study was to choose a simple and reliable calibration method for estimating the scale factors and angular errors of three-axis magnetometers in the field. There are a large number of scalar calibration methods that use free rotation of the sensor in the calibration field, followed by complicated data processing procedures for the numerical solution of a high-order equation set. The chosen approach also exploits the Earth's magnetic field as the calibrating signal but, in contrast to other methods, the sensor is oriented in particular positions with respect to the total field vector instead of being freely rotated. This allows very simple and straightforward linear computation formulas to be used and, as a result, more reliable estimates of the calibrated parameters to be achieved. The scale factors are estimated by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The non-orthogonality angles between each pair of components are estimated after sequentially aligning the components at angles of +/-45 and +/-135 degrees of arc with respect to the total field vector. Owing to this four-position approach, the estimates of the non-orthogonality angles are invariant to the zero offsets and to nonlinearity of the components' transfer functions. Experimental verification of the proposed method by means of a coil calibration system reveals that the achieved accuracy (<0.04% for scale factors and 0.03 degrees of arc for angle errors) is sufficient for many applications, and in particular satisfies the INTERMAGNET requirements for 1-second instruments.
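The four-position arithmetic is compact enough to sketch. The scale-factor pair is exactly as described (offsets cancel in the difference); the angle combination below is one self-consistent small-angle version under the reading model r(theta) = F*cos(theta - delta) + b, since the abstract does not give the paper's exact formulas:

```python
import numpy as np

def scale_factor(r_parallel, r_antiparallel, f_total):
    """Axis scale factor from parallel/anti-parallel readings:
    r = +/- k*F + b, so the zero offset b cancels in the difference."""
    return (r_parallel - r_antiparallel) / (2.0 * f_total)

def axis_tilt(r45, r135, r225, r315, f_total):
    """In-plane misalignment delta (rad) of one axis from scale-calibrated
    readings at 45/135/225/315 degrees to the field vector F. Under
    r(theta) = F*cos(theta - delta) + b, the offsets cancel in this
    four-position combination."""
    s = (r45 + r135 - r225 - r315) / (2.0 * np.sqrt(2.0) * f_total)
    return float(np.arcsin(np.clip(s, -1.0, 1.0)))
```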
Helicopter rotor and engine sizing for preliminary performance estimation
NASA Technical Reports Server (NTRS)
Talbot, P. D.; Bowles, J. V.; Lee, H. C.
1986-01-01
Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.
On estimation of secret message length in LSB steganography in spatial domain
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2004-06-01
In this paper, we present a new method for estimating the length of secret messages embedded using Least Significant Bit (LSB) embedding at random pixel positions. We introduce the concept of a weighted stego image and then formulate the problem of determining the unknown message length as a simple optimization problem. The methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant estimator accuracy analysis using statistical image models.
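A compact sketch of the weighted stego-image (WS) idea as it is commonly summarized in the follow-up literature; the four-neighbour-average cover predictor and the variance-based weights are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def ws_estimate(stego):
    """Weighted stego-image (WS) style payload estimate.
    s_bar is the image with every LSB flipped; s_hat is a cover prediction
    from the 4-neighbour average; weights downweight locally noisy pixels."""
    s = stego.astype(np.float64)
    s_bar = np.bitwise_xor(stego, 1).astype(np.float64)
    s_hat = (s[:-2, 1:-1] + s[2:, 1:-1] + s[1:-1, :-2] + s[1:-1, 2:]) / 4.0
    s_c, s_bar_c = s[1:-1, 1:-1], s_bar[1:-1, 1:-1]
    local_var = ((s[:-2, 1:-1] - s_hat) ** 2 + (s[2:, 1:-1] - s_hat) ** 2 +
                 (s[1:-1, :-2] - s_hat) ** 2 + (s[1:-1, 2:] - s_hat) ** 2) / 4.0
    w = 1.0 / (1.0 + local_var)
    w /= w.sum()
    return 2.0 * np.sum(w * (s_c - s_hat) * (s_c - s_bar_c))

# Smooth synthetic cover so the neighbour predictor has something to work with
I, J = np.indices((256, 256))
cover = ((I + J) // 2).astype(np.uint8)
rng = np.random.default_rng(1)
# Payload p = 0.3: half of the used pixels actually get their LSB flipped
flip = rng.random(cover.shape) < 0.3 / 2
stego = np.where(flip, np.bitwise_xor(cover, 1), cover)
print(ws_estimate(stego))  # ~ 0.3 on smooth covers; noisier on real images
```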
A simple tool for estimating city-wide annual electrical energy savings from cooler surfaces
Pomerantz, Melvin; Rosado, Pablo J.; Levinson, Ronnen
2015-07-26
We present a simple method to estimate the maximum possible energy saving that might be achieved by increasing the albedo of surfaces in a large city. We restrict this to the "indirect effect", the cooling of outside air that lessens the demand for air conditioning (AC). Given the power demand of the electric utilities and data about the city, we can use a single linear equation to estimate the maximum saving. For example, the result for an albedo change of 0.2 of pavements in a typical warm city in California, such as Sacramento, is that the saving is less than about 2 kWh per m² per year. This may help decision makers choose which heat island mitigation techniques are economical from an energy-saving perspective.
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
Huang, Huafeng; Colabello, Diane M.; Sklute, Elizabeth C.; ...
2017-04-23
The absolute absorption coefficient, α(E), is a critical design parameter for devices using semiconductors for light harvesting associated with renewable energy production, both for classic technologies such as photovoltaics and for emerging technologies such as direct solar fuel production. While α(E) is well known for many classic simple semiconductors used in photovoltaic applications, the absolute values of α(E) are typically unknown for the complex semiconductors being explored for solar fuel production due to the absence of the single crystals or crystalline epitaxial films needed for conventional methods of determining α(E). In this work, a simple self-referenced method is demonstrated for estimating both the refractive indices, n(E), and absolute absorption coefficients, α(E), of loose powder samples using diffuse reflectance data. In this method, the sample refractive index is deduced by refining n to maximize the agreement between the relative absorption spectrum calculated from bidirectional reflectance data (through a Hapke transform, which depends on n) and from integrating-sphere diffuse reflectance data (through a Kubelka-Munk transform, which does not depend on n). This new method can be used to quickly screen the suitability of emerging semiconductor systems for light-harvesting applications. The effectiveness of the approach is tested using the simple classic semiconductors Ge and Fe2O3 as well as the complex semiconductors La2MoO5 and La4Mo2O11. The method is shown to work well for powders with a narrow size distribution (exemplified by Fe2O3) and to be ineffective for semiconductors with a broad size distribution (exemplified by Ge). As such, it provides a means for rapidly estimating the absolute optical properties of complex solids which are only available as loose powders.
A Very Simple Method to Calculate the (Positive) Largest Lyapunov Exponent Using Interval Extensions
NASA Astrophysics Data System (ADS)
Mendes, Eduardo M. A. M.; Nepomuceno, Erivelton G.
2016-12-01
In this letter, a very simple method to calculate the positive largest Lyapunov exponent (LLE), based on the concept of interval extensions and using the original equations of motion, is presented. The exponent is estimated from the slope of the line derived from the lower-bound error when two interval extensions of the original system are considered. It is shown that the algorithm is robust, fast and easy to implement, and can be considered an alternative to other algorithms available in the literature. The method has been successfully tested on five well-known systems: the logistic and Hénon maps, the Lorenz and Rössler equations, and the Mackey-Glass system.
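The procedure is short enough to show in full for the logistic map: iterate two algebraically equivalent (but numerically different) extensions from the same initial condition and fit the slope of the log of their divergence. A sketch, with the window thresholds chosen by hand:

```python
import numpy as np

# Two algebraically equivalent "interval extensions" of the logistic map;
# in floating point they produce slightly different orbits.
f1 = lambda x, r: r * x * (1.0 - x)
f2 = lambda x, r: r * x - r * x * x

r, n = 4.0, 60
xa = xb = 0.2
err = []
for _ in range(n):
    xa, xb = f1(xa, r), f2(xb, r)
    err.append(abs(xa - xb))

err = np.array(err)
valid = (err > 0) & (err < 1e-3)      # pre-saturation growth region
k = np.arange(n)[valid]
slope = np.polyfit(k, np.log(err[valid]), 1)[0]
print(slope)  # ~ ln 2 = 0.693, the LLE of the logistic map at r = 4
```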
A simple method for assessing occupational exposure via the one-way random effects model.
Krishnamoorthy, K; Mathew, Thomas; Peng, Jie
2016-11-01
A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
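To illustrate the MOVER recipe, here is a sketch for the simpler lognormal case with no random worker effect, where the log-scale mean exposure is θ = μ + σ²/2; the paper's one-way random-effects version adds variance components but follows the same combination rule:

```python
import numpy as np
from scipy import stats

def mover_ci_lognormal_mean(x, alpha=0.05):
    """MOVER CI for theta = mu + sigma^2/2, i.e. the log of the lognormal
    mean: combine a t-interval for mu with a chi-square interval for
    sigma^2/2 using the method of variance estimates recovery."""
    y = np.log(x)
    n = len(y)
    mu, s2 = y.mean(), y.var(ddof=1)
    tq = stats.t.ppf(1 - alpha / 2, n - 1)
    l1, u1 = mu - tq * np.sqrt(s2 / n), mu + tq * np.sqrt(s2 / n)
    chi_lo = stats.chi2.ppf(1 - alpha / 2, n - 1)
    chi_hi = stats.chi2.ppf(alpha / 2, n - 1)
    l2, u2 = (n - 1) * s2 / (2 * chi_lo), (n - 1) * s2 / (2 * chi_hi)
    est1, est2 = mu, s2 / 2
    theta = est1 + est2
    L = theta - np.sqrt((est1 - l1) ** 2 + (est2 - l2) ** 2)
    U = theta + np.sqrt((u1 - est1) ** 2 + (u2 - est2) ** 2)
    return np.exp(L), np.exp(U)  # back-transformed CI for the mean exposure

rng = np.random.default_rng(2)
exposures = rng.lognormal(mean=1.0, sigma=0.8, size=15)
print(mover_ci_lognormal_mean(exposures))
```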
Coherent Anomaly Method Calculation on the Cluster Variation Method. II.
NASA Astrophysics Data System (ADS)
Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya
The critical exponents of the bond percolation model are calculated on the D(=2,3,…)-dimensional simple cubic lattice on the basis of Suzuki's coherent anomaly method (CAM), making use of a series of the pair, square-cactus and square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of the critical exponents α, β, γ and ν in comparison with those estimated by other methods. It is also shown that the results of the pair and square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and square-cactus lattices, respectively, in the presence of a ghost field, without recourse to the s→1 limit of the s-state Potts model.
Design and analysis of simple choice surveys for natural resource management
Fieberg, John; Cornicelli, Louis; Fulton, David C.; Grund, Marrett D.
2010-01-01
We used a simple yet powerful method for judging public support for management actions from randomized surveys. We asked respondents to rank choices (representing management regulations under consideration) according to their preference, and we then used discrete choice models to estimate probability of choosing among options (conditional on the set of options presented to respondents). Because choices may share similar unmodeled characteristics, the multinomial logit model, commonly applied to discrete choice data, may not be appropriate. We introduced the nested logit model, which offers a simple approach for incorporating correlation among choices. This forced choice survey approach provides a useful method of gathering public input; it is relatively easy to apply in practice, and the data are likely to be more informative than asking constituents to rate attractiveness of each option separately.
Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.
King, Leandra; Wakeley, John
2016-09-01
We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and it makes no assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.
Patient-specific lean body mass can be estimated from limited-coverage computed tomography images.
Devriese, Joke; Beels, Laurence; Maes, Alex; van de Wiele, Christophe; Pottel, Hans
2018-06-01
In PET/CT, quantitative evaluation of tumour metabolic activity is possible through standardized uptake values, usually normalized for body weight (BW) or lean body mass (LBM). Patient-specific LBM can be estimated from whole-body (WB) CT images. As most clinical indications only warrant PET/CT examinations covering head to midthigh, the aim of this study was to develop a simple and reliable method to estimate LBM from limited-coverage (LC) CT images and test its validity. Head-to-toe PET/CT examinations were retrospectively retrieved and semiautomatically segmented into tissue types based on thresholding of CT Hounsfield units. LC was obtained by omitting image slices. Image segmentation was validated on the WB CT examinations by comparing CT-estimated BW with actual BW, and LBM estimated from LC images were compared with LBM estimated from WB images. A direct method and an indirect method were developed and validated on an independent data set. Comparing LBM estimated from LC examinations with estimates from WB examinations (LBMWB) showed a significant but limited bias of 1.2 kg (direct method) and nonsignificant bias of 0.05 kg (indirect method). This study demonstrates that LBM can be estimated from LC CT images with no significant difference from LBMWB.
Efficient multidimensional regularization for Volterra series estimation
NASA Astrophysics Data System (ADS)
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time-domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods developed for impulse response estimates of linear time-invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on novel ideas of transient removal for linear time-varying (LTV) systems. Combining the proposed methodologies, nonparametric Volterra models of the cascaded water tanks benchmark are presented. The results for different scenarios, varying from a simple finite impulse response (FIR) model to a third-degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with that of white-box (physical) models.
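The linear special case conveys the idea: kernel-based regularization of an FIR estimate with the tuned-correlated (TC) kernel. A sketch with hand-picked hyperparameters (the paper extends this to multidimensional Volterra kernels and tunes hyperparameters from data):

```python
import numpy as np

rng = np.random.default_rng(3)
# True exponentially decaying impulse response (FIR of length m)
m, N = 50, 400
g_true = 0.8 ** np.arange(m) * np.sin(0.9 * np.arange(m))
u = rng.standard_normal(N)
y = np.convolve(u, g_true)[:N] + 0.1 * rng.standard_normal(N)

# Regressor (Toeplitz) matrix: y[t] = sum_k g[k] * u[t-k] + e[t]
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(m)]
                for t in range(N)])

# TC kernel P[i,j] = c * lam^max(i,j) encodes smooth exponential decay
c, lam, sig2 = 1.0, 0.9, 0.01
idx = np.arange(m)
P = c * lam ** np.maximum.outer(idx, idx)

# Regularized least squares: (Phi'Phi + sig2 * P^-1)^-1 Phi'y
g_hat = np.linalg.solve(Phi.T @ Phi + sig2 * np.linalg.inv(P), Phi.T @ y)
print(np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```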
Measuring an entropy in heavy ion collisions
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Wosiek, J.
1999-03-01
We propose to use Ma's coincidence method to measure the entropy of the system created in heavy ion collisions. Moreover, we estimate, in a simple model, the values of the parameters for which thermodynamical behaviour sets in.
3PE: A Tool for Estimating Groundwater Flow Vectors
Evaluation of hydraulic gradients and the associated groundwater flow rates and directions is a fundamental aspect of hydrogeologic characterization. Many methods, ranging in complexity from simple three-point solution techniques to complex numerical models of groundwater flow, ...
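The simplest of these, the classic three-point solution, fits a plane through three head observations. A minimal sketch (well coordinates and heads are illustrative):

```python
import numpy as np

def three_point_gradient(p1, p2, p3):
    """Hydraulic gradient from three well observations (x, y, head).
    Fits the plane h = a*x + b*y + c; flow is down-gradient, i.e. -(a, b)."""
    A = np.array([[p[0], p[1], 1.0] for p in (p1, p2, p3)])
    h = np.array([p[2] for p in (p1, p2, p3)])
    a, b, _ = np.linalg.solve(A, h)
    magnitude = np.hypot(a, b)
    direction = np.degrees(np.arctan2(-b, -a))  # flow direction, CCW from +x
    return magnitude, direction

# Wells at (x, y) in metres with heads in metres
print(three_point_gradient((0, 0, 10.0), (100, 0, 9.5), (0, 100, 9.8)))
```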
Assessment of pollutant mean concentrations in the Yangtze estuary based on MSN theory.
Ren, Jing; Gao, Bing-Bo; Fan, Hai-Mei; Zhang, Zhi-Hong; Zhang, Yao; Wang, Jin-Feng
2016-12-15
Reliable assessment of water quality is a critical issue for estuaries. Nutrient concentrations show significant spatial distinctions between areas under the influence of fresh-sea water interaction and anthropogenic effects. For this situation, given the limitations of general mean estimation approaches, a new method for surfaces with non-homogeneity (MSN) was applied to obtain optimized linear unbiased estimations of the mean nutrient concentrations in the study area in the Yangtze estuary from 2011 to 2013. Other mean estimation methods, including block Kriging (BK), simple random sampling (SS) and stratified sampling (ST) inference, were applied simultaneously for comparison. Their performance was evaluated by estimation error. The results show that MSN had the highest accuracy, while SS had the highest estimation error. ST and BK were intermediate in terms of their performance. Thus, MSN is an appropriate method that can be adopted to reduce the uncertainty of mean pollutant estimation in estuaries. Copyright © 2016 Elsevier Ltd. All rights reserved.
Estimation of Skidding Offered by Ackermann Mechanism
NASA Astrophysics Data System (ADS)
Rao, Are Padma; Venkatachalam, Rapur
2016-04-01
Steering in a four-wheeler is commonly provided by the Ackermann mechanism. Though it cannot always satisfy the correct steering condition, it is very popular because of its simple nature. Correct steering avoids skidding of the tires and thereby enhances their lives, as tire wear is reduced. In this paper the Ackermann mechanism is analyzed for its performance, and a method of estimating the skidding due to improper steering is proposed. Two parameters are identified with which the length of skidding can be estimated.
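A toy illustration of the ingredients involved: the Ackermann condition cot(d_o) − cot(d_i) = track/wheelbase fixes the ideal outer wheel angle, and the mismatch between the angle a linkage actually delivers and the ideal one can be converted into a crude skid-length figure. This is a simplified stand-in, not the paper's estimation procedure:

```python
import numpy as np

L_wb, track = 2.6, 1.5  # wheelbase and track width (m), illustrative

def correct_outer_angle(delta_i):
    """Outer wheel angle satisfying cot(d_o) - cot(d_i) = track / wheelbase."""
    return np.arctan(1.0 / (1.0 / np.tan(delta_i) + track / L_wb))

delta_i = np.deg2rad(25.0)
delta_o_ideal = correct_outer_angle(delta_i)
delta_o_actual = delta_o_ideal + np.deg2rad(0.8)   # hypothetical linkage error

# Crude skid measure: lateral mismatch accumulated over a quarter turn
R = L_wb / np.tan(delta_o_ideal)                   # ideal turn radius (m)
arc = 0.5 * np.pi * R                              # quarter-turn path length
skid = arc * abs(np.tan(delta_o_actual) - np.tan(delta_o_ideal))
print(np.rad2deg(delta_o_ideal), skid)             # ~20.2 deg, ~0.18 m
```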
Safaei-Asl, Afshin; Enshaei, Mercede; Heydarzadeh, Abtin; Maleknejad, Shohreh
2016-01-01
Assessment of glomerular filtration rate (GFR) is an important tool for monitoring renal function. Given the limitations of available methods, we aimed to calculate GFR using cystatin C (Cys C)-based formulas and to determine their correlation with current methods. We studied 72 children (38 boys and 34 girls) with renal disorders. Twenty-four-hour urinary creatinine (Cr) clearance was the gold-standard method. GFR was measured with the Schwartz formula and with Cys C-based formulas (Grubb, Hoek, Larsson and Simple), and the correlations of these formulas were determined. Using the Pearson correlation coefficient, a significant positive correlation between all formulas and the standard method was seen (R² for the Schwartz, Hoek, Larsson, Grubb and Simple formulas was 0.639, 0.722, 0.705, 0.712 and 0.722, respectively) (P<0.001). Cys C-based formulas predicted the variance of the standard-method results with high power. These formulas correlated with the Schwartz formula with R² of 0.62-0.65 (intermediate correlation). Linear regression with a constant (y-intercept) revealed that the Larsson, Hoek and Grubb formulas estimate GFR with no statistical difference from the standard method, whereas the Schwartz and Simple formulas overestimate GFR. This study shows that Cys C-based formulas have a strong relationship with 24-hour urinary Cr clearance. Hence, they can determine GFR in children with kidney injury more easily and with sufficient accuracy. This helps the physician diagnose renal disease in early stages and improves the prognosis.
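For reference, sketches of the formulas involved as they are commonly cited; published coefficients vary slightly between sources, so treat the constants as illustrative:

```python
def gfr_schwartz(height_cm, serum_cr_mg_dl, k=0.55):
    """Original Schwartz estimate (mL/min/1.73 m^2); k depends on age/sex."""
    return k * height_cm / serum_cr_mg_dl

def gfr_hoek(cys_c_mg_l):
    """Hoek formula, as commonly cited: -4.32 + 80.35 / CysC."""
    return -4.32 + 80.35 / cys_c_mg_l

def gfr_larsson(cys_c_mg_l):
    """Larsson formula, as commonly cited: 77.24 * CysC^-1.2623."""
    return 77.24 * cys_c_mg_l ** -1.2623

print(gfr_schwartz(140, 0.8), gfr_hoek(1.1), gfr_larsson(1.1))
```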
Joy, Abraham; Anim-Danso, Emmanuel; Kohn, Joachim
2009-01-01
Methods for the detection and estimation of diphosgene and triphosgene are described. These compounds are widely used phosgene precursors which produce an intensely colored purple pentamethine oxonol dye when reacted with 1,3-dimethylbarbituric acid (DBA) and pyridine (or a pyridine derivative). Two quantitative methods are described, based on either UV absorbance or fluorescence of the oxonol dye. Detection limits are ~ 4 µmol/L by UV and <0.4 µmol/L by fluorescence. The third method is a test strip for the simple and rapid detection and semi-quantitative estimation of diphosgene and triphosgene, using a filter paper embedded with dimethylbarbituric acid and poly(4-vinylpyridine). Addition of a test solution to the paper causes a color change from white to light blue at low concentrations and to pink at higher concentrations of triphosgene. The test strip is useful for quick on-site detection of triphosgene and diphosgene in reaction mixtures. The test strip is easy to perform and provides clear signal readouts indicative of the presence of phosgene precursors. The utility of this method was demonstrated by the qualitative determination of residual triphosgene during the production of poly(Bisphenol A carbonate). PMID:19782219
ERIC Educational Resources Information Center
Samejima, Fumiko
The rationale behind the method of estimating the operating characteristics of discrete item responses when the test information of the Old Test is not constant was presented previously. In the present study, two subtests of the Old Test, i.e., Subtests 1 and 2, each of which has a different non-constant test information function, are used in…
Modulational estimate for the maximal Lyapunov exponent in Fermi-Pasta-Ulam chains
NASA Astrophysics Data System (ADS)
Dauxois, Thierry; Ruffo, Stefano; Torcini, Alessandro
1997-12-01
In the framework of the Fermi-Pasta-Ulam (FPU) model, we show a simple method to give an accurate analytical estimate of the maximal Lyapunov exponent at high energy density. The method is based on the computation of the mean value of the modulational instability growth rates associated with the unstable modes. Moreover, we show that the strong stochasticity threshold found in the β-FPU system is closely related to a transition in tangent space, the Lyapunov eigenvector being more localized in space at high energy.
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.
Saccà, Alessandro
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
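The core geometric relation is easy to state: a body inscribed in a cylinder whose base equals its minor axis and whose cross-section equals its 2D area has 2/3 of the cylinder's volume, exactly so for spheres and spheroids. A sketch, with the 'unellipticity' coefficient defaulted to 1:

```python
import numpy as np

def biovolume(area_2d, minor_axis, unellipticity=1.0):
    """Archimedes-style biovolume: V = (2/3) * A * b * c, where A is the
    projected (2D) area, b the minor axis, and c a shape correction."""
    return (2.0 / 3.0) * area_2d * minor_axis * unellipticity

# Sanity check on a sphere of radius r: should recover 4/3 * pi * r^3
r = 5.0                          # um
area = np.pi * r ** 2            # projected area of the sphere
print(biovolume(area, 2 * r))    # ~ 523.6 um^3
print(4.0 / 3.0 * np.pi * r ** 3)
```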
NASA Astrophysics Data System (ADS)
Cai, Jianhua
2017-05-01
The time-frequency analysis method represents a signal as a function of both time and frequency and is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of non-stationary magnetotelluric (MT) signals. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows the response-parameter content to be imaged as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data-processing method based on the Fourier transform. All results show that the apparent resistivities and phases calculated with the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of traditional Fourier methods, and the resulting estimates minimise the estimation bias caused by the non-stationary characteristics of MT data.
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
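A generic innovation-based sketch of the residual-tuning idea on a scalar random walk, adapting the measurement noise R from the sample second moment of the residuals; this follows the general technique rather than the specific sequential equations of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
# Scalar random-walk truth with unknown measurement noise
q_true, r_true, n = 1e-4, 0.25, 2000
x = np.cumsum(np.sqrt(q_true) * rng.standard_normal(n))
z = x + np.sqrt(r_true) * rng.standard_normal(n)

xh, P, Q, R = 0.0, 1.0, 1e-4, 1.0     # deliberately wrong initial R
window = []
for k in range(n):
    P += Q                             # predict (state transition F = 1)
    nu = z[k] - xh                     # innovation (measurement residual)
    K = P / (P + R)
    xh += K * nu
    P *= (1.0 - K)
    # Residual tuning: E[nu^2] = P + R, so R ~ mean(nu^2) - P
    window.append(nu * nu)
    if len(window) > 100:
        window.pop(0)
        R = max(np.mean(window) - P, 1e-6)
print(R)  # drifts toward r_true = 0.25
```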
Radiance and atmosphere propagation-based method for the target range estimation
NASA Astrophysics Data System (ADS)
Cho, Hoonkyung; Chun, Joohwan
2012-06-01
Target range estimation in modern combat systems is traditionally based on radar and active sonar. However, the performance of such active sensor devices is degraded tremendously by jamming signals from the enemy. This paper proposes a simple range estimation method between the target and the sensor. Passive IR sensors measure the infrared (IR) radiance radiating from objects at different wavelengths, and this method is robust against electromagnetic jamming. The measured target radiance at each wavelength depends on the emissive properties of the target material and is attenuated by various factors, in particular the distance between the sensor and the target and the atmospheric environment. MODTRAN is a tool that models the atmospheric propagation of electromagnetic radiation. Based on the result from MODTRAN and the measured radiance, the target range is estimated. To statistically analyze the performance of the proposed method, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao lower bound (CRLB) via the probability density function of the measured radiance, and we compare the CRLB with the variance of the ML estimate using Monte-Carlo simulation.
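A toy version with single-band Beer-Lambert attenuation standing in for MODTRAN's band-wise transmittance (all constants illustrative):

```python
import numpy as np

# Toy single-band model: measured radiance L = L0 * exp(-alpha * R);
# MODTRAN would supply the band-wise transmittance instead of exp(-alpha*R).
alpha, L0, R_true = 0.12, 1.0, 8.0   # 1/km, normalized radiance, km
rng = np.random.default_rng(5)
meas = L0 * np.exp(-alpha * R_true) * (1 + 0.02 * rng.standard_normal(1000))

# Invert the sample mean to estimate range; the spread of repeated
# estimates in a Monte-Carlo loop can then be compared with the CRLB.
R_hat = -np.log(np.mean(meas) / L0) / alpha
print(R_hat)  # close to 8.0
```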
Dosimetry in x-ray-based breast imaging
NASA Astrophysics Data System (ADS)
Dance, David R.; Sechopoulos, Ioannis
2016-10-01
The estimation of the mean glandular dose to the breast (MGD) for x-ray based imaging modalities forms an essential part of quality control and is needed for risk estimation and for system design and optimisation. This review considers the development of methods for estimating the MGD for mammography, digital breast tomosynthesis (DBT) and dedicated breast CT (DBCT). Almost all of the methodology used employs Monte Carlo calculated conversion factors to relate the measurable quantity, generally the incident air kerma, to the MGD. After a review of the size and composition of the female breast, the various mathematical models used are discussed, with particular emphasis on models for mammography. These range from simple geometrical shapes, to the more recent complex models based on patient DBCT examinations. The possibility of patient-specific dose estimates is considered as well as special diagnostic views and the effect of breast implants. Calculations using the complex models show that the MGD for mammography is overestimated by about 30% when the simple models are used. The design and uses of breast-simulating test phantoms for measuring incident air kerma are outlined and comparisons made between patient and phantom-based dose estimates. The most widely used national and international dosimetry protocols for mammography are based on different simple geometrical models of the breast, and harmonisation of these protocols using more complex breast models is desirable.
Meta-Analysis of Rare Binary Adverse Event Data
Bhaumik, Dulal K.; Amatya, Anup; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D.
2013-01-01
We examine the use of fixed-effects and random-effects moment-based meta-analytic methods for analysis of binary adverse event data. Special attention is paid to the case of rare adverse events which are commonly encountered in routine practice. We study estimation of model parameters and between-study heterogeneity. In addition, we examine traditional approaches to hypothesis testing of the average treatment effect and detection of the heterogeneity of treatment effect across studies. We derive three new methods, simple (unweighted) average treatment effect estimator, a new heterogeneity estimator, and a parametric bootstrapping test for heterogeneity. We then study the statistical properties of both the traditional and new methods via simulation. We find that in general, moment-based estimators of combined treatment effects and heterogeneity are biased and the degree of bias is proportional to the rarity of the event under study. The new methods eliminate much, but not all of this bias. The various estimators and hypothesis testing methods are then compared and contrasted using an example dataset on treatment of stable coronary artery disease. PMID:23734068
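The simple (unweighted) average estimator is easy to state. A sketch with a 0.5 continuity correction for zero cells (data illustrative):

```python
import numpy as np

def simple_average_log_or(events_t, n_t, events_c, n_c, cc=0.5):
    """Unweighted average of per-study log odds ratios with a continuity
    correction cc added to every cell (helpful when events are rare)."""
    a, b = events_t + cc, n_t - events_t + cc
    c, d = events_c + cc, n_c - events_c + cc
    log_or = np.log(a * d / (b * c))
    return log_or.mean(), log_or.std(ddof=1) / np.sqrt(len(log_or))

# Five small studies with rare events (treatment vs control)
et = np.array([1, 0, 2, 1, 0]); nt = np.array([100, 120, 90, 110, 80])
ec = np.array([3, 1, 4, 2, 1]); nc = np.array([100, 115, 95, 105, 85])
print(simple_average_log_or(et, nt, ec, nc))
```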
NASA Astrophysics Data System (ADS)
Kuzenov, V. V.; Ryzhkov, S. V.
2017-02-01
The paper formulates an engineering physical-mathematical model of the aerothermodynamics of a hypersonic flight vehicle (HFV) in laminar and turbulent boundary layers (the model is designed for approximate estimates of the convective heat flow in the speed range M = 6-28 and the altitude range H = 20-80 km). 2D calculations of convective heat flows for bodies of simple geometric form (individual elements of the HFV design) are presented.
Pandit, Jaideep J; Tavare, Aniket
2011-07-01
It is important that a surgical list is planned to utilise as much of the scheduled time as possible while not over-running, because this can lead to cancellation of operations. We wished to assess whether, theoretically, the known duration of individual operations could be used quantitatively to predict the likely duration of the operating list. In a university hospital setting, we first assessed the extent to which the current ad-hoc method of operating list planning was able to match the scheduled operating list times for 153 consecutive historical lists. Using receiver operating curve analysis, we assessed the ability of an alternative method to predict operating list duration for the same operating lists. This method uses a simple formula: the sum of individual operation times and a pooled standard deviation of these times. We used the operating list duration estimated from this formula to generate a probability that the operating list would finish within its scheduled time. Finally, we applied the simple formula prospectively to 150 operating lists, 'shadowing' the current ad-hoc method, to confirm the predictive ability of the formula. The ad-hoc method was very poor at planning: 50% of historical operating lists were under-booked and 37% over-booked. In contrast, the simple formula predicted the correct outcome (under-run or over-run) for 76% of these operating lists. The calculated probability that a planned series of operations will over-run or under-run was found useful in developing an algorithm to adjust the planned cases optimally. In the prospective series, 65% of operating lists were over-booked and 10% were under-booked. The formula predicted the correct outcome for 84% of operating lists. A simple quantitative method of estimating operating list duration for a series of operations leads to an algorithm (readily created on an Excel spreadsheet, http://links.lww.com/EJA/A19) that can potentially improve operating list planning.
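The formula itself is one line: the list duration is treated as normal with mean equal to the sum of the operation means and variance equal to the sum of the operation variances. A sketch (the paper uses a pooled standard deviation; per-operation SDs are used here for simplicity):

```python
import numpy as np
from scipy.stats import norm

def p_finish_on_time(mean_durations, sds, scheduled_minutes):
    """Probability a list of independent operations finishes on schedule:
    total duration ~ Normal(sum of means, sum of variances)."""
    total_mean = np.sum(mean_durations)
    total_sd = np.sqrt(np.sum(np.square(sds)))
    return norm.cdf((scheduled_minutes - total_mean) / total_sd)

# Four booked cases (historical mean and SD per operation, in minutes)
means = [90, 45, 60, 120]
sds = [20, 10, 15, 30]
print(p_finish_on_time(means, sds, scheduled_minutes=480))
```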
Dripps, W.R.; Bradbury, K.R.
2007-01-01
Quantifying the spatial and temporal distribution of natural groundwater recharge is usually a prerequisite for effective groundwater modeling and management. As flow models become increasingly utilized for management decisions, there is an increased need for simple, practical methods to delineate recharge zones and quantify recharge rates. Existing models for estimating recharge distributions are data intensive, require extensive parameterization, and take a significant investment of time in order to establish. The Wisconsin Geological and Natural History Survey (WGNHS) has developed a simple daily soil-water balance (SWB) model that uses readily available soil, land cover, topographic, and climatic data in conjunction with a geographic information system (GIS) to estimate the temporal and spatial distribution of groundwater recharge at the watershed scale for temperate humid areas. To demonstrate the methodology and the applicability and performance of the model, two case studies are presented: one for the forested Trout Lake watershed of north central Wisconsin, USA and the other for the urban-agricultural Pheasant Branch Creek watershed of south central Wisconsin, USA. Overall, the SWB model performs well and presents modelers and planners with a practical tool for providing recharge estimates for modeling and water resource planning purposes in humid areas. ?? Springer-Verlag 2007.
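A minimal daily bucket model conveys the flavour of an SWB calculation; the partitioning rules and parameter values below are deliberately crude stand-ins for the WGNHS model's soil- and land-cover-specific treatment:

```python
import numpy as np

def daily_recharge(precip, pet, capacity=100.0, s0=50.0, runoff_frac=0.1):
    """Minimal daily soil-water-balance bucket: water above field capacity
    becomes recharge; ET is taken at the potential rate while water lasts."""
    s, recharge = s0, []
    for p, e in zip(precip, pet):
        inflow = p * (1.0 - runoff_frac)   # crude runoff partition
        s = max(s + inflow - e, 0.0)       # ET limited by available storage
        r = max(s - capacity, 0.0)         # surplus drains to recharge
        s -= r
        recharge.append(r)
    return np.array(recharge)

rng = np.random.default_rng(6)
precip = rng.gamma(0.4, 10.0, size=365)                       # daily rain (mm)
pet = 2.0 + 1.5 * np.sin(np.arange(365) / 365 * 2 * np.pi)    # seasonal PET (mm)
print(daily_recharge(precip, pet).sum(), "mm/yr")
```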
Simple to complex modeling of breathing volume using a motion sensor.
John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-06-01
To compare simple and complex modeling techniques for estimating categories of low, medium and high ventilation (VE) from ActiGraph™ activity counts. Vertical-axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VE were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple modeling techniques (multiple regression and activity-count cut-point analyses) and one complex technique (random forests) in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than that of the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). ActiGraph™ cut-points for low, medium and high VE were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.
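The reported cut-points make the simplest technique a three-line classifier:

```python
import numpy as np

def ve_category(counts_per_min):
    """Classify ventilation from ActiGraph counts using the reported
    cut-points: low < 1381, medium 1381-3660, high > 3660 cpm."""
    return np.select([counts_per_min < 1381, counts_per_min <= 3660],
                     ["low", "medium"], default="high")

print(ve_category(np.array([500, 2000, 5000])))  # ['low' 'medium' 'high']
```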
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach
Chadha, V. K.; Laxminarayan, R.; Arinaminpathy, N.
2017-01-01
SUMMARY BACKGROUND: There is an urgent need for improved estimations of the burden of tuberculosis (TB). OBJECTIVE: To develop a new quantitative method based on mathematical modelling, and to demonstrate its application to TB in India. DESIGN: We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and prevalence of smear-positive TB. We first compared model estimates for annual infections per smear-positive TB case using previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. RESULTS: Study model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8–156.3). Results show differences in urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. CONCLUSIONS: Simple models of TB transmission, in conjunction with necessary data, can offer approaches to burden estimation that complement those currently being used. PMID:28284250
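A toy steady-state version of the bookkeeping involved (parameter values illustrative, not the authors'): each prevalent case generates ARTI x N infections across a population of N, and incidence balances prevalence divided by disease duration:

```python
# Toy steady-state calculation (illustrative parameters, not the paper's)
arti = 0.015            # annual risk of tuberculous infection (1.5%/yr)
prevalence = 150e-5     # smear-positive prevalence (150 per 100,000)
duration = 2.0          # mean infectious duration of an untreated case (yr)

# Infections generated per smear-positive case per year:
infections_per_case_year = arti / prevalence
# At steady state, annual incidence ~ prevalence / disease duration:
incidence_per_100k = prevalence / duration * 1e5
print(infections_per_case_year, incidence_per_100k)  # 10.0, 75.0
```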
Mortality estimation from carcass searches using the R-package carcass: a tutorial
Korner-Nievergelt, Fränzi; Behr, Oliver; Brinkmann, Robert; Etterson, Matthew A.; Huso, Manuela M. P.; Dalthorp, Daniel; Korner-Nievergelt, Pius; Roth, Tobias; Niermann, Ivo
2015-01-01
This article is a tutorial for the R-package carcass. It starts with a short overview of common methods used to estimate mortality based on carcass searches. Then, it guides step by step through a simple example. First, the proportion of animals that fall into the search area is estimated. Second, carcass persistence time is estimated based on experimental data. Third, searcher efficiency is estimated. Fourth, these three estimated parameters are combined to obtain the probability that an animal killed is found by an observer. Finally, this probability is used together with the observed number of carcasses found to obtain an estimate for the total number of killed animals together with a credible interval.
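The tutorial's four steps collapse to a Horvitz-Thompson-style correction once the three probabilities are estimated. A sketch with illustrative values (the R package also propagates the uncertainty of each step):

```python
# Steps 1-3 of the tutorial yield three probabilities (values illustrative)
p_area = 0.60        # proportion of carcasses falling in the searched area
p_persist = 0.70     # probability a carcass persists until a search
p_detect = 0.80      # searcher efficiency
carcasses_found = 6  # observed count

p_found = p_area * p_persist * p_detect   # step 4: overall detection prob.
n_killed = carcasses_found / p_found      # estimated total number killed
print(round(n_killed, 1))                 # ~ 17.9
```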
Prasad, Thatipamula R; Joseph, Siji; Kole, Prashant; Kumar, Anoop; Subramanian, Murali; Rajagopalan, Sudha; Kr, Prabhakar
2017-11-01
The objective of the current work was to develop a 'green chemistry'-compliant, selective and sensitive supercritical fluid chromatography-tandem mass spectrometry method for the simultaneous estimation of risperidone (RIS) and its chiral metabolites in rat plasma. Methodology & results: An Agilent 1260 Infinity analytical supercritical fluid chromatography system resolved RIS and its chiral metabolites within a runtime of 6 min using a gradient chromatography method. Using simple protein-precipitation sample preparation followed by mass spectrometric detection, a sensitivity of 0.92 nM (lower limit of quantification) was achieved. With linearity over four log units (0.91-7500 nM), the method was found to be selective, accurate, precise and robust. The method was validated and successfully applied to the simultaneous estimation of RIS and its 9-hydroxyrisperidone metabolites (R and S individually) after intravenous and peroral administration to rats.
Shaila, Mulki; Pai, G Prakash; Shetty, Pushparaj
2013-01-01
To evaluate the salivary protein concentration in gingivitis and periodontitis patients and to compare parameters such as salivary total protein, salivary albumin, salivary flow rate, pH and buffering capacity in both young and elderly patients using simple methods. One hundred and twenty subjects were grouped by age as young or elderly. Each group was divided into subgroups of 20 subjects each: controls, gingivitis and periodontitis. Unstimulated whole saliva was collected from the patients, and the flow rate was noted during collection of the sample. Salivary protein was estimated using the Biuret method and salivary albumin using the bromocresol green method. pH was measured with a pH meter and buffering capacity was analyzed with the titration method. Student's t-test, Fisher's test (ANOVA) and Tukey HSD (ANOVA) tests were used for statistical analysis. A highly significant rise in salivary total protein and albumin concentrations was noted in gingivitis and periodontitis subjects, both young and elderly. An overall decrease in salivary flow rate was observed among the elderly, and the salivary flow rate of women was significantly lower than that of men. Significant associations of salivary total protein and albumin with gingivitis and periodontitis were found with simple biochemical tests. A decrease in salivary flow rate among the elderly and among women was noted.
Xu, Chonggang; Gertner, George
2011-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.
2016-11-15
A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary-mode-polarization profile reflectometer time-delay data and does not require direct profile inversion. Because of its simple data processing, the technique can readily be implemented via a field-programmable gate array, so as to provide a real-time density gradient estimate suitable for use in plasma control systems such as those envisioned for ITER, and possibly for DIII-D and the Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
Noninvasive control of dental calculus removal: qualification of two fluorescence methods
NASA Astrophysics Data System (ADS)
Gonchukov, S.; Sukhinina, A.; Bakhmutov, D.; Biryukova, T.
2013-02-01
The main condition for periodontitis prevention is the complete removal of calculus from the tooth surface. This procedure should be fulfilled without harming the adjacent unaffected tooth tissues. Nevertheless, the problem of sensitive and precise detection of the tooth-calculus interface exists, and a potential risk of hard-tissue damage remains. In this work it was shown that fluorescence diagnostics during calculus removal can successfully be used for precise noninvasive detection of the calculus-tooth interface. Moreover, a simple implementation of this method, free of the need for a spectrometer, can be employed. Such a simple calculus-detection set-up can be integrated with calculus-removal devices.
Learning investment indicators through data extension
NASA Astrophysics Data System (ADS)
Dvořák, Marek
2017-07-01
Stock prices in the form of time series were analysed using univariate and multivariate statistical methods. After simple data preprocessing in the form of logarithmic differences, we augmented this univariate time series into a multivariate representation. The method uses sliding windows to calculate several dozen new variables using simple statistics, such as first and second moments, as well as more complicated statistics, such as autoregression coefficients and residual analysis, followed by an optional quadratic transformation used for further data extension. These were used as explanatory variables in a regularized (LASSO) logistic regression that estimates a Buy-Sell Index (BSI) from real stock market data.
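A sketch of the pipeline with pandas rolling windows and an L1-penalized logistic regression; the feature set, window lengths, and the next-day-up definition of the BSI label are illustrative assumptions:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
price = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(1000)))
r = pd.Series(np.log(price)).diff().dropna()       # log differences

# Augment the univariate series with sliding-window statistics
X = pd.DataFrame({
    f"{stat}_{w}": getattr(r.rolling(w), stat)()
    for w in (5, 10, 20) for stat in ("mean", "std", "skew")
})
X["lag1"] = r.shift(1)
y = (r.shift(-1) > 0).astype(int)                  # toy label: next move up?

data = pd.concat([X, y.rename("y")], axis=1).dropna().iloc[:-1]  # drop unlabeled last row
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(data.drop(columns="y"), data["y"])
print((model.coef_ != 0).sum(), "features kept by the L1 penalty")
```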
NASA Astrophysics Data System (ADS)
Sutawanir
2015-12-01
Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation and pension funding. Some known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table and the Japan mortality table. For actuarial applications, tables are constructed for different environments, such as single-decrement, double-decrement and multiple-decrement settings. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distributional models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach. The distributional assumptions are the uniform distribution of deaths (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameters. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE), but they do not use the complete mortality data. Maximum likelihood exploits all available information in mortality estimation, although some MLE equations are complicated and must be solved numerically. The article focuses on single-decrement estimation using moment and maximum likelihood estimation, and an extension to double decrements is introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
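Under the constant-force assumption the single-decrement MLE is simply deaths divided by total exposure. A sketch on simulated policy-year data:

```python
import numpy as np

# Constant-force (exponential) model on one age interval: the MLE of the
# force of mortality is deaths / total exposure, and q = 1 - exp(-mu).
rng = np.random.default_rng(8)
n = 10000
lifetimes = rng.exponential(1 / 0.02, size=n)   # true mu = 0.02 per year
observed = np.minimum(lifetimes, 1.0)           # observe one policy year
died = lifetimes < 1.0

mu_mle = died.sum() / observed.sum()            # deaths / central exposure
q_hat = 1 - np.exp(-mu_mle)                     # one-year death probability
print(mu_mle, q_hat)                            # both ~ 0.02
```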
Estimating evapotranspiration in natural and constructed wetlands
Lott, R. Brandon; Hunt, Randall J.
2001-01-01
Difficulties in accurately calculating evapotranspiration (ET) in wetlands can lead to inaccurate water balances, information that is important for many compensatory mitigation projects. Simple meteorological methods or off-site ET data are often used to estimate ET, but these approaches do not include potentially important site-specific factors such as plant community, root-zone water levels, and soil properties. The objective of this study was to compare a commonly used meteorological estimate of potential evapotranspiration (PET) with direct measurements of ET (lysimeters and water-table fluctuations) and small-scale root-zone geochemistry in a natural and a constructed wetland system. Unlike what has been commonly noted, the results of the study demonstrated that the commonly used Penman combination method of estimating PET underestimated the ET that was measured directly in the natural wetland over most of the growing season. This result is likely due to surface heterogeneity and related roughness effects not included in the simple PET estimate. The meteorological method more closely approximated season-long measured ET rates in the constructed wetland but may overestimate the ET rate late in the growing season. ET rates were also temporally variable in the wetlands over a range of time scales because they can be influenced by the relation of the water table to the root zone and by the timing of plant senescence. Small-scale geochemical sampling of the shallow root zone provided an independent evaluation of ET rates, supporting the identification of higher ET rates in the natural wetland and differences in temporal ET rates due to the timing of senescence. These discrepancies illustrate potential problems with extrapolating off-site estimates of ET, or single measurements of ET from a site, over space or time.
Estimation of uncertainty in tracer gas measurement of air change rates.
Iizuka, Atsushi; Okuizumi, Yumiko; Yanagisawa, Yukio
2010-12-01
Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, the task is complicated by the fact that many buildings are not a single fully mixed zone, so many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of the air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2, but accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of <33%. Using this method, overestimation of the air change rate can be avoided. The proposed estimation method will be useful in practical ventilation measurements.
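For a single fully mixed zone the passive-sampler estimate is one division; a sketch (values illustrative):

```python
# Steady-state single-zone tracer balance: emission F (mL/h) is diluted to
# concentration C (ppm) in a zone of volume V (m^3); air change rate N (1/h):
#   C = F / (N * V)  =>  N = F / (C * V)
F_ml_per_h = 50.0
V_m3 = 30.0
C_ppm = 2.0                    # measured average tracer concentration
C_ml_per_m3 = C_ppm            # 1 ppm (by volume) = 1 mL tracer per m^3 air
N_ach = F_ml_per_h / (C_ml_per_m3 * V_m3)
print(N_ach, "air changes per hour")   # -> ~0.83
```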
Estimating population trends with a linear model: Technical comments
Sauer, John R.; Link, William A.; Royle, J. Andrew
2004-01-01
Controversy has sometimes arisen over whether there is a need to accommodate the limitations of survey design in estimating population change from the count data collected in bird surveys. Analyses of surveys such as the North American Breeding Bird Survey (BBS) can be quite complex; it is natural to ask if the complexity is necessary, or whether the statisticians have run amok. Bart et al. (2003) propose a very simple analysis involving nothing more complicated than simple linear regression, and contrast their approach with model-based procedures. We review the assumptions implicit in their proposed method, and document that these assumptions are unlikely to be valid for surveys such as the BBS. One fundamental limitation of a purely design-based approach is the absence of controls for factors that influence detection of birds at survey sites. We show that failure to model observer effects in survey data leads to substantial bias in estimation of population trends from BBS data for the 20 species that Bart et al. (2003) used as the basis of their simulations. Finally, we note that the simulations presented in Bart et al. (2003) do not provide a useful evaluation of their proposed method, nor do they provide a valid comparison to the estimating-equations alternative they consider.
NASA Astrophysics Data System (ADS)
Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya
1991-10-01
The critical exponents of the bond percolation model are calculated on the D-dimensional (D = 2, 3, ...) simple cubic lattice on the basis of Suzuki's coherent anomaly method (CAM), making use of a series of the pair, square-cactus, and square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of the critical exponents α, β, γ, and ν in comparison with those estimated by other methods. It is also shown that the results of the pair and square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and square-cactus lattices, respectively, in the presence of a ghost field, without recourse to the s→1 limit of the s-state Potts model.
Cho, Il Haeng; Park, Kyung S; Lim, Chang Joo
2010-02-01
In this study, we described the characteristics of five different biological age (BA) estimation algorithms: (i) multiple linear regression, (ii) principal component analysis, and the somewhat unique methods developed by (iii) Hochschild, (iv) Klemera and Doubal, and (v) a variant of Klemera and Doubal's method. The objective of this study is to find the most appropriate method of BA estimation by examining the association between the Work Ability Index (WAI) and the differences of each algorithm's estimates from chronological age (CA). The WAI was found to be a measure that reflects an individual's current health status rather than age-related deterioration, without a serious dependency on CA. Experiments were conducted on 200 Korean male participants using a BA estimation system designed to be non-invasive, simple to operate, and based on human function. Using the empirical data, BA estimation as well as various analyses, including correlation analysis and discriminant function analysis, was performed. As a result, the empirical data confirmed that Klemera and Doubal's method with uncorrelated variables from principal component analysis produces relatively reliable and acceptable BA estimates. 2009 Elsevier Ireland Ltd. All rights reserved.
Estimation of fish biomass using environmental DNA.
Takahara, Teruhiko; Minamoto, Toshifumi; Yamanaka, Hiroki; Doi, Hideyuki; Kawabata, Zen'ichiro
2012-01-01
Environmental DNA (eDNA) from aquatic vertebrates has recently been used to estimate the presence of a species. We hypothesized that fish release DNA into the water at a rate commensurate with their biomass. Thus, the concentration of eDNA of a target species may be used to estimate the species biomass. We developed an eDNA method to estimate the biomass of common carp (Cyprinus carpio L.) using laboratory and field experiments. In the aquarium, the concentration of eDNA changed initially, but reached an equilibrium after 6 days. Temperature had no effect on eDNA concentrations in aquaria. The concentration of eDNA was positively correlated with carp biomass in both aquaria and experimental ponds. We used this method to estimate the biomass and distribution of carp in a natural freshwater lagoon. We demonstrated that the distribution of carp eDNA concentration was explained by water temperature. Our results suggest that biomass data estimated from eDNA concentration reflects the potential distribution of common carp in the natural environment. Measuring eDNA concentration offers a non-invasive, simple, and rapid method for estimating biomass. This method could inform management plans for the conservation of ecosystems.
Estimating survival rates with time series of standing age‐structure data
Udevitz, Mark S.; Gogan, Peter J.
2012-01-01
It has long been recognized that age‐structure data contain useful information for assessing the status and dynamics of wildlife populations. For example, age‐specific survival rates can be estimated with just a single sample from the age distribution of a stable, stationary population. For a population that is not stable, age‐specific survival rates can be estimated using techniques such as inverse methods that combine time series of age‐structure data with other demographic data. However, estimation of survival rates using these methods typically requires numerical optimization, a relatively long time series of data, and smoothing or other constraints to provide useful estimates. We developed general models for possibly unstable populations that combine time series of age‐structure data with other demographic data to provide explicit maximum likelihood estimators of age‐specific survival rates with as few as two years of data. As an example, we applied these methods to estimate survival rates for female bison (Bison bison) in Yellowstone National Park, USA. This approach provides a simple tool for monitoring survival rates based on age‐structure data.
A simple linear model for estimating ozone AOT40 at forest sites from raw passive sampling data.
Ferretti, Marco; Cristofolini, Fabiana; Cristofori, Antonella; Gerosa, Giacomo; Gottardini, Elena
2012-08-01
A rapid, empirical method is described for estimating weekly AOT40 from ozone concentrations measured with passive samplers at forest sites. The method is based on linear regression and was developed after three years of measurements in Trentino (northern Italy). It was tested against an independent set of data from passive sampler sites across Italy. It provides good weekly estimates compared with those measured by conventional monitors (0.85 ≤ R² ≤ 0.970; 97 ≤ RMSE ≤ 302). Estimates obtained using passive sampling at forest sites are comparable to those obtained by another estimation method based on modelling hourly concentrations (R² = 0.94; 131 ≤ RMSE ≤ 351). Regression coefficients of passive sampling are similar to those obtained with conventional monitors at forest sites. Testing against an independent dataset generated by passive sampling provided similar results (0.86 ≤ R² ≤ 0.99; 65 ≤ RMSE ≤ 478). Errors tend to accumulate when weekly AOT40 estimates are summed to obtain the total AOT40 over the May-July period, and the median deviation between the two estimation methods based on passive sampling is 11%. The method proposed does not require any assumptions, complex calculation or modelling technique, and can be useful when other estimation methods are not feasible, either in principle or in practice. However, the method is not useful when estimates of hourly concentrations are of interest.
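A minimal sketch of the two ingredients named in the abstract: the AOT40 index itself (hourly exceedances above 40 ppb accumulated over daylight hours, a standard definition) and a generic linear model linking weekly mean concentrations, as a passive sampler would deliver, to weekly AOT40. The daylight mask and the single-regressor form are assumptions, not the authors' exact calibration.

```python
import numpy as np

def aot40(hourly_ppb, daylight_mask):
    """AOT40 (ppb·h): sum of hourly ozone exceedances above 40 ppb,
    accumulated over daylight hours only."""
    c = np.asarray(hourly_ppb)[daylight_mask]
    return np.sum(np.clip(c - 40.0, 0.0, None))

def fit_weekly_model(weekly_means, weekly_aot40):
    """Least-squares fit relating weekly mean concentration (passive sampler)
    to weekly AOT40 from a co-located conventional monitor."""
    b, a = np.polyfit(weekly_means, weekly_aot40, 1)
    return a, b  # AOT40 ≈ a + b * weekly_mean
```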
X-ray peak profile analysis of zinc oxide nanoparticles formed by simple precipitation method
NASA Astrophysics Data System (ADS)
Pelicano, Christian Mark; Rapadas, Nick Joaquin; Magdaluyo, Eduardo
2017-12-01
Zinc oxide (ZnO) nanoparticles were successfully synthesized by a simple precipitation method using zinc acetate and tetramethylammonium hydroxide. The synthesized ZnO nanoparticles were characterized by X-ray diffraction (XRD) analysis and transmission electron microscopy (TEM). The XRD result revealed a hexagonal wurtzite structure for the ZnO nanoparticles. The TEM image showed spherical nanoparticles with an average crystallite size of 6.70 nm. For X-ray peak profile analysis, the Williamson-Hall (W-H) and size-strain plot (SSP) methods were applied to examine the effects of crystallite size and lattice strain on the peak broadening of the ZnO nanoparticles. Based on the calculations, the crystallite sizes and lattice strains estimated by the two methods are in good agreement with each other.
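The Williamson-Hall analysis mentioned here is a linear fit of β·cosθ against 4·sinθ, whose intercept gives the crystallite size and whose slope gives the lattice strain. A minimal sketch, assuming Cu Kα radiation (λ = 0.15406 nm) and the Scherrer constant K = 0.9; both defaults are assumptions, since the abstract does not state them.

```python
import numpy as np

def williamson_hall(two_theta_deg, fwhm_rad, wavelength_nm=0.15406, K=0.9):
    """Williamson-Hall plot: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
    two_theta_deg: peak positions (2-theta, degrees); fwhm_rad:
    instrument-corrected integral breadths in radians.
    Returns (crystallite size D in nm, lattice strain eps)."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    y = np.asarray(fwhm_rad) * np.cos(theta)
    x = 4.0 * np.sin(theta)
    eps, intercept = np.polyfit(x, y, 1)  # slope = strain, intercept = K*lambda/D
    D = K * wavelength_nm / intercept
    return D, eps
```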
Energy Measurement Studies for CO2 Measurement with a Coherent Doppler Lidar System
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.; Vanvalkenburg, Randal L.; Yu, Jirong; Singh, Upendra N.; Kavaya, Michael J.
2010-01-01
Accurate measurement of energy is critical in the application of lidar systems to CO2 measurement. Different techniques for estimating the energy of the online and offline pulses are investigated for post-processing of lidar returns. The cornerstone of these techniques is accurate estimation of the spectrum of the lidar signal and background noise. Since the background noise is not ideal white Gaussian noise, a simple average estimate of the noise level is not well suited to estimating the energy of the lidar signal and noise. A brief review of the methods is presented in this paper.
Evaluation of Methods to Estimate the Surface Downwelling Longwave Flux during Arctic Winter
NASA Technical Reports Server (NTRS)
Chiacchio, Marc; Francis, Jennifer; Stackhouse, Paul, Jr.
2002-01-01
Surface longwave radiation fluxes dominate the energy budget of nighttime polar regions, yet little is known about the relative accuracy of existing satellite-based techniques to estimate this parameter. We compare eight methods to estimate the downwelling longwave radiation flux and validate their performance with measurements from two field programs in the Arctic: the Coordinated Eastern Arctic Experiment (CEAREX) conducted in the Barents Sea during the autumn and winter of 1988, and the Lead Experiment performed in the Beaufort Sea in the spring of 1992. Five of the eight methods were developed for satellite-derived quantities, and three are simple parameterizations based on surface observations. All of the algorithms require information about cloud fraction, which is provided from the NASA-NOAA Television and Infrared Observation Satellite (TIROS) Operational Vertical Sounder (TOVS) polar pathfinder dataset (Path-P); some techniques ingest temperature and moisture profiles (also from Path-P); one-half of the methods assume that clouds are opaque and have a constant geometric thickness of 50 hPa, and three include no thickness information whatsoever. With a somewhat limited validation dataset, the following primary conclusions result: (1) all methods exhibit approximately the same correlations with measurements and rms differences, but the biases range from -34 W/sq m (16% of the mean) to nearly 0; (2) the error analysis described here indicates that the assumption of a 50-hPa cloud thickness is too thin by a factor of 2 on average in polar nighttime conditions; (3) cloud-overlap techniques, which effectively increase mean cloud thickness, significantly improve the results; (4) simple Arctic-specific parameterizations performed poorly, probably because they were developed with surface-observed cloud fractions; and (5) the single algorithm that includes an estimate of cloud thickness exhibits the smallest differences from observations.
Optimized theory for simple and molecular fluids.
Marucho, M; Montgomery Pettitt, B
2007-03-28
An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between the Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eken, Tuna; Mayeda, Kevin; Hofstetter, Abraham; Gok, Rengin; Orgulu, Gonca; Turkelli, Niyazi
A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller, regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., Lg and surface waves), by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, it was found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction. After calibrating the stations ISP, ISKB and MALT for local and regional distances, single-station moment-magnitude estimates (Mw) derived from the coda spectra were in excellent agreement with those determined from multistation waveform modeling inversions, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend Mw estimates to significantly smaller events which could not otherwise be waveform modeled. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.
Hallisey, Elaine; Tai, Eric; Berens, Andrew; Wilt, Grete; Peipins, Lucy; Lewis, Brian; Graham, Shannon; Flanagan, Barry; Lunsford, Natasha Buchanan
2017-08-07
Transforming spatial data from one scale to another is a challenge in geographic analysis. As part of a larger, primary study to determine a possible association between travel barriers to pediatric cancer facilities and adolescent cancer mortality across the United States, we examined methods to estimate mortality within zones at varying distances from these facilities: (1) geographic centroid assignment, (2) population-weighted centroid assignment, (3) simple areal weighting, (4) combined population and areal weighting, and (5) geostatistical areal interpolation. For the primary study, we used county mortality counts from the National Center for Health Statistics (NCHS) and population data by census tract for the United States to estimate zone mortality. In this paper, to evaluate the five mortality estimation methods, we employed address-level mortality data from the state of Georgia in conjunction with census data. Our objective here is to identify the simplest method that returns accurate mortality estimates. The distribution of Georgia county adolescent cancer mortality counts mirrors the Poisson distribution of the NCHS counts for the U.S. Likewise, zone value patterns, along with the error measures of hierarchy and fit, are similar for the state and the nation. Therefore, Georgia data are suitable for methods testing. The mean absolute value arithmetic differences between the observed counts for Georgia and the five methods were 5.50, 5.00, 4.17, 2.74, and 3.43, respectively. Comparing the methods through paired t-tests of absolute value arithmetic differences showed no statistical difference among the methods. However, we found a strong positive correlation (r = 0.63) between estimated Georgia mortality rates and combined weighting rates at zone level. Most importantly, Bland-Altman plots indicated acceptable agreement between paired arithmetic differences of Georgia rates and combined population and areal weighting rates. This research contributes to the literature on areal interpolation, demonstrating that combined population and areal weighting, compared to other tested methods, returns the most accurate estimates of mortality in transforming small counts by county to aggregated counts for large, non-standard study zones. This conceptually simple cartographic method should be of interest to public health practitioners and researchers limited to analysis of data for relatively large enumeration units.
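A minimal sketch of the combined population and areal weighting step that the study found most accurate, assuming the county/zone intersection populations have already been built from census-tract data by areal weighting (the GIS overlay itself is not shown, and all names are illustrative):

```python
from collections import defaultdict

def combined_weighting(county_counts, county_pop, overlap_pop):
    """Combined population and areal weighting (illustrative sketch).
    county_counts[c]   -- mortality count for county c
    county_pop[c]      -- total population of county c
    overlap_pop[(c,z)] -- population of the county/zone intersection, built
                          beforehand by areal weighting of census-tract data
    Each county's count is allocated to zones in proportion to the share of
    the county's population falling inside each zone."""
    zone_est = defaultdict(float)
    for (c, z), pop in overlap_pop.items():
        if county_pop[c] > 0:
            zone_est[z] += county_counts[c] * pop / county_pop[c]
    return dict(zone_est)
```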
[Evaluation of Organ Dose Estimation from Indices of CT Dose Using Dose Index Registry].
Iriuchijima, Akiko; Fukushima, Yasuhiro; Ogura, Akio
Direct measurement of each patient's organ dose from computed tomography (CT) is not possible. Most methods to estimate patient organ dose use Monte Carlo simulation with dedicated software. However, dedicated software is too expensive for small-scale hospitals, so not every hospital can estimate organ dose this way. The purpose of this study was to evaluate a simple method of organ dose estimation using common indices of CT dose. The Monte Carlo simulation software Radimetrics (Bayer) was used for calculating organ dose and analyzing the relationship between indices of CT dose and organ dose. Multidetector CT scanners from two manufacturers were compared (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare). Using stored patient data from Radimetrics, the relationships between indices of CT dose and organ dose were expressed as formulas for estimating organ dose. The accuracy of the estimation method was compared with the results of Monte Carlo simulation using Bland-Altman plots. In the results, SSDE was a feasible index for estimating organ dose in most organs because it reflects each patient's size. The differences in organ dose between estimation and simulation were within 23%. In conclusion, our method of estimating organ dose from indices of CT dose is convenient for clinical use and acceptably accurate.
Calibrating the ECCO ocean general circulation model using Green's functions
NASA Technical Reports Server (NTRS)
Menemenlis, D.; Fu, L. L.; Lee, T.; Fukumori, I.
2002-01-01
Green's functions provide a simple, yet effective, method to test and calibrate general circulation model (GCM) parameterizations, to study and quantify model and data errors, to correct model biases and trends, and to blend estimates from different solutions and data products.
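A minimal numpy sketch of the generic Green's function calibration recipe (not the ECCO implementation): one baseline run, one perturbed run per parameter, kernel columns formed by finite differences, and a weighted least-squares solve for the parameter corrections.

```python
import numpy as np

def greens_function_calibration(baseline, perturbed_runs, deltas, data, data_err):
    """Green's function calibration sketch.
    baseline:          model output sampled at the data points (length m)
    perturbed_runs[j]: model output with parameter j perturbed by deltas[j]
    data, data_err:    observations and their 1-sigma uncertainties
    Builds the kernel G by finite differences and solves the weighted
    least-squares problem  G @ eta ≈ data - baseline  for the corrections."""
    G = np.column_stack([(run - baseline) / d
                         for run, d in zip(perturbed_runs, deltas)])
    W = 1.0 / np.asarray(data_err)  # whiten by data uncertainty
    eta, *_ = np.linalg.lstsq(G * W[:, None], (data - baseline) * W, rcond=None)
    return eta  # estimated parameter adjustments
```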
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. Suboptimal estimates, computed with approximate signal and measurement error statistics, are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
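The optimal estimate described here is the standard minimum mean-squared-error (Gauss-Markov) linear estimator; a minimal sketch under the assumption that the covariances are known, with the composite average recovered as the special case of equal weights 1/n:

```python
import numpy as np

def optimal_average_weights(C_obs, c_obs_avg, var_avg=None):
    """Minimum mean-squared-error linear estimate of a time average from
    irregularly spaced observations (sketch).
    C_obs:     covariance among the observations (signal plus measurement error)
    c_obs_avg: covariance between each observation and the true time average
    var_avg:   variance of the true time average (optional, for the MSE)
    Returns (weights w such that estimate = w @ y, mse or None)."""
    w = np.linalg.solve(C_obs, c_obs_avg)
    mse = None if var_avg is None else var_avg - c_obs_avg @ w
    return w, mse
```

Suboptimal estimates correspond to calling this with approximate covariances; the abstract's point is that even these typically beat the equal-weight composite average.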
A simple randomisation procedure for validating discriminant analysis: a methodological note.
Wastell, D G
1987-04-01
Because the goal of discriminant analysis (DA) is to optimise classification, it designedly exaggerates between-group differences. This bias complicates validation of DA. Jack-knifing has been used for validation but is inappropriate when stepwise selection (SWDA) is employed. A simple randomisation test is presented which is shown to give correct decisions for SWDA. The general superiority of randomisation tests over orthodox significance tests is discussed. Current work on non-parametric methods of estimating the error rates of prediction rules is briefly reviewed.
1983-05-01
occur. 4) It is also true that during a given time period, at a given base, not all of the people in the sample will actually be available for testing... taken sample sizes into consideration, we currently estimate that with few exceptions, we will have adequate samples to perform the analysis of simple ... Balanced Half-Sample Replications (BHSR). His analyses of simple cases have shown that this method is substantially more efficient than the
10 CFR 436.23 - Estimated simple payback time.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
10 CFR 436.23 - Estimated simple payback time.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
10 CFR 436.23 - Estimated simple payback time.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
10 CFR 436.23 - Estimated simple payback time.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
10 CFR 436.23 - Estimated simple payback time.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
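Read literally, the definition shared by these CFR editions is the first year in which cumulative cost savings, net of future non-fuel or non-water costs, reach the investment cost. A minimal sketch of that reading (names illustrative, and discounting deliberately omitted, since a simple payback is undiscounted):

```python
def estimated_simple_payback(investment, annual_savings, annual_nonfuel_costs):
    """Estimated simple payback time per the 10 CFR 436.23 definition (sketch):
    the first year in which cumulative (savings - non-fuel costs) equals or
    exceeds the investment. Inputs are per-year sequences of equal length."""
    cumulative = 0.0
    for year, (s, c) in enumerate(zip(annual_savings, annual_nonfuel_costs),
                                  start=1):
        cumulative += s - c
        if cumulative >= investment:
            return year
    return None  # does not pay back within the analysis horizon

# Example: $10,000 investment, $3,000/yr savings, $500/yr maintenance -> year 4
print(estimated_simple_payback(10_000, [3000] * 6, [500] * 6))
```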
A Spatial Method to Calculate Small-Scale Fisheries Extent
NASA Astrophysics Data System (ADS)
Johnson, A. F.; Moreno-Báez, M.; Giron-Nava, A.; Corominas, J.; Erisman, B.; Ezcurra, E.; Aburto-Oropeza, O.
2016-02-01
Despite global catch per unit effort having redoubled since the 1950s, the global fishing fleet is estimated to be twice the size that the oceans can sustainably support. In order to gauge the collateral impacts of fishing intensity, we must be able to estimate the spatial extent and number of fishing vessels in the oceans. The methods that currently exist are built around electronic tracking and log book systems and generally focus on industrial fisheries. The spatial extent of small-scale fisheries therefore remains elusive for many small-scale fishing fleets, even though these fisheries land the same biomass for human consumption as industrial fisheries. Current methods are data-intensive and require extensive extrapolation when estimates are made across large spatial scales. We present an accessible, spatial method of calculating the extent of small-scale fisheries based on two simple measures that are available, or at least easily estimable, in even the most data-poor fisheries: the number of boats and the local coastal human population. We demonstrate that this method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This method provides an important first step towards estimating the fishing extent of the small-scale fleet, globally.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Several methods currently exist for estimating parameter sensitivities, but they either require difficult-to-obtain second-order information or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFGS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
Wang, Frank; Pan, Kuang-Tse; Chu, Sung-Yu; Chan, Kun-Ming; Chou, Hong-Shiue; Wu, Ting-Jung; Lee, Wei-Chen
2011-04-01
An accurate preoperative estimate of the graft weight is vital to avoid small-for-size syndrome in the recipient and ensure donor safety after adult living donor liver transplantation (LDLT). Here we describe a simple method for estimating the graft volume (GV) that uses the maximal right portal vein diameter (RPVD) and the maximal left portal vein diameter (LPVD). Between June 2004 and December 2009, 175 consecutive donors undergoing right hepatectomy for LDLT were retrospectively reviewed. The GV was determined with 3 estimation methods: (1) the radiological graft volume (RGV) estimated by computed tomography (CT) volumetry; (2) the computed tomography-calculated graft volume (CGV-CT), obtained by multiplying the standard liver volume (SLV) by the RGV percentage with respect to the total liver volume derived from CT; and (3) the portal vein diameter ratio-calculated graft volume (CGV-PVDR), obtained by multiplying the SLV by the portal vein diameter ratio [PVDR; i.e., PVDR = RPVD²/(RPVD² + LPVD²)]. These values were compared to the actual graft weight (AGW), which was measured intraoperatively. The mean AGW was 633.63 ± 107.51 g, whereas the mean RGV, CGV-CT, and CGV-PVDR values were 747.83 ± 138.59, 698.21 ± 94.81, and 685.20 ± 90.88 cm³, respectively. All 3 estimation methods tended to overestimate the AGW (P < 0.001). The actual graft-to-recipient body weight ratio (GRWR) was 1.00% ± 0.19%, and the GRWRs calculated on the basis of the RGV, CGV-CT, and CGV-PVDR values were 1.19% ± 0.25%, 1.11% ± 0.22%, and 1.09% ± 0.21%, respectively. Overall, the CGV-PVDR values better correlated with the AGW and GRWR values according to Lin's concordance correlation coefficient and the Landis and Koch benchmark. In conclusion, the PVDR method is a simple estimation method that accurately predicts GVs and GRWRs in adult LDLT. Copyright © 2011 American Association for the Study of Liver Diseases.
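The PVDR formula quoted in the abstract is simple enough to state in code. The SLV itself must come from a standard-liver-volume formula; the Urata formula used below is an assumption for illustration, since the abstract does not say which SLV formula the authors used.

```python
def graft_volume_pvdr(slv_ml, rpvd_mm, lpvd_mm):
    """CGV-PVDR as quoted in the abstract:
    GV = SLV * RPVD^2 / (RPVD^2 + LPVD^2)."""
    pvdr = rpvd_mm**2 / (rpvd_mm**2 + lpvd_mm**2)
    return slv_ml * pvdr

def slv_urata(bsa_m2):
    """Urata standard liver volume, SLV (mL) = 706.2 * BSA (m^2) + 2.4;
    one common choice, assumed here for illustration only."""
    return 706.2 * bsa_m2 + 2.4

# Example: BSA 1.7 m^2, RPVD 12 mm, LPVD 10 mm
print(graft_volume_pvdr(slv_urata(1.7), 12.0, 10.0))
```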
Simon, Steven L; Hoffman, F Owen; Hofer, Eduard
2015-01-01
Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace and there is now an increased understanding about the needs and what is required for models used to estimate cohort doses (in the absence of direct measurement) to evaluate dose response. It now appears that for dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude and interrelationships of the uncertainties of model assumptions, parameters and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors, e.g., uncertainty that is specific to each subject (i.e., unshared error), and uncertainty of doses from a lack of understanding and knowledge about parameter values that are shared to varying degrees by numbers of subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation methods has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, these types of simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise not available. The dose estimation strategy presented here is a simulation method that corrects the previous deficiencies of analytical or simple Monte Carlo error propagation methods and is termed, due to its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than a single set that emerges when each individual's dose is estimated independently from other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships such that the estimated doses for members of a cohort subgroup that share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, as well as its strength, are defined by the proper separation between uncertainties shared by members of the entire cohort or members of defined cohort subsets, and uncertainties that are individual-specific and therefore unshared.
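A minimal sketch of the 2DMC structure described above: shared uncertainties are drawn once per outer realization and applied to all subjects, unshared errors are drawn independently per subject, and each outer iteration yields one possibly-true dose vector for the whole cohort, so subjects sharing a parameter realization stay properly correlated. The dose model and parameters below are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_dimensional_mc(n_vectors, subjects, draw_shared, individual_dose):
    """Two-dimensional Monte Carlo (sketch).
    Outer loop: one realization of all shared (cohort- or subgroup-wide)
    parameters. Inner loop: a dose for every subject, with unshared errors
    drawn independently per subject. Returns (n_vectors, n_subjects) doses."""
    vectors = []
    for _ in range(n_vectors):
        shared = draw_shared(rng)  # one draw of shared uncertainty per vector
        vectors.append([individual_dose(s, shared, rng) for s in subjects])
    return np.array(vectors)

# Hypothetical illustration: a shared calibration factor multiplied by
# subject-specific shielding and an unshared lognormal error.
subjects = [{"shielding": 0.8}, {"shielding": 1.2}]
draw = lambda r: r.lognormal(mean=0.0, sigma=0.3)
dose = lambda s, shared, r: shared * s["shielding"] * r.lognormal(0.0, 0.2)
print(two_dimensional_mc(3, subjects, draw, dose))
```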
NASA Astrophysics Data System (ADS)
Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao
2018-01-01
In a low-light scene, capturing color images requires either a high-gain setting or a long-exposure setting to avoid a visible flash. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared (NIR) flash image. In one novel method, the luminance component and the chroma component of the improved color image are estimated from different image sources [1]. The luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component is challenging: the method needs to generate learning data pairs, and its processes and algorithm are complex, which makes practical application difficult. In order to reduce the complexity of the luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image with coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion quality in terms of color fidelity and texture as the earlier method, while the algorithm is simpler and more practical.
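A minimal sketch of the simplified luminance estimate as described: match the NIR image to the denoised color luma's mean and standard deviation, then blend the two. The exact weighting rule of the paper is not given in the abstract, so the statistics-matching step and the fixed 0.5 weight below are assumptions.

```python
import numpy as np

def fuse_luminance(nir, y_denoised):
    """Weighted luminance estimate from a NIR flash image and the luma of the
    denoised color image (sketch). Both inputs are float arrays in [0, 1].
    The NIR image is first matched to the color luma's mean and standard
    deviation, then the two are blended."""
    nir_matched = (nir - nir.mean()) / (nir.std() + 1e-8)
    nir_matched = nir_matched * y_denoised.std() + y_denoised.mean()
    w = 0.5  # weight on the sharper, less noisy NIR component (a free choice)
    return np.clip(w * nir_matched + (1.0 - w) * y_denoised, 0.0, 1.0)
```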
Chen, Shuo; Ong, Yi Hong; Lin, Xiaoqian; Liu, Quan
2015-01-01
Raman spectroscopy has shown great potential in biomedical applications. However, intrinsically weak Raman signals cause slow data acquisition, especially in Raman imaging. This problem can be overcome by narrow-band Raman imaging followed by spectral reconstruction. Our previous study has shown that Raman spectra free of fluorescence background can be reconstructed from narrow-band Raman measurements using traditional Wiener estimation. However, fluorescence-free Raman spectra are only available from sophisticated Raman setups capable of fluorescence suppression. The reconstruction of Raman spectra with fluorescence background from narrow-band measurements is much more challenging due to the significant variation in fluorescence background. In this study, two advanced Wiener estimation methods, i.e., modified Wiener estimation and sequential weighted Wiener estimation, were optimized to achieve this goal. Both spontaneous Raman spectra and surface-enhanced Raman spectra were evaluated. Compared with traditional Wiener estimation, the two advanced methods showed significant improvement in the reconstruction of spontaneous Raman spectra. However, traditional Wiener estimation can work as effectively as the advanced methods for SERS spectra while being much faster. The wise selection of these methods would enable accurate Raman reconstruction in a simple Raman setup without the function of fluorescence suppression for fast Raman imaging.
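Traditional Wiener estimation, the baseline referenced here, has a closed form. A minimal numpy sketch, assuming a known narrow-band response matrix F and a set of training spectra from which the spectral autocorrelation is learned (the modified and sequential weighted variants in the paper build on this but are not shown):

```python
import numpy as np

def wiener_estimation_matrix(F, training_spectra, noise_var=0.0):
    """Traditional Wiener estimation (sketch): reconstruct a full spectrum s
    from narrow-band measurements r = F s + n via s_hat = W r, with
    W = Cs F^T (F Cs F^T + Cn)^(-1), Cs the autocorrelation of the training
    spectra and Cn = noise_var * I."""
    S = np.asarray(training_spectra)   # shape (n_samples, n_wavelengths)
    Cs = S.T @ S / S.shape[0]          # spectral autocorrelation
    Cn = noise_var * np.eye(F.shape[0])
    return Cs @ F.T @ np.linalg.inv(F @ Cs @ F.T + Cn)

# Usage: W = wiener_estimation_matrix(F, S_train); s_hat = W @ r
```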
A simple method for estimating potential relative radiation (PRR) for landscape-vegetation analysis.
Kenneth B. Jr. Pierce; Todd Lookingbill; Dean Urban
2005-01-01
Radiation is one of the primary influences on vegetation composition and spatial pattern. Topographic orientation is often used as a proxy for relative radiation load due to its effects on evaporative demand and local temperature. Common methods for incorporating this information (i.e., site measures of slope and aspect) fail to include daily or annual changes in solar...
Postmodeling Sensitivity Analysis to Detect the Effect of Missing Data Mechanisms
ERIC Educational Resources Information Center
Jamshidian, Mortaza; Mata, Matthew
2008-01-01
Incomplete or missing data is a common problem in almost all areas of empirical research. It is well known that simple and ad hoc methods such as complete case analysis or mean imputation can lead to biased and/or inefficient estimates. The method of maximum likelihood works well; however, when the missing data mechanism is not one of missing…
Polarimetric, Two-Color, Photon-Counting Laser Altimeter Measurements of Forest Canopy Structure
NASA Technical Reports Server (NTRS)
Harding, David J.; Dabney, Philip W.; Valett, Susan
2011-01-01
Laser altimeter measurements of forest stands with distinct structures and compositions have been acquired at 532 nm (green) and 1064 nm (near-infrared) wavelengths and parallel and perpendicular polarization states using the Slope Imaging Multi-polarization Photon Counting Lidar (SIMPL). The micropulse, single photon ranging measurement approach employed by SIMPL provides canopy structure measurements with high vertical and spatial resolution. Using a height distribution analysis method adapted from conventional, 1064 nm, full-waveform lidar remote sensing, the sensitivity of two parameters commonly used for above-ground biomass estimation are compared as a function of wavelength. The results for the height of median energy (HOME) and canopy cover are for the most part very similar, indicating biomass estimations using lidars operating at green and near-infrared wavelengths will yield comparable estimates. The expected detection of increasing depolarization with depth into the canopies due to volume multiple-scattering was not observed, possibly due to the small laser footprint and the small detector field of view used in the SIMPL instrument. The results of this work provide pathfinder information for NASA's ICESat-2 mission that will employ a 532 nm, micropulse, photon counting laser altimeter.
Range estimation of passive infrared targets through the atmosphere
NASA Astrophysics Data System (ADS)
Cho, Hoonkyung; Chun, Joohwan; Seo, Doochun; Choi, Seokweon
2013-04-01
Target range estimation in modern combat systems is traditionally based on radar and active sonar. However, jamming signals tremendously degrade the performance of such active sensor devices. We introduce a simple target range estimation method, and the fundamental limits of the proposed method, based on an atmospheric propagation model. Since passive infrared (IR) sensors measure IR signals radiating from objects at different wavelengths, this method is robust against electromagnetic jamming. The measured target radiance at the IR sensor for each wavelength depends on the emissive properties of the target material and various attenuation factors (i.e., the distance between sensor and target and atmospheric parameters). MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the results from MODTRAN and atmospheric propagation-based modeling, the target range can be estimated. To analyze the proposed method's performance statistically, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao lower bound (CRLB) via the probability density function of the measured radiance. We also compare the CRLB and the variance of the MLE using Monte Carlo simulation.
May, S L; May, W A; Bourdoux, P P; Pino, S; Sullivan, K M; Maberly, G F
1997-05-01
The measurement of urinary iodine in population-based surveys provides a biological indicator of the severity of iodine-deficiency disorders. We describe the steps performed to validate a simple, inexpensive, manual urinary iodine acid digestion method, and compare the results using this method with those of other urinary iodine methods. Initially, basic performance characteristics were evaluated: the average recovery of added iodine was 100.4 +/- 8.7% (mean +/- SD), within-assay precision (CV) over the assay range 0-0.95 μmol/L (0-12 micrograms/dL) was < 6%, between-assay precision over the same range was < 12%, and assay sensitivity was 0.05 μmol/L (0.6 microgram/dL). There were no apparent effects on the method by thiocyanate, a known interfering substance. In a comparison with five other methods performed in four different laboratories, samples were collected to test the method performance over a wide range of urinary iodine values (0.04-3.7 μmol/L, or 0.5-47 micrograms/dL). There was a high correlation between all methods and the interpretation of the results was consistent. We conclude that the simple, manual acid digestion method is suitable for urinary iodine analysis.
Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments
NASA Astrophysics Data System (ADS)
Berk, Mario; Špačková, Olga; Straub, Daniel
2017-12-01
The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
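FORM, the reliability tool named above, is commonly implemented with the HLRF iteration. A minimal sketch on a hypothetical linear limit state in standard normal space; the transformation of the rainfall-runoff variables into that space, which is the substantive part of the paper's method, is assumed already done.

```python
import numpy as np
from scipy.stats import norm

def form_hlrf(g, grad_g, n_dim, tol=1e-6, max_iter=100):
    """First-Order Reliability Method via the HLRF iteration (sketch).
    g, grad_g: limit-state function and its gradient in standard normal
    space (failure corresponds to g <= 0). Returns the reliability index
    beta, the failure probability Phi(-beta), and the design point u*."""
    u = np.zeros(n_dim)
    for _ in range(max_iter):
        grad = grad_g(u)
        u_new = (grad @ u - g(u)) * grad / (grad @ grad)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)
    return beta, norm.cdf(-beta), u

# Hypothetical limit state (illustrative only): g(u) = 5 - u1 - 2*u2,
# e.g. peak runoff exceeding channel capacity. beta = 5/sqrt(5) ≈ 2.236.
g = lambda u: 5.0 - u[0] - 2.0 * u[1]
grad = lambda u: np.array([-1.0, -2.0])
print(form_hlrf(g, grad, 2))
```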
Naimi, Ashley I; Cole, Stephen R; Kennedy, Edward H
2017-04-01
Robins' generalized methods (g methods) provide consistent estimates of contrasts (e.g. differences, ratios) of potential outcomes under a less restrictive set of identification conditions than do standard regression methods (e.g. linear, logistic, Cox regression). Uptake of g methods by epidemiologists has been hampered by limitations in understanding both conceptual and technical details. We present a simple worked example that illustrates basic concepts, while minimizing technical complications. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
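In the same spirit as the authors' worked example, here is a minimal sketch of one g method, parametric g-computation for a single point treatment, on hypothetical data. This illustrates the general idea only; it is not the specific example from the paper.

```python
import numpy as np

# Hypothetical point-treatment data: confounder L, treatment A, outcome Y.
L = np.array([0, 0, 0, 0, 1, 1, 1, 1])
A = np.array([0, 0, 1, 1, 0, 0, 1, 1])
Y = np.array([0, 1, 0, 1, 1, 1, 0, 1])

# g-computation (parametric g-formula): model E[Y | A, L] (here a saturated
# linear model with an A*L interaction), then standardize over the observed
# distribution of L with A set to 1 and to 0 for everyone.
X = np.column_stack([np.ones_like(A), A, L, A * L])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

def std_mean(a):
    """Standardized mean outcome had everyone received treatment level a."""
    Xa = np.column_stack([np.ones_like(L), np.full_like(L, a), L, a * L])
    return (Xa @ beta).mean()

print("causal risk difference estimate:", std_mean(1) - std_mean(0))
```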
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahl, D.E.; Jakowatz, C.V. Jr.; Ghiglia, D.C.
1991-01-01
Autofocus methods in SAR and self-survey techniques in SONAR have a common mathematical basis in that they both involve estimation and correction of phase errors introduced by sensor position uncertainties. Time delay estimation and correlation methods have been shown to be effective in solving the self-survey problem for towed SONAR arrays. Since it can be shown that platform motion errors introduce similar time-delay estimation problems in SAR imaging, the question arises as to whether such techniques could be effectively employed for autofocus of SAR imagery. With a simple mathematical model for motion errors in SAR, we will show why such correlation/time-delay techniques are not nearly as effective as established SAR autofocus algorithms such as phase gradient autofocus or sub-aperture based methods. This analysis forms an important bridge between signal processing methodologies for SAR and SONAR. 5 refs., 4 figs.
Wall proximity corrections for hot-wire readings in turbulent flows
NASA Technical Reports Server (NTRS)
Hebbar, K. S.
1980-01-01
This note describes some details of recent (successful) attempts at wall proximity corrections for hot-wire measurements performed in a three-dimensional incompressible turbulent boundary layer. A simple and quite satisfactory method of estimating wall proximity effects on hot-wire readings is suggested.
A New Method of Estimating Atomic Charges by Electronegativity Equilibration.
ERIC Educational Resources Information Center
Smith, Derek W.
1990-01-01
Presented is a modification of Bratsch's prescription so that difficult problems are obviated while simple, familiar Pauling electronegativities are retained. Discussed are the equilibration of electronegativity in molecules, group electronegativities, coordination compounds and molecules with dative bonds, ions, and correlation with core…
Effects of practice on the Wechsler Adult Intelligence Scale-IV across 3- and 6-month intervals.
Estevis, Eduardo; Basso, Michael R; Combs, Dennis
2012-01-01
A total of 54 participants (age M = 20.9; education M = 14.9; initial Full Scale IQ M = 111.6) were administered the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) at baseline and again either 3 or 6 months later. Scores on the Full Scale IQ, Verbal Comprehension, Working Memory, Perceptual Reasoning, Processing Speed, and General Ability Indices improved approximately 7, 5, 4, 5, 9, and 6 points, respectively, and increases were similar regardless of whether the re-examination occurred over 3- or 6-month intervals. Reliable change indices (RCI) were computed using the simple difference and bivariate regression methods, providing estimated base rates of change across time. The regression method provided more accurate estimates of reliable change than did the simple difference between baseline and follow-up scores. These findings suggest that prior exposure to the WAIS-IV results in significant score increments. These gains reflect practice effects instead of genuine intellectual changes, which may lead to errors in clinical judgment.
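The two RCI variants named here have standard textbook forms; a minimal sketch, in which |RCI| > 1.96 is conventionally taken as reliable change. The simple-difference form below is the basic Jacobson-Truax version without a practice-effect adjustment, and the regression method's slope, intercept, and standard error of estimate (SEE) come from regressing retest on baseline scores in a normative sample.

```python
import numpy as np

def rci_simple_difference(x1, x2, r_xx, sd_baseline):
    """RCI via the simple-difference method:
    RCI = (X2 - X1) / SEdiff,  SEdiff = sqrt(2) * SEM,
    SEM = SD_baseline * sqrt(1 - test-retest reliability)."""
    sem = sd_baseline * np.sqrt(1.0 - r_xx)
    return (x2 - x1) / (np.sqrt(2.0) * sem)

def rci_regression(x1, x2, slope, intercept, see):
    """RCI via the bivariate regression method: compare the observed retest
    score with the score predicted from baseline, scaled by the SEE."""
    predicted = intercept + slope * x1
    return (x2 - predicted) / see
```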
Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.
Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben
2018-02-22
This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that have used SPAD as a reference device. The accuracy reached 91% for crops such as Azadirachta indica, where the reference chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in chlorophyll content estimation by using an optical arrangement that yields both the reflectance and transmittance information, while the required hardware is cheap.
Ezoe, Satoshi; Morooka, Takeo; Noda, Tatsuya; Sabin, Miriam Lewis; Koike, Soichi
2012-01-01
Men who have sex with men (MSM) are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been produced with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. An internet survey was conducted among 1,500 internet users who had registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel) they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for male acquaintances only. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes by combining an internet survey with the network scale-up method appeared to be effective in terms of rapidity, simplicity, and low cost compared with more conventional methods.
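A minimal sketch of one common variant of the network scale-up estimator used here; the mean-of-ratios prevalence estimator and the single transmission adjustment are assumptions, since the abstract does not spell out the exact estimator.

```python
import numpy as np

def network_scale_up(known_counts, group_sizes, total_pop, msm_known,
                     transmission=1.0):
    """Network scale-up estimator (sketch).
    known_counts[i][j]: acquaintances respondent i reports in known group j
    group_sizes[j]:     true size of known group j (e.g., firepersons)
    msm_known[i]:       acquaintances respondent i reports as MSM
    transmission:       adjustment for respondents not knowing an
                        acquaintance is MSM (1.0 = no adjustment)
    Personal network size: c_i = total_pop * sum_j m_ij / sum_j N_j.
    Hidden-population size: N_h = total_pop * mean(msm_i / c_i) / transmission."""
    known = np.asarray(known_counts, dtype=float)
    c = total_pop * known.sum(axis=1) / np.sum(group_sizes)
    prevalence = np.mean(np.asarray(msm_known) / c)
    return total_pop * prevalence / transmission
```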
Honest Importance Sampling with Multiple Markov Chains
Tan, Aixin; Doss, Hani; Hobert, James P.
2017-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
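For the iid case discussed at the start of the abstract, the self-normalized importance sampling estimator and its routine standard error are a few lines of numpy; the paper's contribution concerns the harder Markov chain case, which this sketch does not cover.

```python
import numpy as np

rng = np.random.default_rng(7)

def importance_sampling(f, log_pi, log_pi1, draws):
    """Self-normalized importance sampling (iid sketch): estimate E_pi[f(X)]
    from draws ~ pi1, with both densities known only up to normalizing
    constants. The standard error uses the delta method for the ratio
    estimator, valid under the usual moment conditions."""
    w = np.exp(log_pi(draws) - log_pi1(draws))  # unnormalized weights
    est = np.sum(w * f(draws)) / np.sum(w)
    z = w * (f(draws) - est)
    se = np.sqrt(np.sum(z**2)) / np.sum(w)
    return est, se

# Example: target pi = N(0,1), proposal pi1 = N(0, 2^2); E[X^2] should be 1.
draws = rng.normal(0.0, 2.0, size=100_000)
est, se = importance_sampling(lambda x: x**2,
                              lambda x: -0.5 * x**2,
                              lambda x: -0.5 * (x / 2.0)**2,
                              draws)
print(est, "+/-", se)
```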
Cullings, H M; Grant, E J; Egbert, S D; Watanabe, T; Oda, T; Nakamura, F; Yamashita, T; Fuchi, H; Funamoto, S; Marumo, K; Sakata, R; Kodama, Y; Ozasa, K; Kodama, K
2017-01-01
Individual dose estimates calculated by Dosimetry System 2002 (DS02) for the Life Span Study (LSS) of atomic bomb survivors are based on input data that specify location and shielding at the time of the bombing (ATB). A multi-year effort to improve information on survivors' locations ATB has recently been completed, along with comprehensive improvements in their terrain shielding input data and several improvements to computational algorithms used in combination with DS02 at RERF. Improvements began with a thorough review and prioritization of original questionnaire data on location and shielding that were taken from survivors or their proxies in the period 1949-1963. Related source documents varied in level of detail, from relatively simple lists to carefully-constructed technical drawings of structural and other shielding and surrounding neighborhoods. Systematic errors were reduced in this work by restoring the original precision of map coordinates that had been truncated due to limitations in early data processing equipment and by correcting distortions in the old (WWII-era) maps originally used to specify survivors' positions, among other improvements. Distortion errors were corrected by aligning the old maps and neighborhood drawings to orthophotographic mosaics of the cities that were newly constructed from pre-bombing aerial photographs. Random errors that were reduced included simple transcription errors and mistakes in identifying survivors' locations on the old maps. Terrain shielding input data that had been originally estimated for limited groups of survivors using older methods and data sources were completely re-estimated for all survivors using new digital terrain elevation data. Improvements to algorithms included a fix to an error in the DS02 code for coupling house and terrain shielding, a correction for elevation at the survivor's location in calculating angles to the horizon used for terrain shielding input, an improved method for truncating high dose estimates to 4 Gy to reduce the effect of dose error, and improved methods for calculating averaged shielding transmission factors that are used to calculate doses for survivors without detailed shielding input data. Input data changes are summarized and described here in some detail, along with the resulting changes in dose estimates and a simple description of changes in risk estimates for solid cancer mortality. This and future RERF publications will refer to the new dose estimates described herein as "DS02R1 doses."
NASA Astrophysics Data System (ADS)
Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri
2018-01-01
This paper proposes an advanced state of health (SoH) estimation method for high energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used for their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on the IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from a single battery cell is able to evaluate the SoH of other batteries cycled at different cycling depths with less than 2.5% maximum error, which demonstrates the robustness of the proposed method. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be greatly reduced. The method shows great potential for practical application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
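A minimal sketch of the two computational steps the abstract names, Gaussian smoothing of the IC curve and a linear capacity-FOI regression; the smoothing width and all numerical values below are made-up assumptions, not data from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import linregress

def incremental_capacity(voltage, capacity, sigma=5):
    """Smoothed IC curve dQ/dV from a static charging curve; sigma is an
    assumed smoothing width in samples."""
    dQdV = np.gradient(capacity, voltage)
    return gaussian_filter1d(dQdV, sigma=sigma)

# Illustrative SoH model: regress capacity on the voltage position of a
# feature of interest (e.g. an IC peak) across ageing test data.
foi_positions = np.array([3.52, 3.54, 3.56, 3.58])   # V (made-up values)
capacities    = np.array([20.0, 19.4, 18.7, 18.1])   # Ah (made-up values)
fit = linregress(foi_positions, capacities)
soh_of = lambda v_peak: fit.slope * v_peak + fit.intercept
```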
Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors
Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin
2018-01-01
Disparity calculation is crucial for binocular sensor ranging. Disparity estimation based on edges is an important branch of sparse stereo matching research and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Simple matching costs are computed first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. The algorithm uses the disparity or semantic consistency constraint between the stereo images to adaptively search its parameters, which improves the robustness of the method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in these comparisons. PMID:29614028
Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.
Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun
2018-06-04
Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance with limited snapshots or from difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. First, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated into the SMV model and are approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: to uniquely identify all sources, the number of sources K must satisfy K ≤ (M⁴ − 2M³ + 7M² − 6M)/8. The proposed method suits any array geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M⁴), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
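The identifiability bound quoted above is simple enough to evaluate directly; a small sketch (the function name is ours):

```python
def max_identifiable_sources(M: int) -> int:
    """Upper bound on uniquely identifiable sources for an M-sensor array,
    per the condition in the abstract: K <= (M^4 - 2M^3 + 7M^2 - 6M) / 8."""
    return (M**4 - 2 * M**3 + 7 * M**2 - 6 * M) // 8

# e.g. a 4-element array: (256 - 128 + 112 - 24) / 8 = 27 sources
assert max_identifiable_sources(4) == 27
```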
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2007-01-01
A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials, and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.
Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.
Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J
2018-06-01
Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.
NASA Astrophysics Data System (ADS)
Kristyán, Sándor
1997-11-01
In the author's previous work (Chem. Phys. Lett. 247 (1995) 101 and Chem. Phys. Lett. 256 (1996) 229) a simple quasi-linear relationship was introduced between the number of electrons, N, participating in any molecular system and the correlation energy: −0.035(N − 1) > Ecorr[hartree] > −0.045(N − 1). This relationship was developed to estimate the correlation energy more accurately and directly in ab initio calculations by using the partial charges of atoms in the molecule, which are easily obtained after Hartree-Fock self-consistent field (HF-SCF) calculations. The method is compared to the well-known B3LYP, MP2, CCSD and G2M methods. Correlation energy estimates for negatively (−1) charged atomic ions are also reported.
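The quoted bracket is trivial to compute; a sketch (function name and the example N are ours):

```python
def ecorr_bracket(n_electrons: int) -> tuple[float, float]:
    """Bracket for the correlation energy (hartree) from the quasi-linear
    relationship in the abstract: -0.035(N-1) > Ecorr > -0.045(N-1)."""
    return (-0.035 * (n_electrons - 1), -0.045 * (n_electrons - 1))

upper, lower = ecorr_bracket(10)   # N = 10 electrons: Ecorr in (-0.405, -0.315)
```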
The 'double headed slug flap': a simple technique to reconstruct large helical rim defects.
Masud, D; Tzafetta, K
2012-10-01
Reconstructing partial defects of the ear can be challenging, balancing recreation of the details of the ear against scarring, morbidity and the number of surgical stages. Common causes of ear defects are human bites, tumour excision and burn injuries. Reconstructing defects of the ear with tube pedicled flaps and other local flaps requires an accurate measurement of the size of the defect, with little room for error, particularly underestimation. We present a simple method of reconstruction for partial defects of the ear using a two-stage technique with post-auricular transposition flaps. This tolerates under- or overestimation of defect size, permitting accurate tissue usage and good aesthetic outcomes. Copyright © 2012 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Life extending control: An interdisciplinary engineering thrust
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control (LEC) is introduced. Possible extensions to the cyclic damage prediction approach are presented based on the identification of a model from elementary forms. Several candidate elementary forms are presented. These extensions will result in a continuous or differential form of the damage prediction model. Two possible approaches to the LEC based on the existing cyclic damage prediction method, the measured variables LEC and the estimated variables LEC, are defined. Here, damage estimates or measurements would be used directly in the LEC. A simple hydraulic actuator driven position control system example is used to illustrate the main ideas behind LEC. Results from a simple hydraulic actuator example demonstrate that overall system performance (dynamic plus life) can be maximized by accounting for component damage in the control design.
A simple model to estimate the impact of sea-level rise on platform beaches
NASA Astrophysics Data System (ADS)
Taborda, Rui; Ribeiro, Mónica Afonso
2015-04-01
Estimates of future beach evolution in response to sea-level rise are needed to assess coastal vulnerability. A research gap is identified: adequate predictive methods for platform beaches are lacking. This work describes a simple model to evaluate the effects of sea-level rise on platform beaches that relies on the conservation of beach sand volume and assumes an invariant beach profile shape. In closed systems, when compared with the Inundation Model, results show larger retreats; the differences are greater for beaches with wide berms and where the shore platform develops at shallow depths. Application of the proposed model to the Cascais (Portugal) beaches, using 21st century sea-level rise scenarios, shows that there will be a significant reduction in beach width.
Data-driven outbreak forecasting with a simple nonlinear growth model
Lega, Joceline; Brown, Heidi E.
2016-01-01
Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. PMID:27770752
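EpiGro itself is not reproduced here, but a minimal sketch of the kind of simple growth-model fit the abstract describes, fitting a logistic curve to cumulative case counts to read off outbreak size and peak timing, illustrates the idea; all data and parameter values below are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative cases under simple logistic growth: K is the ultimate
    size, r the growth rate, t0 the inflection (peak-incidence) time."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical cumulative case counts; real use would take surveillance data.
t = np.arange(30, dtype=float)
cases = logistic(t, 1000, 0.4, 15) + np.random.default_rng(1).normal(0, 10, 30)
(K, r, t0), _ = curve_fit(logistic, t, cases, p0=[2 * cases[-1], 0.5, t.mean()])
# K estimates the order of magnitude of the outbreak size, t0 its peak time.
```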
Southern Forestry Smoke Management Guidebook
Hugh E. Mobley [senior compiler]
1976-01-01
A system for predicting and modifying smoke concentrations from prescription fires is introduced. While limited to particulate matter and the more typical southern fuels, the system is for both simple and complex applications. Forestry smoke constituents, variables affecting smoke production and dispersion, and new methods for estimating available fuel are presented....
AN ENVIRONMENTAL SIMULATION MODEL FOR TRANSPORT AND FATE OF MERCURY IN SMALL RURAL CATCHMENTS
The development of an extensively modified version of the environmental model GLEAMS to simulate fate and transport of mercury in small catchments is presented. Methods for parameter estimation are proposed and in some cases simple relationships for mercury processes are derived....
ENVIRONMENTAL AUDITING: AN INTEGRATED ENVIRONMENTAL ASSESSMENT OF THE U.S. MID-ATLANTIC REGION
Many of today's environmental problems are regional in scope and their effects overlap and interact. We developed a simple method to provide an integrated assessment of environmental conditions and estimate cumulative impacts across a large region, by combining data on land-cove...
INFLUENCE OF PEAT ON FENTON OXIDATION
A diagnostic probe was used to estimate the activity of Fenton-derived hydroxyl radicals (•OH), reaction kinetics, and oxidation efficiency in batch suspensions comprised of silica sand, crushed goethite (α-FeOOH) ore, peat, and H2O2 (0.13 mM). A simple method of kinetic analysi...
Gjerde, Hallvard; Verstraete, Alain
2010-02-25
To study several methods for estimating the prevalence of high blood concentrations of tetrahydrocannabinol and amphetamine in a population of drug users by analysing oral fluid (saliva). Five methods were compared, including simple calculation procedures dividing the drug concentrations in oral fluid by average or median oral fluid/blood (OF/B) drug concentration ratios or linear regression coefficients, and more complex Monte Carlo simulations. Populations of 311 cannabis users and 197 amphetamine users from the Rosita-2 Project were studied. The results of a feasibility study suggested that the Monte Carlo simulations might give better accuracy than the simple calculations if good data on OF/B ratios are available. When using only 20 randomly selected OF/B ratios, a Monte Carlo simulation gave the best accuracy but not the best precision. Dividing by the OF/B regression coefficient gave acceptable accuracy and precision, and was therefore the best method. None of the methods gave acceptable accuracy when the prevalence of high blood drug concentrations was less than 15%. Dividing the drug concentration in oral fluid by the OF/B regression coefficient gave an acceptable estimate of high blood drug concentrations in a population, and may therefore give valuable additional information on possible drug impairment, e.g. in roadside surveys of drugs and driving. If good data on the distribution of OF/B ratios are available, a Monte Carlo simulation may give better accuracy. 2009 Elsevier Ireland Ltd. All rights reserved.
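A sketch of the two estimation styles compared in the abstract, the simple division by an OF/B coefficient and a Monte Carlo variant that resamples OF/B ratios; every numerical value below (coefficient, cutoff, distributions) is an illustrative assumption, not Rosita-2 data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical oral-fluid THC concentrations for a surveyed population (ng/mL)
of_conc = rng.lognormal(mean=3.0, sigma=1.0, size=311)

# Simple calculation: divide by an assumed OF/B regression coefficient.
OF_B_COEF = 15.0
blood_est = of_conc / OF_B_COEF
prevalence_simple = np.mean(blood_est > 5.0)   # share above a made-up cutoff

# Monte Carlo variant: resample observed OF/B ratios to propagate their spread.
of_b_ratios = rng.lognormal(mean=np.log(15.0), sigma=0.5, size=20)
draws = of_conc[:, None] / rng.choice(of_b_ratios, size=(311, 1000))
prevalence_mc = np.mean(draws > 5.0)
```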
Evaluation of an empirical monitor output estimation in carbon ion radiotherapy.
Matsumura, Akihiko; Yusa, Ken; Kanai, Tatsuaki; Mizota, Manabu; Ohno, Tatsuya; Nakano, Takashi
2015-09-01
A conventional broad beam method is applied to carbon ion radiotherapy at Gunma University Heavy Ion Medical Center. In this method, accelerated carbon ions are scattered by various beam line devices to form the 3D dose distribution. The physical dose per monitor unit (d/MU) at the isocenter therefore depends on the beam line parameters and must be calibrated by measurement in clinical practice. This study aims to develop a calculation algorithm for d/MU using beam line parameters. Two major factors, the range shifter dependence and the field aperture effect, are measured with a PinPoint chamber in a water phantom, in a setup identical to that used for monitor calibration in clinical practice. An empirical monitor calibration method based on the measurement results is developed using a simple algorithm: a linear function expresses the range shifter dependence and a double Gaussian pencil beam distribution expresses the field aperture effect. The range shifter dependence and the field aperture effect are evaluated to have errors of 0.2% and 0.5%, respectively. The proposed method estimates d/MU to within 1% of the measurement results. Taking the measurement deviation of about 0.3% into account, this result is sufficiently accurate for clinical applications. An empirical procedure to estimate d/MU with a simple algorithm is thus established, freeing beam time for more treatments, quality assurance, and other research endeavors.
Analysis options for estimating status and trends in long-term monitoring
Bart, Jonathan; Beyer, Hawthorne L.
2012-01-01
This chapter describes methods for estimating long-term trends in ecological parameters. Other chapters in this volume discuss more advanced methods for analyzing monitoring data, but these methods may be relatively inaccessible to some readers. Therefore, this chapter provides an introduction to trend analysis for managers and biologists while also discussing general issues relevant to trend assessment in any long-term monitoring program. For simplicity, we focus on temporal trends in population size across years. We refer to the survey results for each year as the “annual means” (e.g. mean per transect, per plot, per time period). The methods apply with little or no modification, however, to formal estimates of population size, other temporal units (e.g. a month), to spatial or other dimensions such as elevation or a north–south gradient, and to other quantities such as chemical or geological parameters. The chapter primarily discusses methods for estimating population-wide parameters rather than studying variation in trend within the population, which can be examined using methods presented in other chapters (e.g. Chapters 7, 12, 20). We begin by reviewing key concepts related to trend analysis. We then describe how to evaluate potential bias in trend estimates. An overview of the statistical models used to quantify trends is then presented. We conclude by showing ways to estimate trends using simple methods that can be implemented with spreadsheets.
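For the simplest case the chapter closes with, a linear trend in annual means, a spreadsheet-equivalent sketch; the annual values below are invented for illustration.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical annual means (e.g. mean count per transect, by year)
years = np.arange(2001, 2013)
annual_means = np.array([14.2, 13.8, 13.1, 13.5, 12.6, 12.9,
                         12.1, 11.8, 12.0, 11.2, 10.9, 10.7])

# Trend estimate: slope of the regression of annual means on year.
trend = linregress(years, annual_means)
print(f"trend = {trend.slope:.2f} per year (p = {trend.pvalue:.3f})")
```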
ERIC Educational Resources Information Center
Feldt, Leonard S.
2011-01-01
This article presents a simple, computer-assisted method of determining the extent to which increases in reliability increase the power of the "F" test of equality of means. The method uses a derived formula that relates the changes in the reliability coefficient to changes in the noncentrality of the relevant "F" distribution. A readily available…
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-04-01
Model based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first and second order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, then it was applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.
Berger, Lawrence M; Bruch, Sarah K; Johnson, Elizabeth I; James, Sigrid; Rubin, David
2009-01-01
This study used data on 2,453 children aged 4-17 from the National Survey of Child and Adolescent Well-Being and 5 analytic methods that adjust for selection factors to estimate the impact of out-of-home placement on children's cognitive skills and behavior problems. Methods included ordinary least squares (OLS) regressions and residualized change, simple change, difference-in-difference, and fixed effects models. Models were estimated using the full sample and a matched sample generated by propensity scoring. Although results from the unmatched OLS and residualized change models suggested that out-of-home placement is associated with increased child behavior problems, estimates from models that more rigorously adjust for selection bias indicated that placement has little effect on children's cognitive skills or behavior problems.
Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data
CHEN, SHUAI; ZHAO, HONGWEI
2013-01-01
Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
Beamer, P.I.; Sugeng, A. J.; Kelly, M.D.; Lothrop, N.; Klimecki, W.; Wilkinson, S.T.; Loh, M.
2014-01-01
Mine tailings are a source of metal exposures in many rural communities. Multiple air samples are necessary to assess the extent of exposures and factors contributing to these exposures. However, air sampling equipment is costly and requires trained personnel to obtain measurements, limiting the number of samples that can be collected. Simple, low-cost methods are needed to allow for increased sample collection. The objective of our study was to assess if dust fall filters can serve as passive air samplers and be used to characterize potential exposures in a community near contaminated mine tailings. We placed filters in cylinders, concurrently with active indoor air samplers, in 10 occupied homes. We calculated an estimated flow rate by dividing the mass on each dust fall filter by the bulk air concentration and the sampling duration. The mean estimated flow rate for dust fall filters was significantly different during sampling periods with precipitation. The estimated flow rate was used to estimate metal concentration in the air of these homes, as well as in 31 additional homes in another rural community impacted by contaminated mine tailings. The estimated air concentrations had a significant linear association with the measured air concentrations for beryllium, manganese and arsenic (p<0.05), whose primary source in indoor air is resuspended soil from outdoors. In the second rural community, our estimated metal concentrations in air were comparable to active air sampling measurements taken previously. This passive air sampler is a simple low-cost method to assess potential exposures near contaminated mining sites. PMID:24469149
Improvements in Spectrum's fit to program data tool.
Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John
2017-04-01
The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Traino, A. C.; Xhafa, B.; Sezione di Fisica Medica, U.O. Fisica Sanitaria, Azienda Ospedaliero-Universitaria Pisana, via Roma n. 67, Pisa 56125
2009-04-15
One of the major challenges to the more widespread use of individualized, dosimetry-based radioiodine treatment of Graves' disease is the development of a reasonably fast, simple, and cost-effective method to measure thyroidal ¹³¹I kinetics in patients. Even though the fixed activity administration method does not optimize the therapy, giving often too high or too low a dose to the gland, it provides effective treatment for almost 80% of patients without consuming excessive time and resources. In this article two simple methods for the evaluation of the kinetics of ¹³¹I in the thyroid gland are presented and discussed. The first is based on two measurements 4 and 24 h after a diagnostic ¹³¹I administration and the second on one measurement 4 h after such an administration and a linear correlation between this measurement and the maximum uptake in the thyroid. The thyroid absorbed dose calculated by each of the two methods is compared to that calculated by a more complete ¹³¹I kinetics evaluation, based on seven thyroid uptake measurements for 35 patients at various times after the therapy administration. There are differences in the thyroid absorbed doses between those derived by each of the two simpler methods and the "reference" value (derived by more complete uptake measurements following the therapeutic ¹³¹I administration), with 20% median and 40% 90-percentile differences for the first method (i.e., based on two thyroid uptake measurements at 4 and 24 h after ¹³¹I administration) and 25% median and 45% 90-percentile differences for the second method (i.e., based on one measurement at 4 h post-administration). Predictably, although relatively fast and convenient, neither of these simpler methods appears to be as accurate as thyroid dose estimates based on more complete kinetic data.
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two observer, measurement error only problem.
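A minimal sketch of a weighted least squares solve with a residual-based covariance of the general kind the abstract describes; the paper's exact reinterpretation is not reproduced, and the construction below is only an assumed illustration.

```python
import numpy as np

def wls_with_empirical_cov(A, y, W):
    """Weighted least squares x_hat = (A^T W A)^{-1} A^T W y, plus a
    residual-based covariance mapped through the estimator gain."""
    G = np.linalg.inv(A.T @ W @ A) @ (A.T @ W)   # estimator gain
    x_hat = G @ y
    r = y - A @ x_hat                            # post-fit residuals
    P_emp = G @ np.outer(r, r) @ G.T             # empirical state error covariance
    return x_hat, P_emp

# Illustrative two-state, four-measurement problem (made-up numbers)
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 2.0]])
y = np.array([1.02, 2.95, 2.01, 5.10])
W = np.diag([1.0, 0.5, 1.0, 0.5])
x_hat, P_emp = wls_with_empirical_cov(A, y, W)
```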
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
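The paper's software tools are not reproduced here, but the classic correction it concerns fits in two lines; a sketch, with the reliability value an assumed input from a repeated-measurement study.

```python
def correct_for_dilution(beta_observed: float, reliability: float) -> float:
    """Regression dilution correction for the slope in simple linear
    regression: beta_true ~= beta_observed / lambda, where lambda is the
    reliability (e.g. intraclass correlation) of the risk factor."""
    return beta_observed / reliability

# e.g. an observed slope of 0.30 with reliability 0.6 from repeat measures
beta_corrected = correct_for_dilution(0.30, 0.6)   # -> 0.50
```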
The use of subjective rating of exertion in Ergonomics.
Capodaglio, P
2002-01-01
In Ergonomics, the use of psychophysical methods for subjectively evaluating work tasks and determining acceptable loads has become more common. Daily activities at the work site are studied not only with physiological methods but also with perceptual estimation and production methods. The psychophysical methods are of special interest in field studies of short-term work tasks for which valid physiological measurements are difficult to obtain. The exertion, difficulty and fatigue that a person perceives in a certain work situation are an important sign of a real or objective load. Measurement of the physical load with physiological parameters alone is not sufficient, since it does not take into consideration the particular difficulty of the performance or the capacity of the individual. It is often difficult to understand, from technical and biomechanical analyses, the seriousness of a difficulty that a person experiences. Physiological determinations give important information, but they may be insufficient because of the technical problems in obtaining relevant but simple measurements for short-term activities or activities involving special movement patterns. Perceptual estimations using Borg's scales give important information because the severity of a task's difficulty depends on the individual doing the work. Observation is the simplest and most widely used means of assessing job demands. Other evaluations that complement observation include: indirect estimation of energy expenditure based on prediction equations or direct measurement of oxygen consumption; measurements of forces, angles and biomechanical parameters; and measurements of physiological and neurophysiological parameters during tasks. It is recommended that assessments of occupational activities include ratings of perceived exertion and integrate these measurements of intensity with measurements of the activity's type, duration and frequency. A better estimate of the degree of physical activity of individuals can thus be obtained.
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.; Dudley, Kenneth
2003-01-01
A simple method is presented to estimate the complex dielectric constants of the individual layers of a multilayer composite material. Using the MatLab Optimization Tools, simple MatLab scripts are written to search for the electric properties of the individual layers so as to match the measured and calculated S-parameters. Single-layer composite samples of materials such as Bakelite, Nomex Felt, Fiber Glass, Woven Composite B and G, Nano Material #0, Cork, and Garlock, in different thicknesses, are tested using the present approach. With the sample thicknesses treated as unknowns, the approach is shown to work well in estimating both the dielectric constants and the thicknesses. A number of two-layer composite materials formed by various combinations of the above individual materials are also tested. However, the present approach could not provide estimates close to the true values when the thicknesses of the individual layers were assumed to be unknown. This is attributed to the difficulty of modelling the airgaps present between the layers during the S-parameter measurements. A few examples of three-layer composites are also presented.
Dynamic properties of composite cemented clay.
Cai, Yuan-Qiang; Liang, Xu
2004-03-01
In this work, the dynamic properties of composite cemented clay were studied over a wide range of strains, considering the effects of different mixing ratios and of changes in confining pressure, through dynamic triaxial tests. A simple and practical method to estimate the dynamic elastic modulus and damping ratio is proposed in this paper, and a related empirical normalized formula is also presented. The results provide useful guidelines for preliminary estimation of the cement requirements to improve the dynamic properties of clays.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Huafeng; Colabello, Diane M.; Sklute, Elizabeth C.
The absolute absorption coefficient, α(E), is a critical design parameter for devices using semiconductors for light harvesting associated with renewable energy production, both for classic technologies such as photovoltaics and for emerging technologies such as direct solar fuel production. While α(E) is well-known for many classic simple semiconductors used in photovoltaic applications, the absolute values of α(E) are typically unknown for the complex semiconductors being explored for solar fuel production due to the absence of the single crystals or crystalline epitaxial films that are needed for conventional methods of determining α(E). In this work, a simple self-referenced method for estimating both the refractive indices, n(E), and absolute absorption coefficients, α(E), of loose powder samples using diffuse reflectance data is demonstrated. In this method, the sample refractive index can be deduced by refining n to maximize the agreement between the relative absorption spectrum calculated from bidirectional reflectance data (through a Hapke transform, which depends on n) and that calculated from integrating-sphere diffuse reflectance data (through a Kubelka–Munk transform, which does not depend on n). This new method can be used to quickly screen the suitability of emerging semiconductor systems for light-harvesting applications. The effectiveness of the approach is tested using the simple classic semiconductors Ge and Fe2O3 as well as the complex semiconductors La2MoO5 and La4Mo2O11. The method is shown to work well for powders with a narrow size distribution (exemplified by Fe2O3) and to be ineffective for semiconductors with a broad size distribution (exemplified by Ge). As such, it provides a means for rapidly estimating the absolute optical properties of complex solids which are only available as loose powders.
NASA Astrophysics Data System (ADS)
Tripathy, Madhumita; Raman, Mini; Chauhan, Prakash
2015-10-01
Photosynthetically available radiation (PAR) is an important variable for radiation budget and for marine and terrestrial ecosystem models. OCEANSAT-1 Ocean Color Monitor (OCM) PAR was estimated using two different methods under both clear and cloudy sky conditions. In the first approach, aerosol optical depth (AOD) and cloud optical depth (COD) were estimated from OCEANSAT-1 OCM TOA (top-of-atmosphere) radiance data on a pixel by pixel basis, and PAR was estimated from the extraterrestrial solar flux for fifteen spectral bands using a radiative transfer model. The second approach used the TOA radiances measured by OCM in the PAR spectral range to compute PAR, and also included surface albedo and cloud albedo as inputs. Comparison of the OCEANSAT-1 OCM PAR at noon with in situ measured PAR shows that the root mean square difference on daily time scales was 5.82% for the first method and 7.24% for the second. The results indicate that the methodology adopted to estimate PAR from OCEANSAT-1 OCM can produce reasonably accurate PAR estimates over the tropical Indian Ocean region. This approach can be extended to OCEANSAT-2 OCM and future OCEANSAT-3 OCM data for operational estimation of PAR for regional marine ecosystem applications.
Performance of Trajectory Models with Wind Uncertainty
NASA Technical Reports Server (NTRS)
Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.
2009-01-01
Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty relies on a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread amongst the various RUC time-lagged ensemble forecasts. This proof of concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed ensemble-forecast accuracy relationship. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. Results for a set of simulated flights indicate this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members), allowing identification of regional variations in uncertainty.
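A minimal sketch of the spread computation such a time-lagged ensemble implies; the array shapes and the choice of standard deviation as the spread measure are our assumptions.

```python
import numpy as np

def time_lagged_spread(forecasts):
    """Uncertainty proxy from a time-lagged ensemble: the spread (standard
    deviation) across forecasts valid at the same time but issued from
    successive hourly model cycles. `forecasts` is (n_members, ny, nx) of,
    e.g., u-wind at one level."""
    forecasts = np.asarray(forecasts)
    return forecasts.std(axis=0)   # gridded wind-uncertainty estimate
```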
Impact of the Fano Factor on Position and Energy Estimation in Scintillation Detectors.
Bora, Vaibhav; Barrett, Harrison H; Jha, Abhinav K; Clarkson, Eric
2015-02-01
The Fano factor for an integer-valued random variable is defined as the ratio of its variance to its mean. Light from various scintillation crystals has been reported to have Fano factors from sub-Poisson (Fano factor < 1) to super-Poisson (Fano factor > 1). For a given mean, a smaller Fano factor implies a smaller variance and thus less noise. We investigated whether lower noise in the scintillation light results in better spatial and energy resolution. The impact of the Fano factor on the estimation of the position of interaction and the energy deposited in simple gamma-camera geometries is assessed by two methods: calculating the Cramér-Rao bound, and estimating the variance of a maximum likelihood estimator. The methods are consistent with each other and indicate that when estimating the position of interaction and the energy deposited by a gamma-ray photon, the Fano factor of a scintillator does not affect the spatial resolution. A smaller Fano factor results in a better energy resolution.
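The defining ratio is a one-liner to compute from sample counts; a sketch (names are ours):

```python
import numpy as np

def fano_factor(counts):
    """Fano factor of an integer-valued sample: variance over mean.
    < 1 indicates sub-Poisson statistics, > 1 super-Poisson."""
    counts = np.asarray(counts)
    return counts.var(ddof=1) / counts.mean()
```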
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, K; Wong, M; Ng, Y
Purpose: Interventional cardiac procedures utilize frequent fluoroscopy and cineangiography, which impose considerable radiation risk on patients, especially pediatric patients. Accurate calculation of effective dose is important in order to estimate cancer risk over the rest of their lifetime. This study evaluates the difference between effective dose calculated by Monte Carlo simulation and that estimated by locally-derived conversion factors (CF-local) and by commonly quoted conversion factors from Karambatsakidou et al (CF-K). Methods: Effective doses (E) of 12 pediatric patients, aged between 2.5 and 19 years old, who had undergone interventional cardiac procedures, were calculated using the PCXMC-2.0 software. The tube spectrum, irradiation geometry, exposure parameters and dose-area product (DAP) of each projection were included in the software calculation. Effective doses for each patient were also estimated by two methods: 1) CF-local: a conversion factor derived locally by generalizing the results of the 12 patients, multiplied by the DAP of each patient, gives E-local. 2) CF-K: the appropriate factor from the above-mentioned literature, multiplied by the DAP of each patient, gives E-K. Results: The means of E, E-local and E-K were 16.01 mSv, 16.80 mSv and 22.25 mSv respectively. A deviation of −29.35% to +34.85% between E and E-local, and a greater deviation of −28.96% to +60.86% between E and E-K, were observed. E-K overestimated the effective dose for patients aged 7.5–19. Conclusion: Estimating the radiation risk of pediatric patients with conversion factors is simple and quick. This study showed that estimation by CF-local may bear an error of 35% when compared with Monte Carlo calculation. Using conversion factors derived in other studies may result in an even greater error, of up to 60%, due to factors that are not catered for in the estimation, including patient size, projection angles, exposure parameters, tube filtration, etc. Users must be aware of these potential inaccuracies when the simple conversion method is employed.
Young-Person's Guide to Detached-Eddy Simulation Grids
NASA Technical Reports Server (NTRS)
Spalart, Philippe R.; Streett, Craig (Technical Monitor)
2001-01-01
We give the "philosophy", fairly complete instructions, a sketch, and examples of creating Detached-Eddy Simulation (DES) grids from simple to elaborate, with a priority on external flows. Although DES is not a zonal method, flow regions with widely different gridding requirements emerge, and should be accommodated as far as possible if good use of grid points is to be made. This is not unique to DES. We touch on the time-step choice, on simple pitfalls, and on tools to estimate whether a simulation is well resolved.
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
Random Item Generation Is Affected by Age
ERIC Educational Resources Information Center
Multani, Namita; Rudzicz, Frank; Wong, Wing Yiu Stephanie; Namasivayam, Aravind Kumar; van Lieshout, Pascal
2016-01-01
Purpose: Random item generation (RIG) involves central executive functioning. Measuring aspects of random sequences can therefore provide a simple method to complement other tools for cognitive assessment. We examine the extent to which RIG relates to specific measures of cognitive function, and whether those measures can be estimated using RIG…
USDA-ARS?s Scientific Manuscript database
Current methods for generating malting quality metrics have been developed largely to support commercial malting and brewing operations, providing accurate, reproducible analytical data to guide malting and brewing production. Infrastructure to support these analytical operations often involves sub...
Childhood and Adolescent Anxiety and Depression: Beyond Heritability
ERIC Educational Resources Information Center
Franic, Sanja; Middeldorp, Christel M.; Dolan, Conor V.; Ligthart, Lannie; Boomsma, Dorret I.
2010-01-01
Objective: To review the methodology of behavior genetics studies addressing research questions that go beyond simple heritability estimation and illustrate these using representative research on childhood and adolescent anxiety and depression. Method: The classic twin design and its extensions may be used to examine age and gender differences in…
Confidence bands for measured economically optimal nitrogen rates
USDA-ARS?s Scientific Manuscript database
While numerous researchers have computed economically optimal N rate (EONR) values from measured yield – N rate data, nearly all have neglected to compute or estimate the statistical reliability of these EONR values. In this study, a simple method for computing EONR and its confidence bands is descr...
A simple mathematical method to estimate ammonia emission from in-house windrowing of poultry litter
USDA-ARS?s Scientific Manuscript database
In house windrowing between flocks is an emerging sanitary management practice to partially disinfect the built-up litter in broiler houses. Windrowing litter results in high litter temperatures that can reduce the risk of transmitting pathogens to next flock. Simultaneously, this practice may also ...
Causes of Effects and Effects of Causes
ERIC Educational Resources Information Center
Pearl, Judea
2015-01-01
This article summarizes a conceptual framework and simple mathematical methods of estimating the probability that one event was a necessary cause of another, as interpreted by lawmakers. We show that the fusion of observational and experimental data can yield informative bounds that, under certain circumstances, meet legal criteria of causation.…
The Variance Normalization Method of Ridge Regression Analysis.
ERIC Educational Resources Information Center
Bulcock, J. W.; And Others
The testing of contemporary sociological theory often calls for the application of structural-equation models to data which are inherently collinear. It is shown that simple ridge regression, which is commonly used for controlling the instability of ordinary least squares regression estimates in ill-conditioned data sets, is not a legitimate…
Using a Simple Optical Rangefinder To Teach Similar Triangles.
ERIC Educational Resources Information Center
Cuicchi, Paul M.; Hutchison, Paul S.
2003-01-01
Describes how the concept of similar triangles was taught using an optical method of estimating large distances as a corresponding activity. Includes the derivation of a formula to calculate one source of measurement error and is a nice exercise in the use of the properties of similar triangles. (Author/NB)
Application of Transformations in Parametric Inference
ERIC Educational Resources Information Center
Brownstein, Naomi; Pensky, Marianna
2008-01-01
The objective of the present paper is to provide a simple approach to statistical inference using the method of transformations of variables. We demonstrate performance of this powerful tool on examples of constructions of various estimation procedures, hypothesis testing, Bayes analysis and statistical inference for the stress-strength systems.…
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Varnai, Tamas; Levy, Robert
2016-01-01
A transition zone exists between cloudy skies and clear sky, such that clouds scatter solar radiation into clear-sky regions. From a satellite perspective, it appears that clouds enhance the radiation nearby. We seek a simple method to estimate this enhancement, since it is so computationally expensive to account for all three-dimensional (3-D) scattering processes. In previous studies, we developed a simple two-layer model (2LM) that estimated the radiation scattered via cloud-molecular interactions. Here we have developed a new model to account for cloud-surface interaction (CSI). We test the models by comparing to calculations provided by full 3-D radiative transfer simulations of realistic cloud scenes. For these scenes, the Moderate Resolution Imaging Spectroradiometer (MODIS)-like radiance fields were computed from the Spherical Harmonic Discrete Ordinate Method (SHDOM), based on a large number of cumulus fields simulated by the University of California, Los Angeles (UCLA) large eddy simulation (LES) model. We find that the original 2LM model that estimates cloud-air molecule interactions accounts for 64% of the total reflectance enhancement and the new model (2LM+CSI) that also includes cloud-surface interactions accounts for nearly 80%. We discuss the possibility of accounting for cloud-aerosol radiative interactions in 3-D cloud-induced reflectance enhancement, which may explain the remaining 20% of the enhancement. Because these are simple models, these corrections can be applied to global satellite observations (e.g., MODIS) and help to reduce biases in aerosol and other clear-sky retrievals.
NASA Astrophysics Data System (ADS)
Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May
2014-11-01
Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically-meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in un-gauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.
A simple, remote, video based breathing monitor.
Regev, Nir; Wulich, Dov
2017-07-01
Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist in the market, some with vital signs monitoring capabilities, but none remote. This paper presents a simple, yet efficient, real-time method of extracting the subject's sinusoidal breathing rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked using the well-known Lucas-Kanade algorithm on a frame by frame basis. A generalized likelihood ratio test is then utilized on each of the many interest points to detect which are moving in harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
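The Pisarenko step lends itself to a compact illustration. Below is a sketch of the single-sinusoid closed form based on lag-1 and lag-2 autocorrelations (see e.g. Hayes, Statistical Digital Signal Processing); the sampling rate, the synthetic signal and all names are our assumptions, and the paper's Lucas-Kanade tracking and ML fusion are not reproduced.

```python
import numpy as np

def pisarenko_freq(x, fs):
    """Single-sinusoid Pisarenko frequency estimate (Hz) from the lag-1
    and lag-2 autocorrelations of a zero-mean signal."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r1 = np.mean(x[1:] * x[:-1])
    r2 = np.mean(x[2:] * x[:-2])
    cos_w = (r2 + np.sqrt(r2**2 + 8.0 * r1**2)) / (4.0 * r1)
    return fs * np.arccos(np.clip(cos_w, -1.0, 1.0)) / (2.0 * np.pi)

# e.g. a 0.3 Hz breathing oscillation sampled from 30 fps video motion
t = np.arange(0, 60, 1 / 30.0)
sig = np.cos(2 * np.pi * 0.3 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
bpm = 60.0 * pisarenko_freq(sig, fs=30.0)   # ~18 breaths per minute
```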
Risk-Based Sampling: I Don't Want to Weight in Vain.
Powell, Mark R
2015-12-01
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
Statistical inference for the additive hazards model under outcome-dependent sampling.
Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo
2015-09-01
Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against the simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to assess the cancer risk associated with radon exposure.
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models, and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
On impedance measurement of reinforced concrete on the surface for estimate of corroded rebar
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Yu, Jun; Harada, Yoshihisa; Iwata, Masahiro; Noguchi, Kazuhiro
2017-04-01
In health monitoring of reinforced concrete, the degree of rebar corrosion is an important parameter, but it is not easy to estimate by nondestructive testing. A few test methods, such as the half-cell method or the polarization resistance method, could be 'perfect' nondestructive methods if a wired connection to the rebar happened to be available without damaging the target concrete. Here we report experimental results showing that an impedance measurement on the surface of reinforced concrete can distinguish corroded rebar from healthy rebar. The contact electrodes on the concrete surface have a simple structure made of urethane sponge and a needle. Impedance measurements were carried out with a frequency response analyzer over a frequency range from 0.01 Hz to 1 MHz, with a typical imposed voltage amplitude of 10 V. We made concrete specimens under two different corrosion processes: in one (pre-corrosion), rebars were corroded by electrolysis in salt water before concrete casting; in the other (post-corrosion), the concrete specimens were corroded during curing. Applying the developed method to these corroded specimens shows that it is useful for estimating the corrosion level of rebars.
Craig, Eva; Reilly, John; Bland, Ruth
2013-11-01
A variety of methods are available for defining undernutrition (thinness/underweight/under-fat) and overnutrition (overweight/obesity/over-fat). The extent to which these definitions agree is unclear. The present cross-sectional study aimed to assess agreement between widely used methods of assessing nutritional status in children and adolescents, and to examine the benefit of body composition estimates. The main objective of the cross-sectional study was to assess underweight, overweight and obesity using four methods: (i) BMI-for-age using WHO (2007) reference data; (ii) BMI-for-age using Cole et al. and International Obesity Taskforce cut-offs; (iii) weight-for-age using the National Centre for Health Statistics/WHO growth reference 1977; and (iv) body fat percentage estimated by bio-impedance (body fat reference curves for children of McCarthy et al., 2006). Comparisons were made between methods using weighted kappa analyses. The study was set in rural South Africa and included individuals (n = 1519) in three age groups (school grade 1, mean age 7 years; grade 5, mean age 11 years; grade 9, mean age 15 years). In boys, prevalence of unhealthy weight status (both under- and overnutrition) was much higher at all ages with body fatness measures than with simple anthropometric proxies for body fatness; agreement between fatness and weight-based measures was fair or slight using Landis and Koch categories. In girls, prevalence of unhealthy weight status was also higher with body fatness than with proxies, although agreement between measures ranged from fair to substantial. Methods for defining under- and overnutrition should not be considered equivalent. Weight-based measures provide highly conservative estimates of unhealthy weight status, possibly more conservative in boys. Simple body composition measures may be more informative than anthropometry for nutritional surveillance of children and adolescents.
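The weighted kappa computation itself is a one-liner once two categorical classifications are in hand. A minimal sketch using scikit-learn (the labels below are synthetic, not the study data):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical 3-category nutritional status (0 = under, 1 = healthy, 2 = over)
# assigned to the same children by a BMI-based method and a body-fat method.
rng = np.random.default_rng(1)
bmi_class = rng.integers(0, 3, size=500)
noise = rng.integers(-1, 2, size=500)            # imperfect agreement
fat_class = np.clip(bmi_class + noise, 0, 2)

kappa = cohen_kappa_score(bmi_class, fat_class, weights="linear")
print(f"linearly weighted kappa: {kappa:.2f}")   # interpret with Landis-Koch bands
```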
A new method for environmental flow assessment based on basin geology: application to the Ebro basin.
2018-02-01
The determination of environmental flows is one of the commonest practical actions implemented on European rivers to promote their good ecological status. In Mediterranean rivers, groundwater inflows are a decisive factor in streamflow maintenance. This work examines the relationship between the lithological composition of the Ebro basin (Spain) and dry season flows in order to establish a model that can assist in the calculation of environmental flow rates. Due to the lack of information on the hydrogeological characteristics of the studied basin, the variable representing groundwater inflows has been estimated in a very simple way. The explanatory variable used in the proposed model is easy to calculate and is sufficiently powerful to take into account all the required characteristics. The model has a high coefficient of determination, indicating that it is accurate for the intended purpose. The advantage of this method compared to other methods is that it requires very little data and provides a simple estimate of environmental flow. It is also independent of the basin area and the river section order. The results of this research also contribute to knowledge of the variables that influence low flow periods and low flow rates on rivers in the Ebro basin.
Non-invasive estimation of dissipation from non-equilibrium fluctuations in chemical reactions.
Muy, S; Kundu, A; Lacoste, D
2013-09-28
We show how to extract an estimate of the entropy production from a sufficiently long time series of stationary fluctuations of chemical reactions. This method, which is based on recent work on fluctuation theorems, is direct, non-invasive, does not require any knowledge about the underlying dynamics and is applicable even when only partial information is available. We apply it to simple stochastic models of chemical reactions involving a finite number of states, and for this case, we study how the estimate of dissipation is affected by the degree of coarse-graining present in the input data.
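For a fully observed discrete-state trajectory, a coarse version of this estimator reduces to counting net fluxes: sigma_hat = (1/T) * sum over pairs i<j of (n_ij - n_ji) * ln(n_ij / n_ji), with n_ij the number of observed i-to-j transitions. The sketch below applies this simplified flux estimator, a stand-in for the fluctuation-theorem machinery in the paper, to a driven three-state cycle with invented rates:

```python
import numpy as np

def entropy_production_rate(traj, n_states, dt):
    """Coarse estimate of the entropy production rate (units of k_B per unit
    time) from net transition fluxes in a stationary discrete-state series."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-1], traj[1:]):
        counts[a, b] += 1
    total_time = (len(traj) - 1) * dt
    sigma = 0.0
    for i in range(n_states):
        for j in range(i + 1, n_states):
            if counts[i, j] > 0 and counts[j, i] > 0:
                sigma += (counts[i, j] - counts[j, i]) * np.log(counts[i, j] / counts[j, i])
    return sigma / total_time

# Driven three-state cycle: clockwise jumps more likely than counter-clockwise.
rng = np.random.default_rng(2)
state, traj = 0, [0]
for _ in range(200_000):
    state = (state + (1 if rng.random() < 0.6 else -1)) % 3
    traj.append(state)
print(entropy_production_rate(np.array(traj), 3, dt=1.0))  # ~0.08 k_B per step
```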
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
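As a point of reference, a plain EM-fitted Gaussian mixture already yields a density estimate in data space; the paper's contribution is performing the fit in a kernel-induced feature space. The sketch below shows only the data-space baseline with scikit-learn, on synthetic data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Bimodal 1-D data: two overlapping Gaussian clusters.
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2, 0.5, 300),
                       rng.normal(1, 1.0, 700)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
grid = np.linspace(-5, 5, 7).reshape(-1, 1)
density = np.exp(gmm.score_samples(grid))   # score_samples returns log-density
for x, p in zip(grid.ravel(), density):
    print(f"p({x:+.2f}) = {p:.3f}")
```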
NASA Astrophysics Data System (ADS)
Vespe, Francesco; Benedetto, Catia
2013-04-01
The huge number of GPS radio occultation (RO) observations currently available thanks to space missions like COSMIC, CHAMP, GRACE, TERRASAR-X, etc., has greatly encouraged research into new algorithms to extract humidity, temperature and pressure profiles of the atmosphere with ever greater precision. For humidity profiles, two different approaches have been widely tested and applied in recent years: the "Simple" and the 1DVAR methods. The Simple methods essentially determine dry refractivity profiles from temperature analysis profiles and the hydrostatic equation. The dry refractivity is then subtracted from the RO refractivity to obtain the wet component. Finally, humidity is obtained from the wet refractivity. The 1DVAR approach combines RO observations with profiles given by background models, with both terms weighted by the inverse of the covariance matrix. The advantage of "Simple" methods is that they are not affected by bias from the background models. We have proposed in the past the BPV approach to retrieve humidity, which can be classified among the "Simple" methods. The BPV approach works with dry atmospheric CIRA-Q models which depend on latitude, day of year and height. The dry CIRA-Q refractivity profile is selected by estimating the involved parameters in a nonlinear least-squares fashion, fitting RO observed bending angles through the stratosphere. The BPV, like all the other "Simple" methods, has the drawback of unphysical occurrences of negative "humidity". We therefore propose to apply a modulated weighting of the fit residuals to minimize this effect. After a proper tuning of the approach, we plan to present the results of the validation.
Determination of traces of cobalt in soils: A field method
Almond, H.
1953-01-01
The growing use of geochemical prospecting methods in the search for ore deposits has led to the development of a field method for the determination of cobalt in soils. The determination is based on the fact that cobalt reacts with 2-nitroso-1-naphthol to yield a pink compound that is soluble in carbon tetrachloride. The carbon tetrachloride extract is shaken with dilute cyanide to complex interfering elements and to remove excess reagent. The cobalt content is estimated by comparing the pink color in the carbon tetrachloride with a standard series prepared from standard solutions. The cobalt 2-nitroso-1-naphtholate system in carbon tetrachloride follows Beer's law. As little as 1 p.p.m. can be determined in a 0.1-gram sample. The method is simple and fast and requires only simple equipment. More than 40 samples can be analyzed per man-day with an accuracy within 30% or better.
A new multistage groundwater transport inverse method: presentation, evaluation, and implications
Anderman, Evan R.; Hill, Mary C.
1999-01-01
More computationally efficient methods of using concentration data are needed to estimate groundwater flow and transport parameters. This work introduces and evaluates a three‐stage nonlinear‐regression‐based iterative procedure in which trial advective‐front locations link decoupled flow and transport models. Method accuracy and efficiency are evaluated by comparing results to those obtained when flow‐ and transport‐model parameters are estimated simultaneously. The new method is evaluated as conclusively as possible by using a simple test case that includes distinct flow and transport parameters, but does not include any approximations that are problem dependent. The test case is analytical; the only flow parameter is a constant velocity, and the transport parameters are longitudinal and transverse dispersivity. Any difficulties detected using the new method in this ideal situation are likely to be exacerbated in practical problems. Monte‐Carlo analysis of observation error ensures that no specific error realization obscures the results. Results indicate that, while this, and probably other, multistage methods do not always produce optimal parameter estimates, the computational advantage may make them useful in some circumstances, perhaps as a precursor to using a simultaneous method.
Regression analysis of sparse asynchronous longitudinal data.
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P
2015-09-01
We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
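For a single subject, a time-invariant coefficient, and an identity link, the kernel-weighted estimating equation collapses to a closed-form weighted least-squares ratio over all asynchronous response-covariate pairs. The sketch below is that reduced case only (the method in the paper pools such pairs across subjects; bandwidth, kernel, and data are illustrative):

```python
import numpy as np

def kernel_weighted_beta(t_y, y, t_x, x, h):
    """Time-invariant linear coefficient from asynchronous observations:
    solve sum_ij K_h(t_yi - t_xj) * x_j * (y_i - x_j * beta) = 0 for beta."""
    K = np.exp(-0.5 * ((t_y[:, None] - t_x[None, :]) / h) ** 2)  # Gaussian kernel
    return np.sum(K * (y[:, None] * x[None, :])) / np.sum(K * x[None, :] ** 2)

# Response and covariate sampled at mismatched times; true beta = 2.
rng = np.random.default_rng(4)
t_x = np.sort(rng.uniform(0, 10, 40))
x = np.sin(t_x) + rng.normal(0, 0.1, 40)
t_y = np.sort(rng.uniform(0, 10, 30))
y = 2 * np.sin(t_y) + rng.normal(0, 0.2, 30)   # driven by the same latent process
print(kernel_weighted_beta(t_y, y, t_x, x, h=0.5))  # close to 2
```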
A Simple Visual Estimation of Food Consumption in Carnivores
Potgieter, Katherine R.; Davies-Mostert, Harriet T.
2012-01-01
Belly-size ratings or belly scores are frequently used in carnivore research as a method of rating whether and how much an animal has eaten. This method provides only a rough ordinal measure of fullness and does not quantify the amount of food an animal has consumed. Here we present a method for estimating the amount of meat consumed by individual African wild dogs Lycaon pictus. We fed 0.5 kg pieces of meat to wild dogs being temporarily held in enclosures and measured the corresponding change in belly size using lateral side photographs taken perpendicular to the animal. The ratio of belly depth to body length was positively related to the mass of meat consumed and provided a useful estimate of the consumption. Similar relationships could be calculated to determine amounts consumed by other carnivores, thus providing a useful tool in the study of feeding behaviour. PMID:22567086
Estimating population trends with a linear model
Bart, Jonathan; Collins, Brian D.; Morrison, R.I.G.
2003-01-01
We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
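A rough stand-in for such a trend analysis (not the authors' design-based estimator) is an ordinary least-squares fit of counts on year with fixed site effects, which tolerates missing site-year visits:

```python
import numpy as np

# Hypothetical survey: 20 routes, 10 years, ~25% missed visits, true trend -0.8/yr.
rng = np.random.default_rng(5)
n_sites, n_years = 20, 10
site_level = rng.uniform(20, 60, n_sites)
rows = []
for s in range(n_sites):
    for yr in range(n_years):
        if rng.random() < 0.25:
            continue                                  # missed survey
        rows.append((s, yr, site_level[s] - 0.8 * yr + rng.normal(0, 4)))
s_idx, year, count = map(np.array, zip(*rows))

# Fixed site effects plus a common linear year trend, by ordinary least squares.
X = np.zeros((len(rows), n_sites + 1))
X[np.arange(len(rows)), s_idx] = 1.0                  # site dummies
X[:, -1] = year
beta, *_ = np.linalg.lstsq(X, count, rcond=None)
print(f"estimated trend: {beta[-1]:.2f} per year")    # close to -0.8
```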
Actual and estimated costs of disposable materials used during surgical procedures.
Toyabe, Shin-Ichi; Cao, Pengyu; Kurashima, Sachiko; Nakayama, Yukiko; Ishii, Yuko; Hosoyama, Noriko; Akazawa, Kouhei
2005-07-01
It is difficult to estimate precisely the costs of disposable materials used during surgical operations. To evaluate the actual costs of disposable materials, we calculated the actual costs of disposable materials used in 59 operations by taking account of costs of all disposable materials used for each operation. The costs of the disposable materials varied significantly from operation to operation (US$ 38-4230 per operation), and the median [25-percentile and 75-percentile] of the sum total of disposable material costs of a single operation was found to be US$ 686 [205 and 993]. Multiple regression analysis with a stepwise regression method showed that costs of disposable materials significantly correlated only with operation time (p<0.001). Based on the results, we propose a simple method for estimating costs of disposable materials by measuring operation time, and we found that the method gives reliable results. Since costs of disposable materials used during surgical operations are considerable, precise estimation of the costs is essential for hospital cost accounting. Our method should be useful for planning hospital administration strategies.
NASA Technical Reports Server (NTRS)
Moore, E. N.; Altick, P. L.
1972-01-01
The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.
NASA Astrophysics Data System (ADS)
Amako, Eri; Enjoji, Takaharu; Uchida, Satoshi; Tochikubo, Fumiyoshi
Constant monitoring and immediate control of fermentation processes are required for advanced quality preservation in the food industry. In the present work, simple estimation of the metabolic states of heat-injured Escherichia coli (E. coli) in a micro-cell was investigated using the dielectrophoretic impedance measurement (DEPIM) method. The temporal change in the conductance across the micro-gap (ΔG) was measured for various heat treatment temperatures. In addition, the dependence of enzyme activity, growth capacity and membrane condition of E. coli on heat treatment temperature was analyzed with conventional biological methods. Consequently, a quantitative correlation between ΔG and these biological properties was obtained. This result suggests that the DEPIM method can serve as an effective technique for monitoring complex changes in various biological states of microorganisms.
A Review of Spectral Methods for Variable Amplitude Fatigue Prediction and New Results
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.; Irvine, Tom
2013-01-01
A comprehensive review of the available methods for estimating fatigue damage from variable amplitude loading is presented. The dependence of fatigue damage accumulation on power spectral density (psd) is investigated for random processes relevant to real structures such as in offshore or aerospace applications. Beginning with the Rayleigh (or narrow band) approximation, attempts at improved approximations or corrections to the Rayleigh approximation are examined by comparison to rainflow analysis of time histories simulated from psd functions representative of simple theoretical and real world applications. Spectral methods investigated include corrections by Wirsching and Light, Ortiz and Chen, the Dirlik formula, and the Single-Moment method, among other more recently proposed methods. Good agreement is obtained between the spectral methods and the time-domain rainflow identification for most cases, with some limitations. Guidelines are given for using the several spectral methods to increase confidence in the damage estimate.
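Of the corrections reviewed, the Dirlik formula is probably the most widely used closed-form spectral estimate. The sketch below implements the standard Dirlik rainflow stress-range PDF from the spectral moments and integrates it against an S-N curve for a Miner damage rate; the psd shape, S-N constants, and grids are illustrative:

```python
import numpy as np

def spectral_moments(f, G):
    return [np.trapz(f ** n * G, f) for n in range(5)]

def dirlik_pdf(S, m):
    """Dirlik's empirical rainflow stress-range PDF from moments m0..m4."""
    m0, m1, m2, _, m4 = m
    Xm = (m1 / m0) * np.sqrt(m2 / m4)
    g = m2 / np.sqrt(m0 * m4)                     # irregularity factor
    D1 = 2.0 * (Xm - g ** 2) / (1.0 + g ** 2)
    R = (g - Xm - D1 ** 2) / (1.0 - g - D1 + D1 ** 2)
    D2 = (1.0 - g - D1 + D1 ** 2) / (1.0 - R)
    D3 = 1.0 - D1 - D2
    Q = 1.25 * (g - D3 - D2 * R) / D1
    Z = S / (2.0 * np.sqrt(m0))
    return ((D1 / Q) * np.exp(-Z / Q)
            + (D2 * Z / R ** 2) * np.exp(-Z ** 2 / (2 * R ** 2))
            + D3 * Z * np.exp(-Z ** 2 / 2)) / (2.0 * np.sqrt(m0))

# Band-limited white psd; S-N curve N = C * S**-b; damage by Miner's rule.
f = np.linspace(1e-3, 50.0, 5000)
G = np.where((f > 5) & (f < 20), 1.0, 0.0)        # psd, unit^2/Hz
m = spectral_moments(f, G)
peak_rate = np.sqrt(m[4] / m[2])                  # expected peaks per second
S = np.linspace(1e-3, 20 * np.sqrt(m[0]), 4000)
b_sn, C_sn = 3.0, 1e12
damage_rate = peak_rate * np.trapz(S ** b_sn * dirlik_pdf(S, m), S) / C_sn
print(f"damage per hour: {3600 * damage_rate:.3e}")
```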
Daniell method for power spectral density estimation in atomic force microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labuda, Aleksander
An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
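The two estimators differ only in where the averaging happens: Bartlett averages periodograms of non-overlapping rectangular segments, while Daniell smooths the single full-length periodogram with a moving average. A minimal comparison on a simulated SHO-like thermal peak (all parameters illustrative):

```python
import numpy as np
from scipy import signal

# White-noise-driven SHO, a stand-in for cantilever thermal motion.
fs, n = 100_000, 2 ** 18
rng = np.random.default_rng(6)
f0, Q = 10_000.0, 400.0
w0 = 2 * np.pi * f0
bz, az = signal.bilinear([w0 ** 2], [1.0, w0 / Q, w0 ** 2], fs)
x = signal.lfilter(bz, az, rng.standard_normal(n))

# Bartlett: average periodograms of non-overlapping rectangular segments.
f_b, P_b = signal.welch(x, fs, window="boxcar", nperseg=4096, noverlap=0)

# Daniell: moving-average smoothing of the single full-length periodogram.
f_d, P_raw = signal.periodogram(x, fs)
span = 31                                          # odd smoothing span
P_d = np.convolve(P_raw, np.ones(span) / span, mode="same")

print(f"Bartlett peak: {f_b[np.argmax(P_b)]:.0f} Hz")
print(f"Daniell  peak: {f_d[np.argmax(P_d)]:.0f} Hz")
```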
Long, Keith R.; Singer, Donald A.
2001-01-01
Determining the economic viability of mineral deposits of various sizes and grades is a critical task in all phases of mineral supply, from land-use management to mine development. This study evaluates two simple tools for estimating the economic viability of porphyry copper deposits mined by open-pit, heap-leach methods when only limited information on these deposits is available. These two methods are useful for evaluating deposits that either (1) are undiscovered deposits predicted by a mineral resource assessment, or (2) have been discovered but for which little data has been collected or released. The first tool uses ordinary least-squared regression analysis of cost and operating data from selected deposits to estimate a predictive relationship between mining rate, itself estimated from deposit size, and capital and operating costs. The second method uses cost models developed by the U.S. Bureau of Mines (Camm, 1991) updated using appropriate cost indices. We find that the cost model method works best for estimating capital costs and the empirical model works best for estimating operating costs for mines to be developed in the United States.
Duell, L. F. W.
1988-01-01
In Owens Valley, evapotranspiration (ET) is one of the largest components of outflow in the hydrologic budget and the least understood. ET estimates for December 1983 through October 1985 were made for seven representative locations selected on the basis of geohydrology and the characteristics of phreatophytic alkaline scrub and meadow communities. The Bowen-ratio, eddy-correlation, and Penman-combination methods were used to estimate ET. The results of the analyses appear satisfactory when compared to other estimates of ET. Results by the eddy-correlation method are for a direct and a residual latent-heat flux that is based on sensible-heat flux and energy budget measurements. Penman-combination potential ET estimates were determined to be unusable because they overestimated actual ET. Modification of the psychrometer constant of this method to account for differences between heat-diffusion resistance and vapor-diffusion resistance permitted actual ET to be estimated. The methods may be used for studies in similar semiarid and arid rangeland areas in the western United States. Meteorological data for three field sites are included in the appendix. Simple linear regression analysis indicates that ET estimates are correlated with air temperature, vapor-density deficit, and net radiation. Estimates of annual ET range from 300 mm at a low-density scrub site to 1,100 mm at a high-density meadow site. The monthly percentage of annual ET was determined to be similar for all sites studied.
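The Bowen-ratio step reduces to a one-line energy-balance partition. A minimal sketch with invented meteorological values (not the Owens Valley data):

```python
def bowen_ratio_le(rn, g, dT, de, gamma=0.066):
    """Latent-heat flux (W/m^2) from the Bowen-ratio energy balance.
    rn: net radiation and g: soil heat flux (W/m^2);
    dT: vertical air temperature difference (K);
    de: vertical vapour pressure difference (kPa);
    gamma: psychrometric 'constant' (kPa/K)."""
    beta = gamma * dT / de           # Bowen ratio = sensible / latent flux
    return (rn - g) / (1.0 + beta)

LAMBDA = 2.45e6                      # latent heat of vaporization, J/kg
le = bowen_ratio_le(rn=450.0, g=60.0, dT=1.2, de=0.8)
print(f"LE = {le:.0f} W/m^2 -> ET = {le / LAMBDA * 3600:.2f} mm/h")
```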
The estimation of galactic cosmic ray penetration and dose rates
NASA Technical Reports Server (NTRS)
Burrell, M. O.; Wright, J. J.
1972-01-01
This study is concerned with approximation methods that can be readily applied to estimate the absorbed dose rate from cosmic rays, in rads (tissue) or rems, inside simple geometries of aluminum. The present work is limited to finding the dose rate at the center of spherical shells or behind plane slabs. The dose rate is calculated at tissue-point detectors or for thin layers of tissue. This study considers cosmic-ray dose rates for both free-space and earth-orbiting missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
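Under multivariate normality, the maximum likelihood combination of correlated estimates of a common eigenvalue is the minimum-variance weighting w = C^-1 1 / (1' C^-1 1), with C the covariance matrix of the estimates. A small numerical sketch (the covariances below are invented, not the TRX benchmark values):

```python
import numpy as np

def combine_correlated(estimates, cov):
    """Minimum-variance unbiased combination of correlated estimates of a
    common quantity: weights w = cov^-1 1 / (1' cov^-1 1)."""
    ones = np.ones(len(estimates))
    w = np.linalg.solve(cov, ones)
    w /= ones @ w
    var = 1.0 / (ones @ np.linalg.solve(cov, ones))
    return w @ estimates, var, w

# Three correlated Monte Carlo eigenvalue estimates (illustrative numbers).
k = np.array([1.0052, 1.0047, 1.0061])
cov = 1e-7 * np.array([[4.0, 2.0, 1.0],
                       [2.0, 5.0, 1.5],
                       [1.0, 1.5, 6.0]])
k_hat, var, w = combine_correlated(k, cov)
print(f"combined k = {k_hat:.5f} +/- {np.sqrt(var):.5f}, weights = {np.round(w, 3)}")
```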
Huggins, Richard
2013-10-01
Precise estimation of the relative risk of motorcyclists being involved in a fatal accident compared to car drivers is difficult. Simple estimates based on the proportions of licenced drivers or riders that are killed in a fatal accident are biased as they do not take into account the exposure to risk. However, exposure is difficult to quantify. Here we adapt the ideas behind the well known induced exposure methods and use available summary data on speeding detections and fatalities for motorcycle riders and car drivers to estimate the relative risk of a fatality for motorcyclists compared to car drivers under mild assumptions. The method is applied to data on motorcycle riders and car drivers in Victoria, Australia in 2010 and a small simulation study is conducted. Copyright © 2013 Elsevier Ltd. All rights reserved.
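Under the stated assumptions, the induced-exposure estimate is a ratio of fatality counts each normalized by speeding detections. A minimal sketch with made-up counts (not the Victorian 2010 data):

```python
def induced_exposure_rr(fatal_m, speed_m, fatal_c, speed_c):
    """Relative risk of a fatality for motorcyclists vs car drivers, using
    speeding detections as the induced measure of exposure."""
    return (fatal_m / speed_m) / (fatal_c / speed_c)

# Illustrative counts: fatalities and speeding detections for riders (m)
# and car drivers (c).
print(f"RR = {induced_exposure_rr(49, 40_000, 236, 1_200_000):.1f}")
```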
A multi-purpose electromagnetic actuator for magnetic resonance elastography.
Feng, Yuan; Zhu, Mo; Qiu, Suhao; Shen, Ping; Ma, Shengyuan; Zhao, Xuefeng; Hu, Chun-Hong; Guo, Liang
2018-04-19
An electromagnetic actuator was designed for magnetic resonance elastography (MRE). The actuator is unique in that it is simple, portable, and capable of brain, abdomen, and phantom imaging. A custom-built control unit was used for controlling the vibration frequency and synchronizing the trigger signals. An actuation unit was built and mounted on specifically designed clamps and holders for different imaging applications. MRE experiments on gel phantoms, brain, and liver showed that the actuator could produce stable and consistent mechanical waves. Shear moduli estimated using the local frequency estimation method demonstrated that the measurement results were in line with those from MRE studies using different actuation systems. The relatively easy setup procedure and simple design indicate that the actuator system has the potential to be applied in many different clinical studies. Copyright © 2018 Elsevier Inc. All rights reserved.
Methods for estimating the amount of vernal pool habitat in the northeastern United States
Van Meter, R.; Bailey, L.L.; Grant, E.H.C.
2008-01-01
The loss of small, seasonal wetlands is a major concern for a variety of state, local, and federal organizations in the northeastern U.S. Identifying and estimating the number of vernal pools within a given region is critical to developing long-term conservation and management strategies for these unique habitats and their faunal communities. We use three probabilistic sampling methods (simple random sampling, adaptive cluster sampling, and the dual frame method) to estimate the number of vernal pools on protected, forested lands. Overall, these methods yielded similar values of vernal pool abundance for each study area, and suggest that photographic interpretation alone may grossly underestimate the number of vernal pools in forested habitats. We compare the relative efficiency of each method and discuss ways of improving precision. Acknowledging that the objectives of a study or monitoring program ultimately determine which sampling designs are most appropriate, we recommend that some type of probabilistic sampling method be applied. We view the dual-frame method as an especially useful way of combining incomplete remote sensing methods, such as aerial photograph interpretation, with a probabilistic sample of the entire area of interest to provide more robust estimates of the number of vernal pools and a more representative sample of existing vernal pool habitats.
Population Size Estimation of Men Who Have Sex with Men through the Network Scale-Up Method in Japan
Ezoe, Satoshi; Morooka, Takeo; Noda, Tatsuya; Sabin, Miriam Lewis; Koike, Soichi
2012-01-01
Background Men who have sex with men (MSM) are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. Methods and Findings An internet survey was conducted among 1,500 internet users who registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel) they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for only male acquaintances. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. Conclusions The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes through combining an internet survey with the network scale-up method appeared to be an effective method from the perspectives of rapidity, simplicity, and low cost as compared with more-conventional methods. PMID:22563366
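The scale-up arithmetic is short: calibrate each respondent's personal network size against groups of known size, then scale the reported hidden-group acquaintances to the whole population. The sketch below invents reference-group sizes, response rates, and a visibility (transmission) factor of roughly the order implied by the abstract's unadjusted versus adjusted prevalences:

```python
import numpy as np

def network_scale_up(known_counts, known_sizes, hidden_counts, population,
                     transmission_factor=1.0):
    """Basic network scale-up estimate of a hidden population's size.
    known_counts: (respondents x groups) acquaintances in groups of known size;
    known_sizes: the true sizes of those groups;
    hidden_counts: reported acquaintances in the hidden group;
    transmission_factor: fraction of hidden members visible to acquaintances."""
    # Pooled personal network size over respondents and reference groups.
    c = population * known_counts.sum() / (len(known_counts) * known_sizes.sum())
    hidden = population * hidden_counts.mean() / c
    return c, hidden / transmission_factor

rng = np.random.default_rng(7)
pop = 120_000_000
sizes = np.array([160_000, 250_000, 230_000])        # reference groups
counts = rng.poisson(300 * sizes / pop, size=(1500, 3))
msm = rng.poisson(0.05, size=1500)                   # reported MSM acquaintances
c, est = network_scale_up(counts, sizes, msm, pop, transmission_factor=0.014)
print(f"network size = {c:.0f}; adjusted hidden population = {est:,.0f}")
```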
An overview of longitudinal data analysis methods for neurological research.
Locascio, Joseph J; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
A simple filter circuit for denoising biomechanical impact signals.
Subramaniam, Suba R; Georgakis, Apostolos
2009-01-01
We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.
Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F
2014-08-01
Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
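A plausible reading of the ATS idea is a low-order polynomial in Arrhenius-scaled time fitted by nonlinear least squares, with the activation energy as the single nonlinear covariate. The exact parameterization below (quadratic attribute change, invented rates and pull schedule) is an assumption for illustration, not the authors' published form:

```python
import numpy as np
from scipy.optimize import curve_fit

R_GAS, T_REF = 8.314, 298.15

def ats_model(X, ea, b1, b2):
    """Quadratic change in a product attribute versus Arrhenius-scaled time:
    y = b1*u + b2*u**2 with u = t * exp(-Ea/R * (1/T - 1/Tref))."""
    t, T = X
    u = t * np.exp(-ea / R_GAS * (1.0 / T - 1.0 / T_REF))
    return b1 * u + b2 * u ** 2

# Simulated accelerated-stability pulls at 5, 25, and 40 C over 12 months.
rng = np.random.default_rng(8)
t = np.tile([0.0, 1, 3, 6, 9, 12], 3)
T = np.repeat([278.15, 298.15, 313.15], 6)
y = ats_model((t, T), 8.0e4, 0.20, 0.002) + rng.normal(0, 0.05, t.size)

p, _ = curve_fit(ats_model, (t, T), y, p0=[5e4, 0.1, 0.001])
pred = ats_model((np.array([24.0]), np.array([278.15])), *p)[0]
print(f"Ea = {p[0] / 1000:.0f} kJ/mol; predicted 24-month 5 C change: {pred:.2f}")
```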
15 CFR 921.13 - Management plan and environmental impact statement development.
Code of Federal Regulations, 2010 CFR
2010-01-01
... simple property interest (e.g., conservation easement), fee simple property acquisition, or a combination... simple options) to establish adequate long-term state control; an estimate of the fair market value of any property interest—which is proposed for acquisition; a schedule estimating the time required to...
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performances of a classical first-order conditional estimation with interaction (FOCE-I) method and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods were compared for estimating the population parameters and their distributions from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally at the lower IIV levels, while the FOCE-I method performed better than the other EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
2007-01-01
Keywords: differentiability, fluid-solid interaction, error estimation, re-discretization, moving meshes. In the weighted-residual formulation, the weight function v is an independent function, with v = 0 on the portion of the boundary where the essential condition is imposed; in the Galerkin method (GM), Wh is an approximation from the same space. This can be demonstrated by considering a simple 1-D case in which the discretization is uniform with characteristic length.
A nondestructive method to estimate the chlorophyll content of Arabidopsis seedlings
Liang, Ying; Urano, Daisuke; Liao, Kang-Ling; ...
2017-04-14
Chlorophyll content decreases in plants under stress conditions, and it is therefore commonly used as an indicator of plant health. Arabidopsis thaliana offers a convenient and fast way to test physiological phenotypes of mutations and treatments. However, chlorophyll measurements with conventional solvent extraction are not applicable to Arabidopsis leaves due to their small size, especially when grown on culture dishes. We provide a nondestructive method for chlorophyll measurement whereby the red, green and blue (RGB) values of a color leaf image are used to estimate the chlorophyll content of Arabidopsis leaves. The method accommodates different profiles of digital cameras by incorporating the ColorChecker chart to make the digital negative profiles, to adjust the white balance, and to calibrate the exposure rate differences caused by the environment, so that the method is applicable in any environment. We chose an exponential function model to estimate chlorophyll content from the RGB values, and fitted the model parameters with physical measurements of chlorophyll contents. As further proof of utility, this method was used to estimate the chlorophyll content of G protein mutants grown on different sugar to nitrogen ratios. Our method is a simple, fast, inexpensive, and nondestructive estimation of the chlorophyll content of Arabidopsis seedlings. This method led to the discovery that G proteins are important in sensing the C/N balance to control chlorophyll content in Arabidopsis.
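The calibration step is an exponential fit from a colour index to chlorophyll. The sketch below assumes a hypothetical index and invented constants; it is not the authors' calibrated model:

```python
import numpy as np
from scipy.optimize import curve_fit

def chl_model(index, a, b):
    """Exponential map from a colour index to chlorophyll content."""
    return a * np.exp(b * index)

# Hypothetical calibration set: a colour index from white-balanced leaf images
# (e.g. G / (R + G + B)) paired with solvent-extracted chlorophyll.
rng = np.random.default_rng(9)
index = rng.uniform(0.30, 0.55, 40)
chl_true = 0.5 * np.exp(6.0 * index)                 # nominal truth, ug/cm^2
chl_meas = chl_true * (1 + rng.normal(0, 0.05, 40))  # 5% assay noise

(a, b), _ = curve_fit(chl_model, index, chl_meas, p0=[1.0, 5.0])
print(f"fit: chl = {a:.2f} * exp({b:.2f} * index)")
print(f"predicted chl at index 0.45: {chl_model(0.45, a, b):.1f} ug/cm^2")
```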
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who introduced the Gröbner basis. In the method, objective functions combining the symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
Estimation method for serial dilution experiments.
Ben-David, Avishai; Davidson, Charles E
2014-12-01
Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without a need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area that both contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate size to colony size ratios between 6.25 and 200. Published by Elsevier B.V.
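A heavily simplified version of the plate-selection idea penalizes plates that are either too sparse (Poisson counting noise) or too crowded relative to the plate-to-colony area ratio. The scoring rule below is an invented stand-in for the paper's criterion, kept only to show the mechanics:

```python
import numpy as np

def best_plate(counts, dilution_factor, plate_to_colony_ratio):
    """Pick the dilution plate for estimating the stock concentration.
    counts: observed colonies on plates at dilutions d^1, d^2, ...
    Plates are scored by log-distance from a rough crowding limit, taken
    here (arbitrarily) as 10% of the plate-to-colony area ratio."""
    target = 0.1 * plate_to_colony_ratio
    counts = np.asarray(counts, dtype=float)
    score = np.abs(np.log(np.maximum(counts, 1.0) / target))
    k = int(np.argmin(score))
    return k + 1, counts[k] * dilution_factor ** (k + 1)

counts = [2800, 310, 28, 3]                    # colonies on successive plates
level, est = best_plate(counts, dilution_factor=10, plate_to_colony_ratio=2000)
print(f"use dilution 10^-{level}; stock = {est:.2e} CFU/mL")
```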
Optical Estimation of Depth and Current in an Ebb Tidal Delta Environment
NASA Astrophysics Data System (ADS)
Holman, R. A.; Stanley, J.
2012-12-01
A key limitation to our ability to make nearshore environmental predictions is the difficulty of obtaining up-to-date bathymetry measurements at a reasonable cost and frequency. Due to the high cost and complex logistics of in-situ methods, research into remote sensing approaches has been steady and has finally yielded fairly robust methods like the cBathy algorithm for optical Argus data that show good performance on simple barred beach profiles and near immunity to noise and signal problems. In May, 2012, data were collected in a more complex ebb tidal delta environment during the RIVET field experiment at New River Inlet, NC. The presence of strong reversing tidal currents led to significant errors in cBathy depths that were phase-locked to the tide. In this paper we will test methods for the robust estimation of both depths and vector currents in a tidal delta domain. In contrast to previous Fourier methods, wavenumber estimation in cBathy can be done on small enough scales to resolve interesting nearshore features.
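The physical core of such wave-based depth estimators is inverting the linear dispersion relation omega^2 = g*k*tanh(k*h) for depth h, once a frequency-wavenumber pair has been observed in the imagery. The sketch below shows only that inversion step, not the full cBathy estimator:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k):
    """Invert the linear dispersion relation omega^2 = g*k*tanh(k*h) for h."""
    ratio = omega ** 2 / (G * k)
    if ratio >= 1.0:
        return np.inf            # deep-water limit: depth not resolvable
    return np.arctanh(ratio) / k

# A 10 s swell observed in video with a 70 m wavelength:
omega = 2 * np.pi / 10.0
k = 2 * np.pi / 70.0
print(f"h = {depth_from_dispersion(omega, k):.1f} m")   # about 5 m
```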
The Effect of Finite Thickness Extent on Estimating Depth to Basement from Aeromagnetic Data
NASA Astrophysics Data System (ADS)
Blakely, R. J.; Salem, A.; Green, C. M.; Fairhead, D.; Ravat, D.
2014-12-01
Depth to basement estimation methods using various components of the spectral content of magnetic anomalies are in common use by geophysicists. Examples of these are the Tilt-Depth and SPI methods. These methods use simple models having the base of the magnetic body at infinity. Recent publications have shown that this 'infinite depth' assumption causes underestimation of the depth to the top of sources, especially in areas where the bottom of the magnetic layer is shallow, as would occur in high heat-flow regions. This error has been demonstrated both in model studies and using real data with seismic or well control. To overcome the limitation of infinite depth, this contribution presents the mathematics for a finite depth contact body in the Tilt-Depth and SPI methods and applies it to the central Red Sea, where the Curie isotherm and Moho are shallow. The difference in the depth estimation between the infinite and finite contacts in such a case is significant and can exceed 200%.
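The classical infinite-depth Tilt-Depth relation for a vertical contact is tilt(x) = arctan(x / z_t), so the depth to top z_t equals the horizontal distance between the 0 degree and 45 degree tilt contours; this is the relation the finite-depth correction modifies. A minimal numerical check of the infinite-depth case:

```python
import numpy as np

# Tilt-Depth for a vertical contact with infinite depth extent:
# tilt(x) = arctan(x / z_top), so z_top is the 0-to-45-degree contour spacing.
z_top = 1200.0                      # m, assumed depth to top of the contact
x = np.linspace(-5000.0, 5000.0, 10001)
tilt = np.degrees(np.arctan(x / z_top))
x0 = x[np.argmin(np.abs(tilt))]          # location of the 0 degree contour
x45 = x[np.argmin(np.abs(tilt - 45.0))]  # location of the 45 degree contour
print(f"estimated depth = {x45 - x0:.0f} m")   # recovers the assumed 1200 m
```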
Terakado, Shingo; Glass, Thomas R; Sasaki, Kazuhiro; Ohmura, Naoya
2014-01-01
A simple new model for estimating the screening performance (false positive and false negative rates) of a given test for a specific sample population is presented. The model is shown to give good results on a test population, and is used to estimate the performance on a sampled population. Using the model developed in conjunction with regulatory requirements and the relative costs of the confirmatory and screening tests allows evaluation of the screening test's utility in terms of cost savings. Testers can use the methods developed to estimate the utility of a screening program using available screening tests with their own sample populations.
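The utility comparison can be framed as expected testing cost per batch: screen everything and confirm only the screening positives, versus confirming everything directly. A sketch with invented prevalence, test characteristics, and prices:

```python
def screening_cost(prevalence, sensitivity, specificity,
                   cost_screen, cost_confirm, n=1000):
    """Expected cost of screen-then-confirm versus confirm-all for n samples.
    Every screening positive (true or false) receives the confirmatory test."""
    pos_rate = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    with_screen = n * cost_screen + n * pos_rate * cost_confirm
    return with_screen, n * cost_confirm

w, c = screening_cost(prevalence=0.02, sensitivity=0.95, specificity=0.90,
                      cost_screen=5.0, cost_confirm=80.0)
print(f"screen-then-confirm: ${w:,.0f} vs confirm-all: ${c:,.0f} per 1000 samples")
```

Note that the false-negative rate (prevalence * (1 - sensitivity)) is the regulatory side of the trade-off and caps how cheap a usable screen can be.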
Danielson, Michelle E.; Beck, Thomas J.; Karlamangla, Arun S.; Greendale, Gail A.; Atkinson, Elizabeth J.; Lian, Yinjuan; Khaled, Alia S.; Keaveny, Tony M.; Kopperdahl, David; Ruppert, Kristine; Greenspan, Susan; Vuga, Marike; Cauley, Jane A.
2013-01-01
Purpose Simple 2-dimensional (2D) analyses of bone strength can be done with dual energy x-ray absorptiometry (DXA) data and applied to large data sets. We compared 2D analyses to 3-dimensional (3D) finite element analyses (FEA) based on quantitative computed tomography (QCT) data. Methods 213 women participating in the Study of Women’s Health across the Nation (SWAN) received hip DXA and QCT scans. DXA BMD and femoral neck diameter and axis length were used to estimate geometry for composite bending (BSI) and compressive strength (CSI) indices. These and comparable indices computed by Hip Structure Analysis (HSA) on the same DXA data were compared to indices using QCT geometry. Simple 2D engineering simulations of a fall impacting on the greater trochanter were generated using HSA and QCT femoral neck geometry; these estimates were benchmarked to a 3D FEA of fall impact. Results DXA-derived CSI and BSI computed from BMD and by HSA correlated well with each other (R= 0.92 and 0.70) and with QCT-derived indices (R= 0.83–0.85 and 0.65–0.72). The 2D strength estimate using HSA geometry correlated well with that from QCT (R=0.76) and with the 3D FEA estimate (R=0.56). Conclusions Femoral neck geometry computed by HSA from DXA data corresponds well enough to that from QCT for an analysis of load stress in the larger SWAN data set. Geometry derived from BMD data performed nearly as well. Proximal femur breaking strength estimated from 2D DXA data is not as well correlated with that derived by a 3D FEA using QCT data. PMID:22810918
The added mass forces in insect flapping wings.
Liu, Longgui; Sun, Mao
2018-01-21
The added mass forces of three-dimensional (3D) flapping wings of some representative insects, and the accuracy of the often used simple two-dimensional (2D) method, are studied. The added mass force of a flapping wing is calculated by both 3D and 2D methods, and the total aerodynamic force of the wing is calculated by the CFD method. Our findings are as follows. The added mass force has a significant contribution to the total aerodynamic force of the flapping wings during and near the stroke reversals, and the shorter the stroke amplitude is, the larger the added mass force becomes. Thus the added mass force cannot be neglected when using simple models to estimate the aerodynamic force, especially for insects with relatively small stroke amplitudes. The accuracy of the often used simple 2D method is reasonably good: when the aspect ratio of the wing is greater than about 3.3, the error in the added mass force calculation due to the 2D assumption is less than 9%; even when the aspect ratio is 2.8 (approximately the smallest for an insect), the error is no more than 13%. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modified Matching Ronchi Test to Visualize Lens Aberrations
ERIC Educational Resources Information Center
Hassani, Kh; Ziafi, H. Hooshmand
2011-01-01
We introduce a modification to the matching Ronchi test to visualize lens aberrations with simple and inexpensive equipment available in educational optics labs. This method can help instructors and students to observe and estimate lens aberrations in real time. It is also a semi-quantitative tool for primary tests in research labs. In this work…
Estimating the Wavelength of Sodium Emission in Flame--The Easy Way
ERIC Educational Resources Information Center
Wahab, M. Farooq
2009-01-01
Simple "box spectroscopes" are not new. Different methods of building them at home using cheap diffraction gratings have been described. However, their use has often been confined to looking at street lights, discharge tubes, and enjoying the beautiful spectra of various lamps. Construction of the box spectroscope usually involves a narrow slit…
Evaluation of Reliability Coefficients for Two-Level Models via Latent Variable Analysis
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
2010-01-01
A latent variable analysis procedure for evaluation of reliability coefficients for 2-level models is outlined. The method provides point and interval estimates of group means' reliability, overall reliability of means, and conditional reliability. In addition, the approach can be used to test simple hypotheses about these parameters. The…
USDA-ARS?s Scientific Manuscript database
Determining the age of malaria vectors is essential for evaluating the impact of interventions that reduce the survival of wild mosquito populations and for estimating changes in vectorial capacity. Near infra-red spectroscopy (NIRS) is a simple and non-destructive method that has been used to deter...
An Integrated Environmental Assessment of the US Mid-Atlantic Region
James D. Wickham; K.B. Jones; Kurt H. Riitters; R.V. O' Neill; R.D. Tankersley; E.R. Smith; A.C. Neale; D.J. Chaloud
1999-01-01
Many of today's environmental problems are regional in scope, and their effects overlap and interact. We developed a simple method to provide an integrated assessment of environmental conditions and estimate cumulative impacts across a large region, by combining data on land-cover, population, roads, streams, air pollution, and topography. The integrated assessment...
NASA Astrophysics Data System (ADS)
Ishizaki, N. N.; Dairaku, K.; Ueno, G.
2016-12-01
We have developed a statistical downscaling method for estimating probabilistic climate projections using multiple CMIP5 general circulation models (GCMs). A regression model was established so that the combination of weights of GCMs reflects the characteristics of the variation of observations at each grid point. Cross-validations were conducted to select GCMs and to evaluate the regression model while avoiding multicollinearity. Using a spatially high-resolution observation system, we produced statistically downscaled probabilistic climate projections with 20-km horizontal grid spacing. Root mean squared errors for monthly mean surface air temperature and precipitation estimated by the regression method were the smallest compared with the results derived from a simple ensemble mean of GCMs and a cumulative distribution function based bias correction method. Projected changes in mean temperature and precipitation were basically similar to those of the simple ensemble mean of GCMs. Mean precipitation was generally projected to increase, associated with increased temperature and consequently increased moisture content in the air. Weakening of the winter monsoon may contribute to precipitation decreases in some areas. A temperature increase in excess of 4 K is expected in most areas of Japan by the end of the 21st century under the RCP8.5 scenario. The estimated probability of monthly precipitation exceeding 300 mm would increase on the Pacific side during the summer and the Japan Sea side during the winter season. This probabilistic climate projection based on the statistical method can be expected to provide useful information for impact studies and risk assessments.
A rapid and highly selective method for the estimation of pyro-, tri- and orthophosphates.
Kamat, D R; Savant, V V; Sathyanarayana, D N
1995-03-01
A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of the bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1 and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with the tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)2HP2O7]·4H2O and [Co(en)2H2P3O10]·2H2O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving 99.5% gravimetric yield in both cases. There are no interferences from orthophosphate, trimetaphosphate and pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining pyrophosphate and triphosphate contents in various matrices. In the case of orthophosphate, the proposed method differs from the available methods such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate, followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bereau, Tristan, E-mail: bereau@mpip-mainz.mpg.de; Lilienfeld, O. Anatole von
We estimate polarizabilities of atoms in molecules without electron density, using a Voronoi tesselation approach instead of conventional density partitioning schemes. The resulting atomic dispersion coefficients are calculated, as well as many-body dispersion effects on intermolecular potential energies. We also estimate contributions from multipole electrostatics and compare them to dispersion. We assess the performance of the resulting intermolecular interaction model from dispersion and electrostatics for more than 1300 neutral and charged, small organic molecular dimers. Applications to water clusters, the benzene crystal, the anti-cancer drug ellipticine intercalated between two Watson-Crick DNA base pairs, as well as six macro-molecular host-guest complexes highlight the potential of this method and help to identify points of future improvement. The mean absolute error made by the combination of static electrostatics with many-body dispersion reduces at larger distances, while it plateaus for two-body dispersion, in conflict with the common assumption that the simple 1/R^6 correction will yield proper dissociative tails. Overall, the method achieves an accuracy well within conventional molecular force fields while exhibiting a simple parametrization protocol.
Comparison of heaving buoy and oscillating flap wave energy converters
NASA Astrophysics Data System (ADS)
Abu Bakar, Mohd Aftar; Green, David A.; Metcalfe, Andrew V.; Najafian, G.
2013-04-01
Waves offer an attractive source of renewable energy, with relatively low environmental impact, for communities reasonably close to the sea. Two types of simple wave energy converters (WEC), the heaving buoy WEC and the oscillating flap WEC, are studied. Both WECs are considered simple energy converters because they can be modelled, to a first approximation, as single degree of freedom linear dynamic systems. In this study, we estimate the response of both WECs to typical wave inputs (wave height for the buoy and the corresponding wave surge for the flap) using spectral methods. A nonlinear model of the oscillating flap WEC that includes the drag force, modelled by the Morison equation, is also considered. The response to a surge input is estimated by discrete time simulation (DTS), using central difference approximations to the derivatives. This is compared with the response of the linear model obtained by DTS and also validated using the spectral method. Bendat's nonlinear system identification (BNLSI) technique was used to analyze the nonlinear dynamic system, since spectral analysis is only suitable for linear dynamic systems. The effects of including the nonlinear term are quantified.
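The discrete time simulation step can be illustrated with a minimal sketch: a single degree of freedom oscillator m x'' + c x' + k x = F(t) integrated with central-difference approximations to the derivatives. The parameter values below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def sdof_central_difference(m, c, k, force, dt, x0=0.0, v0=0.0):
    """Discrete time simulation of m*x'' + c*x' + k*x = F(t)
    using central-difference approximations to the derivatives."""
    n = len(force)
    x = np.zeros(n)
    x[0] = x0
    a0 = (force[0] - c * v0 - k * x0) / m        # initial acceleration
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * a0     # fictitious point x[-1]
    lhs = m / dt**2 + c / (2 * dt)
    for i in range(n - 1):
        xm1 = x_prev if i == 0 else x[i - 1]
        rhs = force[i] - (k - 2 * m / dt**2) * x[i] - (m / dt**2 - c / (2 * dt)) * xm1
        x[i + 1] = rhs / lhs
    return x

# Regular wave input driving an illustrative single degree of freedom WEC
dt = 0.01
t = np.arange(0, 60, dt)
wave_force = 1e4 * np.sin(2 * np.pi * t / 8.0)   # 8 s wave period (assumed)
resp = sdof_central_difference(m=1e5, c=2e4, k=4e5, force=wave_force, dt=dt)
print("max response amplitude:", resp.max())
```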
Entanglement-Assisted Weak Value Amplification
NASA Astrophysics Data System (ADS)
Pang, Shengshi; Dressel, Justin; Brun, Todd A.
2014-07-01
Large weak values have been used to amplify the sensitivity of a linear response signal for detecting changes in a small parameter, which has also enabled a simple method for precise parameter estimation. However, producing a large weak value requires a low postselection probability for an ancilla degree of freedom, which limits the utility of the technique. We propose an improvement to this method that uses entanglement to increase the efficiency. We show that by entangling and postselecting n ancillas, the postselection probability can be increased by a factor of n while keeping the weak value fixed (compared to n uncorrelated attempts with one ancilla), which is the optimal scaling with n that is expected from quantum metrology. Furthermore, we show the surprising result that the quantum Fisher information about the detected parameter can be almost entirely preserved in the postselected state, which allows the sensitive estimation to approximately saturate the relevant quantum Cramér-Rao bound. To illustrate this protocol we provide simple quantum circuits that can be implemented using current experimental realizations of three entangled qubits.
Kim, Yoonsang; Emery, Sherry
2013-01-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, those studies focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415
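To make the Gauss-Hermite idea concrete, the sketch below evaluates the marginal likelihood of one cluster in a random-intercept logistic model by quadrature. This is a deliberately minimal one-random-effect illustration on synthetic data (the article's focus is the harder case of several correlated random effects, where this integral becomes multi-dimensional):

```python
import numpy as np

def cluster_loglik(y, eta, sigma, n_quad=15):
    """Marginal log-likelihood of one cluster in a random-intercept
    logistic model, integrated by Gauss-Hermite quadrature.

    y:     (n_obs,) binary responses in the cluster
    eta:   (n_obs,) fixed-effect linear predictor x_ij @ beta
    sigma: standard deviation of the random intercept
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    total = 0.0
    for t, w in zip(nodes, weights):
        b = np.sqrt(2.0) * sigma * t             # change of variables for N(0, sigma^2)
        p = 1.0 / (1.0 + np.exp(-(eta + b)))     # logistic response probability
        total += w * np.prod(np.where(y == 1, p, 1.0 - p))
    return np.log(total / np.sqrt(np.pi))

# Toy cluster: 8 Bernoulli observations sharing one random intercept
rng = np.random.default_rng(1)
eta = rng.normal(0.0, 1.0, 8)
y = (rng.random(8) < 1 / (1 + np.exp(-eta))).astype(int)
print("marginal log-likelihood:", cluster_loglik(y, eta, sigma=0.8))
```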
Estimating population diversity with CatchAll
Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.
2012-01-01
Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246
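As a flavor of the coverage-based non-parametric estimators that operate on such frequency count data, the sketch below implements the classic Chao1 lower bound. Note this is one simple member of the family of analyses CatchAll automates, not its full finite-mixture and model-selection pipeline:

```python
def chao1(frequency_counts):
    """Chao1 lower-bound richness estimate from 'frequency count' data.

    frequency_counts: dict mapping frequency j -> number of species
    observed exactly j times, e.g. {1: 120, 2: 45, ...}
    """
    s_obs = sum(frequency_counts.values())       # species actually observed
    f1 = frequency_counts.get(1, 0)              # singletons
    f2 = frequency_counts.get(2, 0)              # doubletons
    # Bias-corrected form, defined even when no doubletons were observed
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Example frequency counts: 120 singletons, 45 doubletons, and so on
print(chao1({1: 120, 2: 45, 3: 20, 4: 10, 5: 6}))   # ~356 species estimated
```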
Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
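A minimal simulation of the procedure, for a superiority setting rather than the paper's non-inferiority focus, might look as follows; the pilot size, effect margin delta, and operating characteristics are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def blinded_reestimate(pooled, delta, alpha=0.025, power=0.9):
    """Re-estimate the per-group sample size at an interim analysis from
    the blinded (one-sample) variance of the pooled data."""
    s2 = np.var(pooled, ddof=1)                  # blinded variance estimate
    z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
    return int(np.ceil(2 * s2 * (z_a + z_b) ** 2 / delta ** 2))

def one_trial(mu_diff, sd, n_pilot, delta, rng):
    """Internal pilot, blinded re-estimation, then a final two-sample t-test."""
    a = rng.normal(0.0, sd, n_pilot)
    b = rng.normal(mu_diff, sd, n_pilot)
    n_final = max(n_pilot, blinded_reestimate(np.concatenate([a, b]), delta))
    a = np.concatenate([a, rng.normal(0.0, sd, n_final - n_pilot)])
    b = np.concatenate([b, rng.normal(mu_diff, sd, n_final - n_pilot)])
    return stats.ttest_ind(b, a).pvalue

rng = np.random.default_rng(7)
pvals = [one_trial(mu_diff=0.0, sd=1.0, n_pilot=20, delta=0.5, rng=rng)
         for _ in range(2000)]
print("empirical type I error:", np.mean(np.array(pvals) < 0.05))
```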
Estimating linear effects in ANOVA designs: the easy way.
Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana
2012-09-01
Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
Tracking and Control of a Neutral Particle Beam Using Multiple Model Adaptive Meer Filter.
1987-12-01
method incorporated by Zicker in 1983 [32]. Once the beam estimation problem had been solved, the problem of beam control was examined. Zicker conducted a...filter. Then, the methods applied by Meer, and later Zicker, to reduce the computational load of a simple Meer filter, will be presented. 2.5.1 Basic...number of possible methods to prune the hypothesis tree and chose the "Best Half Method" as the most viable [21]. Zicker [32] applied the work of Weiss
A Method for Qualitative Mapping of Thick Oil Spills Using Imaging Spectroscopy
Clark, Roger N.; Swayze, Gregg A.; Leifer, Ira; Livo, K. Eric; Lundeen, Sarah; Eastwood, Michael; Green, Robert O.; Kokaly, Raymond F.; Hoefen, Todd; Sarture, Charles; McCubbin, Ian; Roberts, Dar; Steele, Denis; Ryan, Thomas; Dominguez, Roseanne; Pearson, Neil; ,
2010-01-01
A method is described to create qualitative images of thick oil in oil spills on water using near-infrared imaging spectroscopy data. The method uses simple 'three-point-band depths' computed for each pixel in an imaging spectrometer image cube using the organic absorption features due to chemical bonds in aliphatic hydrocarbons at 1.2, 1.7, and 2.3 microns. The method is not quantitative because sub-pixel mixing and layering effects are not considered, which are necessary to make a quantitative volume estimate of oil.
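A three-point band depth of this kind can be written in a few lines: interpolate a straight-line continuum between two shoulder wavelengths and measure the fractional depth at the band center. The sketch below uses a synthetic spectrum, and the shoulder positions around the 1.7 micron aliphatic feature are illustrative assumptions:

```python
import numpy as np

def three_point_band_depth(wl, refl, left, center, right):
    """Continuum-removed band depth from three wavelengths (microns).

    A straight-line continuum is interpolated between the two shoulder
    reflectances; depth = 1 - R_center / R_continuum."""
    r = np.interp([left, center, right], wl, refl)
    frac = (center - left) / (right - left)
    continuum = r[0] + frac * (r[2] - r[0])
    return 1.0 - r[1] / continuum

# Synthetic pixel spectrum with a C-H absorption near 1.7 microns
wl = np.linspace(1.0, 2.5, 200)
refl = 0.4 - 0.002 * wl - 0.08 * np.exp(-0.5 * ((wl - 1.73) / 0.03) ** 2)
print(three_point_band_depth(wl, refl, 1.65, 1.73, 1.80))
```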
Sayers, A; Heron, J; Smith, Adac; Macdonald-Wallis, C; Gilthorpe, M S; Steele, F; Tilling, K
2017-02-01
There is a growing debate with regard to the appropriate methods of analysis of growth trajectories and their association with prospective dependent outcomes. Using the example of childhood growth and adult BP, we conducted an extensive simulation study to explore four two-stage and two joint modelling methods, and compared their bias and coverage in estimation of the (unconditional) association between birth length and later BP, and the association between growth rate and later BP (conditional on birth length). We show that the two-stage method of using multilevel models to estimate growth parameters and relating these to outcome gives unbiased estimates of the conditional associations between growth and outcome. Using simulations, we demonstrate that the simple methods resulted in bias in the presence of measurement error, as did the two-stage multilevel method when looking at the total (unconditional) association of birth length with outcome. The two joint modelling methods gave unbiased results, but using the re-inflated residuals led to undercoverage of the confidence intervals. We conclude that either joint modelling or the simpler two-stage multilevel approach can be used to estimate conditional associations between growth and later outcomes, but that only joint modelling is unbiased with nominal coverage for unconditional associations.
An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.
1994-01-01
Nonideal behavior has traditionally been modeled by defining efficiency (a comparison between actual and isentropic processes) and its subsequent specification by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods by applying classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses based on steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by the Reynolds analogy). A preliminary verification of REMEL has been compared with full Navier-Stokes (FNS) and CFD boundary layer computations for several high-speed inlet and forebody designs. The current methods compare quite well with results from more complex methods, and solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable area flow, and a newly developed, combined variable area duct with friction flow solution. These solution comparisons may offer an alternative to transitional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.
Nam, Kanghyun
2015-11-11
This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least squares algorithm is adopted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, the tire cornering stiffness, which is an important tire parameter dominating the vehicle's cornering responses, is estimated. For practical implementation, a cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, the proposed estimation algorithms were evaluated using experimental test data.
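The recursive least squares building block can be sketched compactly. The snippet below identifies a single cornering stiffness from noisy force and slip-angle pairs under a linear tire model; it omits the load-transfer terms and bicycle-model coupling of the actual algorithm, and all numbers are hypothetical:

```python
import numpy as np

class RecursiveLeastSquares:
    """Exponentially weighted RLS for a linear-in-parameters model y = phi @ theta."""
    def __init__(self, n_params, forgetting=0.99):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3          # large initial covariance
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Toy use: identify cornering stiffness C in Fy = C * alpha from noisy data
rng = np.random.default_rng(3)
rls = RecursiveLeastSquares(n_params=1)
C_true = 8.0e4                                   # N/rad, hypothetical stiffness
for _ in range(500):
    alpha = rng.uniform(-0.05, 0.05)             # slip angle (rad)
    fy = C_true * alpha + rng.normal(0, 200)     # measured lateral tire force
    est = rls.update([alpha], fy)
print("estimated cornering stiffness:", float(est[0]))
```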
2011-01-01
Background An advantage of randomised response and non-randomised models investigating sensitive issues arises from the characteristic that individual answers about discriminating behaviour cannot be linked to the individuals. This study proposed a new fuzzy response model coined 'Single Sample Count' (SSC) to estimate prevalence of discriminating or embarrassing behaviour in epidemiologic studies. Methods The SSC was tested and compared to the established Forced Response (FR) model estimating Mephedrone use. Estimations from both SSC and FR were then corroborated with qualitative hair screening data. Volunteers (n = 318, mean age = 22.69 ± 5.87, 59.1% male) in a rural area in north Wales and a metropolitan area in England completed a questionnaire containing the SSC and FR in alternating order, and four questions canvassing opinions and beliefs regarding Mephedrone. Hair samples were screened for Mephedrone using a qualitative Liquid Chromatography-Mass Spectrometry method. Results The SSC algorithm improves upon the existing item count techniques by utilizing known population distributions and embeds the sensitive question among four unrelated innocuous questions with binomial distribution. Respondents are only asked to indicate how many, without revealing which ones, are true. The two probability models yielded similar estimates, with the FR between 2.6% and 15.0%, whereas the new SSC ranged between 0% and 10%. The six positive hair samples indicated that the prevalence rate in the sample was at least 4%. The close proximity of these estimates provides evidence to support the validity of the new SSC model. Using simulations, the recommended sample sizes as a function of the statistical power and expected prevalence rate were calculated. Conclusion The main advantages of the SSC over other indirect methods are: simple administration, completion and calculation, maximum use of the data and good face validity for all respondents. Owing to the key feature that respondents are not required to answer the sensitive question directly, coupled with the absence of forced response or obvious self-protective response strategy, the SSC has the potential to cut across self-protective barriers more effectively than other estimation models. This elegantly simple, quick and effective method can be successfully employed in public health research investigating compromising behaviours. PMID:21812979
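The estimation logic of such a count design reduces to simple arithmetic: with innocuous items of known distribution, the sensitive-item prevalence is the mean reported count minus the expected innocuous count. A toy sketch, assuming four innocuous items each with a known 50% endorsement probability (the published model uses items whose counts follow a known binomial), follows:

```python
import numpy as np

def ssc_prevalence(counts, n_innocuous=4, p_innocuous=0.5):
    """Prevalence estimate for a Single-Sample-Count style design: mean
    reported count minus the expected count from the innocuous items."""
    counts = np.asarray(counts, dtype=float)
    return counts.mean() - n_innocuous * p_innocuous

# Simulated survey: 318 respondents, true prevalence 5%
rng = np.random.default_rng(11)
true_use = rng.random(318) < 0.05                # hidden sensitive status
innocuous = rng.binomial(4, 0.5, 318)            # counts from innocuous items
reported = innocuous + true_use                  # respondents report totals only
print("estimated prevalence: %.3f" % ssc_prevalence(reported))
```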
Change-in-ratio methods for estimating population size
Udevitz, Mark S.; Pollock, Kenneth H.; McCullough, Dale R.; Barrett, Reginald H.
2002-01-01
Change-in-ratio (CIR) methods can provide an effective, low cost approach for estimating the size of wildlife populations. They rely on being able to observe changes in proportions of population subclasses that result from the removal of a known number of individuals from the population. These methods were first introduced in the 1940s to estimate the size of populations with 2 subclasses under the assumption of equal subclass encounter probabilities. Over the next 40 years, closed population CIR models were developed to consider additional subclasses and use additional sampling periods. Models with assumptions about how encounter probabilities vary over time, rather than between subclasses, also received some attention. Recently, all of these CIR models have been shown to be special cases of a more general model. Under the general model, information from additional samples can be used to test assumptions about the encounter probabilities and to provide estimates of subclass sizes under relaxations of these assumptions. These developments have greatly extended the applicability of the methods. CIR methods are attractive because they do not require the marking of individuals, and subclass proportions often can be estimated with relatively simple sampling procedures. However, CIR methods require a carefully monitored removal of individuals from the population, and the estimates will be of poor quality unless the removals induce substantial changes in subclass proportions. In this paper, we review the state of the art for closed population estimation with CIR methods. Our emphasis is on the assumptions of CIR methods and on identifying situations where these methods are likely to be effective. We also identify some important areas for future CIR research.
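For the classic two-subclass case under equal encounter probabilities, the CIR estimator has a closed form; a minimal sketch with made-up harvest numbers:

```python
def cir_population_estimate(p1, p2, R, Rx):
    """Classic two-subclass change-in-ratio estimate of the pre-removal
    population size, assuming equal subclass encounter probabilities.

    p1, p2: subclass-x proportions observed before and after the removal
    R:      total number of individuals removed
    Rx:     number of subclass-x individuals removed
    """
    if abs(p1 - p2) < 1e-9:
        raise ValueError("removals did not change subclass proportions")
    return (Rx - R * p2) / (p1 - p2)

# Example: the male fraction falls from 0.40 to 0.25 after removing
# 300 animals, 240 of them male -> estimated pre-removal size 1100
print(cir_population_estimate(p1=0.40, p2=0.25, R=300, Rx=240))
```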
NASA Astrophysics Data System (ADS)
Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.
2017-08-01
A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecast and observation. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and Brue. Forecasts in retrospect were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
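The kNN resampling idea can be sketched in a few lines: find the k most similar historical forecast instances and read uncertainty bounds off their past errors. The toy example below uses a single predictor and synthetic heteroscedastic errors, whereas the paper's search space spans several hydrometeorological variables:

```python
import numpy as np

def knn_uncertainty_interval(state, hist_states, hist_errors, k=50,
                             quantiles=(0.05, 0.95)):
    """Residual-uncertainty interval for one deterministic forecast.

    state:       feature vector of current hydrometeorological conditions
    hist_states: (n, d) features of past forecast instances
    hist_errors: (n,) past errors (observation minus forecast)
    Returns error quantiles over the k nearest historical neighbours.
    """
    d = np.linalg.norm(hist_states - state, axis=1)
    nearest = np.argsort(d)[:k]
    return np.quantile(hist_errors[nearest], quantiles)

# Toy example: error spread grows with the forecasted flow magnitude
rng = np.random.default_rng(5)
flows = rng.uniform(0, 100, 2000)
errors = rng.normal(0, 0.1 * flows)              # heteroscedastic errors
lo, hi = knn_uncertainty_interval(np.array([80.0]), flows[:, None], errors)
print("90%% interval around a forecast of 80: [%+.1f, %+.1f]" % (lo, hi))
```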
Tian, Kun; Siegel, Gene; Tiwari, Ashutosh
2017-02-01
The development of simple and cost-effective methods for the detection and treatment of Hg2+ in the environment is an important area of research due to the serious health risk that Hg2+ poses to humans. Colorimetric sensing based on the induced aggregation of nanoparticles is of great interest since it offers a low cost, simple, and relatively rapid procedure, making it perfect for on-site analysis. Herein we report the development of a simple colorimetric sensor for the selective detection and estimation of mercury ions in water, based on chitosan stabilized gold nanoparticles (AuNPs) and 2,6-pyridinedicarboxylic acid (PDA). In the presence of Hg2+, PDA induces the aggregation of AuNPs, causing the solution to change colors varying from red to blue, depending on the concentration of Hg2+. The formation of aggregated AuNPs in the presence of Hg2+ was confirmed using transmission electron microscopy (TEM) and UV-Vis spectroscopy. The method exhibits linearity in the range of 300 nM to 5 μM and shows excellent selectivity towards Hg2+ among seventeen different metal ions and was successfully applied for the detection of Hg2+ in spiked river water samples. The developed technique is simple and superior to the existing techniques in that it allows detection of Hg2+ using the naked eye and simple and rapid colorimetric analysis, which eliminates the need for sophisticated instruments and sample preparation methods. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Youn, Younghan; Koo, Jeong-Seo
The complete evaluation of the side vehicle structure and occupant protection is only possible by means of a full-scale side impact crash test. However, auto part manufacturers such as door trim makers cannot conduct such tests, especially while the vehicle is still under development. The main objective of this study is to obtain design guidelines from a simple component-level impact test. The relationship between the target absorption energy and the impactor speed was examined using the energy absorbed by the door trim, since each vehicle type requires a different energy level for the door trim. A simple impact test method was developed to estimate abdominal injury by measuring the reaction force of the impactor. The reaction force is converted to an energy level by the proposed formula. The target absorption energy for the door trim alone and the impact speed of the simple impactor are derived theoretically from the conservation of energy. Using the calculated dummy speed and the effective mass of the abdomen, the energy absorbed in the abdominal area of the door trim was calculated. The impactor speed can be calculated from the equivalent energy absorbed by the door trim during the full-scale crash test. The proposed door trim design procedure based on the simple impact test method was demonstrated for evaluating abdominal injury. This paper also describes a study conducted to determine the sensitivity of several design factors for reducing abdominal injury values using an orthogonal array design matrix. In conclusion, with theoretical considerations and empirical test data, the main objective, standardization of door trim design using the simple impact test method, was achieved.
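The energy-to-speed conversion at the heart of this procedure is one line of kinematics, v = sqrt(2E/m); a sketch with hypothetical numbers (the target energy and effective mass below are placeholders, not values from the paper):

```python
import math

def impactor_speed(e_trim, m_impactor):
    """Impactor speed that delivers the door-trim target energy,
    from the kinetic energy balance E = (1/2) * m * v**2."""
    return math.sqrt(2.0 * e_trim / m_impactor)

# Hypothetical numbers: 120 J allocated to the trim, 17.3 kg effective
# abdominal mass used as the impactor mass -> about 3.7 m/s
print("required impactor speed: %.2f m/s" % impactor_speed(120.0, 17.3))
```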
NASA Astrophysics Data System (ADS)
Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd
2017-11-01
The concept of volatility, especially in the stock market, has attracted considerable attention from people engaged in the financial and economic sectors. Applications of volatility in financial economics include the valuation of option prices, estimation of financial derivatives, and hedging of investment risk. There are various ways to measure volatility. In this study, two methods are used: the simple standard deviation and the Exponentially Weighted Moving Average (EWMA). The focus of this study is to measure the volatility of three different business sectors in Malaysia, namely primary, secondary and tertiary, using both methods. The daily and annual volatilities of the different business sectors, based on stock prices for the period 1 January 2014 to December 2014, have been calculated in this study. Results show that different patterns of closing stock prices and returns give different volatility values when calculated using the simple method and the EWMA method.
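The two estimators are easy to state side by side; the sketch below annualizes a simple standard deviation and a recursive EWMA variance. The decay factor lambda = 0.94 is the common RiskMetrics choice and an assumption here, not necessarily the study's value:

```python
import numpy as np

def simple_volatility(returns, trading_days=252):
    """Annualized volatility as the plain standard deviation of returns."""
    return np.std(returns, ddof=1) * np.sqrt(trading_days)

def ewma_volatility(returns, lam=0.94, trading_days=252):
    """Annualized EWMA volatility: var_t = lam*var_{t-1} + (1-lam)*r_{t-1}^2."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return np.sqrt(var * trading_days)

rng = np.random.default_rng(9)
daily_returns = rng.normal(0.0, 0.012, 250)      # one year of simulated returns
print("simple: %.3f  EWMA: %.3f" % (simple_volatility(daily_returns),
                                    ewma_volatility(daily_returns)))
```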
NASA Astrophysics Data System (ADS)
Ma, Manyou; Rohling, Robert; Lampe, Lutz
2017-03-01
Synthetic transmit aperture beamforming is an increasingly used method to improve resolution in biomedical ultrasound imaging. Synthetic aperture sequential beamforming (SASB) is an implementation of this concept which features a relatively low computation complexity. Moreover, it can be implemented in a dual-stage architecture, where the first stage only applies simple single receive-focused delay-and-sum (srDAS) operations, while the second, more complex stage is performed either locally or remotely using more powerful processing. However, like traditional DAS-based beamforming methods, SASB is susceptible to inaccurate speed-of-sound (SOS) information. In this paper, we show how SOS estimation can be implemented using the srDAS beamformed image, and integrated into the dual-stage implementation of SASB, in an effort to obtain high resolution images with relatively low-cost hardware. Our approach builds on an existing per-channel radio frequency data-based direct estimation method, and applies an iterative refinement of the estimate. We use this estimate for SOS compensation, without the need to repeat the first stage beamforming. The proposed and previous methods are tested on both simulation and experimental studies. The accuracy of our SOS estimation method is on average 0.38% in simulation studies and 0.55% in phantom experiments, when the underlying SOS in the media is within the range 1450-1620 m/s. Using the estimated SOS, the beamforming lateral resolution of SASB is improved on average 52.6% in simulation studies and 50.0% in phantom experiments.
Can partial coherence interferometry be used to determine retinal shape?
Atchison, David A; Charman, W Neil
2011-05-01
To determine likely errors in estimating retinal shape using partial coherence interferometric instruments when no allowance is made for optical distortion. Errors were estimated using Gullstrand no. 1 schematic eye and variants which included a 10 diopter (D) axial myopic eye, an emmetropic eye with a gradient-index lens, and a 10.9 D accommodating eye with a gradient-index lens. Performance was simulated for two commercial instruments, the IOLMaster (Carl Zeiss Meditec) and the Lenstar LS 900 (Haag-Streit AG). The incident beam was directed toward either the center of curvature of the anterior cornea (corneal-direction method) or the center of the entrance pupil (pupil-direction method). Simple trigonometry was used with the corneal intercept and the incident beam angle to estimate retinal contour. Conics were fitted to the estimated contours. The pupil-direction method gave estimates of retinal contour that were much too flat. The cornea-direction method gave similar results for IOLMaster and Lenstar approaches. The steepness of the retinal contour was slightly overestimated, the exact effects varying with the refractive error, gradient index, and accommodation. These theoretical results suggest that, for field angles ≤30°, partial coherence interferometric instruments are of use in estimating retinal shape by the corneal-direction method with the assumptions of a regular retinal shape and no optical distortion. It may be possible to improve on these estimates out to larger field angles by using optical modeling to correct for distortion.
Makarov, Sergey N.; Yanamadala, Janakinadh; Piazza, Matthew W.; Helderman, Alex M.; Thang, Niang S.; Burnham, Edward H.; Pascual-Leone, Alvaro
2016-01-01
Goals Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of the present study is to test and justify a simple analytical model known previously, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. Methods We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100,000 observation points, and two distinct pulse rise times, thus providing a representative number of different data sets for comparison, while also using other numerical data. Results Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. Conclusion The simple analytical model tested in the present study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. Significance At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women. PMID:26685221
Bayes Error Rate Estimation Using Classifier Ensembles
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2003-01-01
The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two benchmark problems. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
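The first, averaging-based approach can be sketched directly: pool the ensemble's posterior estimates and average one minus the maximum class probability. The toy example below uses two overlapping unit-variance Gaussian classes, for which the true Bayes error is Phi(-1), about 0.159, so the estimate can be checked; the three perturbed "classifiers" are synthetic stand-ins:

```python
import numpy as np

def bayes_error_from_posteriors(posterior_stack):
    """Averaging-based estimate: combine each classifier's a posteriori
    class probabilities, then average 1 - max class probability.

    posterior_stack: (n_classifiers, n_samples, n_classes)
    """
    avg = posterior_stack.mean(axis=0)
    return np.mean(1.0 - avg.max(axis=1))

# Toy 2-class problem: unit-variance Gaussians centered at -1 and +1
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-1, 1, 5000), rng.normal(1, 1, 5000)])

def posterior(x, shift):
    """Exact posterior p(c2|x) = 1/(1+exp(-2x)), perturbed by a shift."""
    p1 = 1 / (1 + np.exp(-2 * (x + shift)))
    return np.stack([1 - p1, p1], axis=1)

stack = np.stack([posterior(x, s) for s in (-0.05, 0.0, 0.05)])
print("ensemble estimate: %.3f" % bayes_error_from_posteriors(stack))
# True Bayes error for this problem is Phi(-1) ~ 0.159
```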
Mirus, Benjamin B.; Perkins, Kim S.
2012-01-01
The bottomless bucket (BB) approach (Nimmo et al., 2009a) is a cost-effective method for rapidly characterizing field-saturated hydraulic conductivity Kfs of soils and alluvial deposits. This practical approach is of particular value for quantifying infiltration rates in remote areas with limited accessibility. A similar approach for bedrock outcrops is also of great value for improving quantitative understanding of infiltration and recharge in rugged terrain. We develop a simple modification to the BB method for application to bedrock outcrops, which uses a non-toxic, quick-drying silicone gel to seal the BB to the bedrock. These modifications to the field method require only minor changes to the analytical solution for calculating Kfs on soils. We investigate the reproducibility of the method with laboratory experiments on a previously studied calcarenite rock and conduct a sensitivity analysis to quantify uncertainty in our predictions. We apply the BB method on both bedrock and soil for sites on Pahute Mesa, which is located in a remote area of the Nevada National Security Site. The bedrock BB tests may require monitoring over several hours to days, depending on infiltration rates, which necessitates a cover to prevent evaporative losses. Our field and laboratory results compare well to Kfs values inferred from independent reports, which suggests the modified BB method can provide useful estimates and facilitate simple hypothesis testing. The ease with which the bedrock BB method can be deployed should facilitate more rapid in-situ data collection than is possible with alternative methods for quantitative characterization of infiltration into bedrock.
The problem of the second wind turbine - a note on a common but flawed wind power estimation method
NASA Astrophysics Data System (ADS)
Gans, F.; Miller, L. M.; Kleidon, A.
2012-06-01
Several recent wind power estimates suggest that this renewable energy resource can meet all of the current and future global energy demand with little impact on the atmosphere. These estimates are calculated using observed wind speeds in combination with specifications of wind turbine size and density to quantify the extractable wind power. However, this approach neglects the effects of momentum extraction by the turbines on the atmospheric flow that would have effects outside the turbine wake. Here we show with a simple momentum balance model of the atmospheric boundary layer that this common methodology to derive wind power potentials requires unrealistically high increases in the generation of kinetic energy by the atmosphere. This increase by an order of magnitude is needed to ensure momentum conservation in the atmospheric boundary layer. In the context of this simple model, we then compare the effect of three different assumptions regarding the boundary conditions at the top of the boundary layer, with prescribed hub height velocity, momentum transport, or kinetic energy transfer into the boundary layer. We then use simulations with an atmospheric general circulation model that explicitly simulate generation of kinetic energy with momentum conservation. These simulations show that the assumption of prescribed momentum import into the atmospheric boundary layer yields the most realistic behavior of the simple model, while the assumption of prescribed hub height velocity can clearly be disregarded. We also show that the assumptions yield similar estimates for extracted wind power when less than 10% of the kinetic energy flux in the boundary layer is extracted by the turbines. We conclude that the common method significantly overestimates wind power potentials by an order of magnitude in the limit of high wind power extraction. Ultimately, environmental constraints set the upper limit on wind power potential at larger scales rather than detailed engineering specifications of wind turbine design and placement.
Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.
Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L
2017-06-13
λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
Statistical Approaches for Spatiotemporal Prediction of Low Flows
NASA Astrophysics Data System (ADS)
Fangmann, A.; Haberlandt, U.
2017-12-01
An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion into model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables, averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. The basis for all of them is multiple linear regression models that estimate low flows as a function of a set of meteorological indices and/or physiographic and climatic catchment descriptors. For the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three is subject to a spatiotemporal prediction of an index value, method four to estimation of L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms successive prediction in time and space. Method three also shows a high performance in the near future period, but since it relies on a stationary distribution, its application for prediction of far future changes may be problematic. Spatiotemporal prediction of L-moments appeared highly uncertain for higher-order moments, resulting in unrealistic future low flow values. All in all, the results support the inclusion of simple statistical methods in climate change impact assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Robert N; White, Devin A; Urban, Marie L
2013-01-01
The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
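To give a feel for what an elicitation-to-Beta encoding looks like, the sketch below maps an expert's "most likely value" and a qualitative confidence level to Beta parameters by matching the mode and an assumed effective sample size. This is a simpler stand-in for the paper's bivariate-Gaussian construction, and the confidence-to-sample-size mapping is entirely hypothetical:

```python
def beta_prior_from_elicitation(mode, confidence):
    """Encode a non-statistical elicitation as a Beta(alpha, beta) prior.

    mode:       expert's 'most likely' population fraction, in (0, 1)
    confidence: qualitative strength mapped to an effective sample size
                (hypothetical mapping, for illustration only)
    """
    effective_n = {"low": 6, "medium": 20, "high": 60}[confidence]
    alpha = mode * (effective_n - 2) + 1         # Beta mode = (a-1)/(a+b-2)
    beta = (1 - mode) * (effective_n - 2) + 1
    return alpha, beta

# Expert: "about 30% of the population, and I'm fairly sure" -> medium
print(beta_prior_from_elicitation(0.30, "medium"))   # (6.4, 13.6)
```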
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, C.; Brizuela, H.; Heluani, S. P.
2014-05-21
The backscattering coefficient is a magnitude whose measurement is fundamental for the characterization of materials with techniques that make use of particle beams, particularly when performing microanalysis. In this work, we report the results of an analytic method to calculate the backscattering and absorption coefficients of electrons in conditions similar to those of electron probe microanalysis. Starting from a five-level ladder model of states in 3D, we deduced a set of integro-differential coupled equations for the coefficients with a method known as invariant embedding. By means of a procedure proposed by the authors, called the method of convergence, two types of approximate solutions for the set of equations, namely complete and simple solutions, can be obtained. Although the simple solutions were initially proposed as auxiliary forms to solve higher rank equations, they turned out to be also useful for the estimation of the aforementioned coefficients. In previous reports, we have presented results obtained with the complete solutions. In this paper, we present results obtained with the simple solutions of the coefficients, which exhibit a good degree of fit with the experimental data. Both the model and the calculation method presented here can be generalized to other techniques that make use of different sorts of particle beams.
Susong, D.; Marks, D.; Garen, D.
1999-01-01
Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in the mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs is often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variations to be separated. We used simulated time-series to compare linear trend estimations from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models to estimate trends from a long term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the given models because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregression and state-space models when used to analyze aggregated environmental monitoring data.
NASA Astrophysics Data System (ADS)
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed and reveal that the commonly used Hanning window leads to a smaller interpolation error, which can be further reduced by the cubic spline interpolation method when estimating the FRF from step response data, and that a window with a smaller front-end value can suppress more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic order of the transient error from O(N^-2) for the Hanning window method to O(N^-4), while only increasing the uncertainty slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high-order, lightly damped, non-minimum-phase system, is employed as the example for verifying the new dual-cosine window-based spectral estimation method. The model simulation result shows that the new dual-cosine window method is better than the Hanning window method for FRF estimation, and compared with the Gans method and the LPM method, it has the advantages of simple computation, less time consumption, and short data requirements; the actual data calculation result of the balance FRF is consistent with the simulation result. Thus, the new dual-cosine window is effective and practical for FRF estimation.
Using flow cytometry to estimate pollen DNA content: improved methodology and applications
Kron, Paul; Husband, Brian C.
2012-01-01
Background and Aims Flow cytometry has been used to measure nuclear DNA content in pollen, mostly to understand pollen development and detect unreduced gametes. Published data have not always met the high-quality standards required for some applications, in part due to difficulties inherent in the extraction of nuclei. Here we describe a simple and relatively novel method for extracting pollen nuclei, involving the bursting of pollen through a nylon mesh, compare it with other methods and demonstrate its broad applicability and utility. Methods The method was tested across 80 species, 64 genera and 33 families, and the data were evaluated using established criteria for estimating genome size and analysing cell cycle. Filter bursting was directly compared with chopping in five species, yields were compared with published values for sonicated samples, and the method was applied by comparing genome size estimates for leaf and pollen nuclei in six species. Key Results Data quality met generally applied standards for estimating genome size in 81% of species and the higher best practice standards for cell cycle analysis in 51%. In 41% of species we met the most stringent criterion of screening 10 000 pollen grains per sample. In direct comparison with two chopping techniques, our method produced better quality histograms with consistently higher nuclei yields, and yields were higher than previously published results for sonication. In three binucleate and three trinucleate species we found that pollen-based genome size estimates differed from leaf tissue estimates by 1.5% or less when 1C pollen nuclei were used, while estimates from 2C generative nuclei differed from leaf estimates by up to 2.5%. Conclusions The high success rate, ease of use and wide applicability of the filter bursting method show that this method can facilitate the use of pollen for estimating genome size and dramatically improve unreduced pollen production estimation with flow cytometry. PMID:22875815
The Lyapunov dimension and its estimation via the Leonov method
NASA Astrophysics Data System (ADS)
Kuznetsov, N. V.
2016-06-01
Along with the widely used numerical methods for estimating and computing the Lyapunov dimension, there is an effective analytical approach proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localization of the set in the phase space and, in many cases, to obtain an exact Lyapunov dimension formula effectively. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems an analog of the Leonov method is suggested. In a simple but rigorous way, the connection is presented between the Leonov method and the key related works: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
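For reference, the Kaplan-Yorke construction mentioned above defines the Lyapunov dimension from the ordered Lyapunov exponents; in standard notation:

```latex
% Kaplan-Yorke (Lyapunov) dimension: with exponents ordered
% \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n and j the largest
% index keeping the partial sum non-negative,
d_{KY} \;=\; j \;+\; \frac{\sum_{i=1}^{j} \lambda_i}{|\lambda_{j+1}|},
\qquad
\sum_{i=1}^{j} \lambda_i \;\ge\; 0 \;>\; \sum_{i=1}^{j+1} \lambda_i .
```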
Human action recognition based on kinematic similarity in real time
Chen, Longting; Luo, Ailing; Zhang, Sicong
2017-01-01
Human action recognition using 3D pose data has gained growing interest in the field of computer robotic interfaces and pattern recognition since hardware to capture human pose became available. In this paper, we propose a fast, simple, and powerful method of human action recognition based on human kinematic similarity. The key to this method is that the action descriptor consists of joint positions, angular velocities and angular accelerations, which accommodates different individual body sizes and eliminates complex normalization. The angular parameters of joints within a short sliding time window (approximately 5 frames) around the current frame are used to express each pose frame of a human action sequence. Moreover, three modified KNN (k-nearest-neighbors) classifiers are employed in our method: one for obtaining the confidence of every frame in the training step, one for estimating the frame label of each descriptor, and one for classifying actions. Additionally, estimating the frame's time label makes it possible to handle single input frames. This approach can be used on difficult, unsegmented sequences. The proposed method is efficient and can be run in real time. The research shows that many public datasets are irregularly segmented, and a simple method is provided to regularize the datasets. The approach is tested on challenging datasets such as MSR-Action3D, MSRDailyActivity3D, and UTD-MHAD. The results indicate that our method achieves higher accuracy. PMID:29073131
Leung, Kathy; Lipsitch, Marc; Yuen, Kwok Yung; Wu, Joseph T
2017-01-01
Summary Background Antivirals (e.g. oseltamivir) are important for mitigating influenza epidemics. In 2007, an oseltamivir-resistant seasonal A(H1N1) strain emerged and spread to global fixation within one year. This showed that antiviral-resistant (AVR) strains can be intrinsically more transmissible than their contemporaneous antiviral-sensitive (AVS) counterpart. Surveillance of AVR fitness is therefore essential. Methods We define the fitness of AVR strains as their reproductive number relative to their co-circulating AVS counterparts. We develop a simple method for real-time estimation of AVR fitness from surveillance data. This method requires only information on generation time without other specific details regarding transmission dynamics. We first use simulations to validate this method by showing that it yields unbiased and robust fitness estimates in most epidemic scenarios. We then apply this method to two retrospective case studies and one hypothetical case study. Findings We estimate that (i) the oseltamivir-resistant A(H1N1) strain that emerged in 2007 was 4% (3–5%) more transmissible than its oseltamivir-sensitive predecessor and (ii) the oseltamivir-resistant pandemic A(H1N1) strain that emerged and circulated in Japan during 2013–2014 was 24% (17–30%) less transmissible than its oseltamivir-sensitive counterpart. We show that in the event of large-scale antiviral interventions during a pandemic with co-circulation of AVS and AVR strains, our method can be used to inform optimal use of antivirals by monitoring intrinsic AVR fitness and drug pressure on the AVS strain. Conclusions We have developed a simple method that can be easily integrated into contemporary influenza surveillance systems to provide reliable estimates of AVR fitness in real time. Funding Research Fund for the Control of Infectious Disease (09080792) and a commissioned grant from the Health and Medical Research Fund from the Government of the Hong Kong Special Administrative Region, Harvard Center for Communicable Disease Dynamics from the National Institute of General Medical Sciences (grant no. U54 GM088558), Area of Excellence Scheme of the Hong Kong University Grants Committee (grant no. AoE/M-12/06). PMID:27914853
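The core of such a fitness estimate can be sketched from surveillance data alone: if the resistant fraction among sampled cases rises over time, the per-generation growth factor of the resistant-to-sensitive odds estimates the relative reproductive number. The sketch below is a simplified discrete version under an assumed fixed generation time, not the paper's full estimator:

```python
import math

def relative_fitness(f1, f2, dt_days, generation_days=2.6):
    """Fitness of the resistant strain relative to the sensitive strain,
    i.e. the per-generation growth factor of the resistant:sensitive odds.

    f1, f2:  resistant fractions among sampled cases at two time points
    dt_days: days elapsed between the two samples
    generation_days: assumed fixed influenza generation time
    """
    odds1, odds2 = f1 / (1 - f1), f2 / (1 - f2)
    return (odds2 / odds1) ** (generation_days / dt_days)

# Resistant fraction rising from 20% to 60% over 60 days -> roughly a
# 1.08-fold transmission advantage per generation
print("relative reproductive number: %.3f" % relative_fitness(0.20, 0.60, 60.0))
```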
A simplified analysis of the multigrid V-cycle as a fast elliptic solver
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Taasan, Shlomo
1988-01-01
For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
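As a small illustration of the local (Fourier) mode analysis that underlies such two-grid estimates, the sketch below computes the smoothing factor of weighted Jacobi for the 5-point Laplacian, i.e. the worst per-sweep damping of the high-frequency error modes that the coarse grid cannot represent. This is a standard ingredient of two-grid analysis, not the paper's k-grid method.

```python
import numpy as np

def smoothing_factor(omega, n=256):
    th = np.linspace(-np.pi, np.pi, n)
    tx, ty = np.meshgrid(th, th)
    # Fourier symbol of weighted Jacobi for the 5-point Laplacian
    symbol = 1.0 - omega * (np.sin(tx / 2) ** 2 + np.sin(ty / 2) ** 2)
    high = np.maximum(np.abs(tx), np.abs(ty)) >= np.pi / 2  # coarse-grid-invisible modes
    return np.abs(symbol[high]).max()

print(smoothing_factor(0.8))   # ~0.6 for the classical omega = 4/5
```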
An investigation into exoplanet transits and uncertainties
NASA Astrophysics Data System (ADS)
Ji, Y.; Banks, T.; Budding, E.; Rhodes, M. D.
2017-06-01
A simple transit model is described along with tests of this model against published results for 4 exoplanet systems (Kepler-1, 2, 8, and 77). Data from the Kepler mission are used. The Markov Chain Monte Carlo (MCMC) method is applied to obtain realistic error estimates. Optimisation of limb darkening coefficients is subject to data quality. It is more likely for MCMC to derive an empirical limb darkening coefficient for light curves with S/N (signal to noise) above 15. Finally, the model is applied to Kepler data for 4 Kepler candidate systems (KOI 760.01, 767.01, 802.01, and 824.01) with previously unpublished results. Error estimates for these systems are obtained via the MCMC method.
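A minimal Metropolis sketch of the MCMC error-estimation step, fitting the depth of a toy box-shaped transit to synthetic photometry. Real transit models (limb darkening, ingress/egress, and so on) are far richer; every number here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(-0.1, 0.1, 200)
in_transit = np.abs(t) < 0.03
flux = 1.0 - 0.01 * in_transit + rng.normal(0, 0.002, t.size)  # synthetic light curve

def loglike(depth, sigma=0.002):
    model = 1.0 - depth * in_transit
    return -0.5 * np.sum(((flux - model) / sigma) ** 2)

depth, chain = 0.005, []
for _ in range(20000):
    prop = depth + rng.normal(0, 5e-4)                  # random-walk proposal
    if np.log(rng.uniform()) < loglike(prop) - loglike(depth):
        depth = prop                                    # accept
    chain.append(depth)

chain = np.array(chain[5000:])                          # discard burn-in
print(f"depth = {chain.mean():.4f} +/- {chain.std():.4f}")
```

The posterior standard deviation of the chain is the "realistic error estimate" such an analysis reports.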
Beaucamp, Sylvain; Mathieu, Didier; Agafonov, Viatcheslav
2005-09-01
A method to estimate the lattice energies E(latt) of nitrate salts is put forward. First, E(latt) is approximated by its electrostatic component E(elec). Then, E(elec) is correlated with Mulliken atomic charges calculated on the species that make up the crystal, using a simple equation involving two empirical parameters. The latter are fitted against point charge estimates of E(elec) computed on available X-ray structures of nitrate crystals. The correlation thus obtained yields lattice energies within 0.5 kJ/g from point charge values. A further assessment of the method against experimental data suggests that the main source of error arises from the point charge approximation.
Condition number estimation of preconditioned matrices.
Kushida, Noriyuki
2015-01-01
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, whereas the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos-connection-based method provides the condition numbers of coefficient matrices of systems of linear equations using information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially when computation is performed on distributed-memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix, and Pei's matrix. As a result, the Lanczos connection method contains around 10% error even for a simple problem, whereas the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix and with matrices generated by the finite element method.
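A sketch of the Hager-style approach, using SciPy's onenormest (a block variant of Hager's 1-norm estimator). The point is that the preconditioned matrix M^{-1}A is only ever applied as an operator, never formed densely; the diagonal-scaling example below is illustrative, not the paper's implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, onenormest, splu

n = 2000
A = sp.diags([-1, 2.0, -1], [-1, 0, 1], shape=(n, n), format="csc")
d = A.diagonal()                                    # diagonal (Jacobi) scaling M

# M^{-1}A applied as an operator
prec = LinearOperator((n, n), matvec=lambda x: (A @ x) / d)
# (M^{-1}A)^{-1} = A^{-1}M, applied via a sparse LU factorization
lu = splu(A)
prec_inv = LinearOperator((n, n), matvec=lambda x: lu.solve(d * x))

# kappa_1(M^{-1}A) ~ ||M^{-1}A||_1 * ||(M^{-1}A)^{-1}||_1
print("kappa_1 estimate:", onenormest(prec) * onenormest(prec_inv))
```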
Using virtual environment for autonomous vehicle algorithm validation
NASA Astrophysics Data System (ADS)
Levinskis, Aleksandrs
2018-04-01
This paper describes the possible use of a modern game engine for validating and proving algorithm design concepts. As a result, a simple visual odometry algorithm is presented to demonstrate the concept and walk through all workflow stages. Some stages involve a Kalman filter that estimates optical flow velocity as well as the position of a moving camera located on the vehicle body. In particular, the Unreal Engine 4 game engine is used to generate optical flow patterns and a ground-truth path. Optical flow is determined with the Horn-Schunck method. It is shown that this method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to ground truth, depending on the optical flow pattern. The displacement RMS error is calculated between the estimated and actual positions.
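A hedged sketch of the estimation stage only: a constant-velocity Kalman filter that integrates noisy flow-derived velocity measurements into a camera position estimate. It is reduced to one dimension for brevity; the noise levels and frame rate are tuning assumptions, and the paper's setup (Horn-Schunck flow from rendered frames) is much richer.

```python
import numpy as np

dt = 1 / 30.0                                  # frame interval (assumed)
F = np.array([[1, dt], [0, 1]])                # state: [position, velocity]
H = np.array([[0.0, 1.0]])                     # optical flow observes velocity only
Q = np.diag([1e-5, 1e-3])                      # process noise (tuning assumption)
R = np.array([[0.05]])                         # flow measurement noise

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(2)
true_v = 1.0
for k in range(300):
    z = true_v + rng.normal(0, 0.2)            # synthetic flow-derived velocity
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (np.atleast_1d(z) - H @ x)     # update
    P = (np.eye(2) - K @ H) @ P

print("estimated position:", x[0], "true:", true_v * 300 * dt)
```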
Exergetic analysis of autonomous power complex for drilling rig
NASA Astrophysics Data System (ADS)
Lebedev, V. A.; Karabuta, V. S.
2017-10-01
The article considers the issue of increasing the energy efficiency of the power equipment of a drilling rig. At present, diverse types of power plants are used in power supply systems, and when designing and choosing a power plant, one of the main criteria is its energy efficiency; the main indicator in this case is the effective efficiency factor calculated by the method of thermal balances. The article suggests using the exergy method to determine energy efficiency instead, which allows the degree of thermodynamic perfection of the system to be estimated, as illustrated for a gas turbine plant: a relative estimate (the exergetic efficiency factor) and an absolute estimate. An exergetic analysis of a gas turbine plant operating in a simple scheme was carried out using the program WaterSteamPro, and exergy losses in the equipment elements are calculated.
Gaonkar, Narayan; Vaidya, R G
2016-05-01
A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and investigated theoretically the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values for the components of the blend at any two different temperatures. We observe that the density of a blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard error of estimate (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15%) obtained with the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
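A sketch of the blend-density idea: fit a linear rho(T) for each component from densities at just two temperatures, then combine by volume fraction (a Kay-type mixing rule). The input numbers are illustrative placeholders, not the paper's data.

```python
import numpy as np

def linear_density(T1, rho1, T2, rho2):
    """Linear rho(T) from densities at two temperatures (K)."""
    slope = (rho2 - rho1) / (T2 - T1)
    return lambda T: rho1 + slope * (T - T1)

rho_bio = linear_density(293.15, 0.880, 313.15, 0.866)     # biodiesel, g/cm^3
rho_diesel = linear_density(293.15, 0.835, 313.15, 0.821)  # petrodiesel, g/cm^3

def blend_density(T, vol_frac_bio):
    # volume-fraction-weighted combination of the component densities
    return vol_frac_bio * rho_bio(T) + (1 - vol_frac_bio) * rho_diesel(T)

print(blend_density(303.15, 0.20))   # e.g. a B20 blend at 30 C
```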
Data-driven outbreak forecasting with a simple nonlinear growth model.
Lega, Joceline; Brown, Heidi E
2016-12-01
Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
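A hedged sketch of the kind of property such a method can exploit: for logistic-like outbreaks, incidence plotted against cumulative cases is roughly an inverted parabola, so a quadratic fit yields order-of-magnitude estimates of final size and peak without transmission parameters. This is an illustration of the idea, not the EpiGro code.

```python
import numpy as np

rng = np.random.default_rng(3)
K, r = 10000.0, 0.2
C = [10.0]
for _ in range(80):                                   # synthetic logistic outbreak
    C.append(C[-1] + r * C[-1] * (1 - C[-1] / K))
C = np.array(C)
inc = np.diff(C) * rng.lognormal(0, 0.1, C.size - 1)  # noisy daily incidence

a, b, c = np.polyfit(C[:-1], inc, 2)                  # incidence ~ quadratic in C
roots = np.roots([a, b, c])
print("estimated final size:", roots.max())           # ~K
print("peak incidence occurs near C =", -b / (2 * a))
```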
Carbon monoxide in breath in relation to smoking and carboxyhaemoglobin levels.
Wald, N J; Idle, M; Boreham, J; Bailey, A
1981-01-01
Carboxyhaemoglobin (COHb) levels were studied in 11 249 men. The distribution among the 2613 men who smoked cigarettes was well separated from that in 6641 non-smokers (including ex-smokers). The distribution for 2005 cigar and pipe smokers was intermediate, though some of the highest COHb levels occurred in cigar smokers. Using a COHb cut-off level of 2%, 81% of cigarette smokers, 35% of cigar and pipe smokers, and 1.0% of non-smokers had raised COHb levels. In a subsidiary experiment alveolar air samples were collected from 162 smokers and 25 non-smokers using a simple breath sampling technique. Carbon monoxide concentrations in alveolar breath were highly correlated with COHb levels (r = 0.97) indicating that COHb levels can be estimated reliably by measuring the concentration of carbon monoxide in breath. Alveolar carbon monoxide measurement is thus a simple method of estimating whether a person is likely to be a smoker. PMID:7314006
NASA Astrophysics Data System (ADS)
Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn
EIT image reconstruction is an ill-posed problem: the spatial resolution of the estimated conductivity distribution is usually poor, and the external voltage measurements are subject to variable noise. Therefore, raw EIT conductivity estimates cannot be used to correctly determine the shape and size of complex-shaped regional anomalies; an efficient algorithm employing a shape-based estimation scheme is needed. The performance of traditional inverse algorithms used for this purpose, such as the Newton-Raphson method, is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of the differential evolution (DE) algorithm to estimate complex-shaped region boundaries, expressed as coefficients of a truncated Fourier series, using EIT. DE is a simple yet powerful population-based heuristic algorithm with the desired features for solving global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with those of the traditional modified Newton-Raphson (mNR) method.
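A toy sketch of the DE search: recover truncated Fourier coefficients of a closed boundary radius r(theta) by minimizing a least-squares misfit with SciPy's differential_evolution. The EIT forward model is replaced here by direct sampling of r(theta), purely so the example is self-contained.

```python
import numpy as np
from scipy.optimize import differential_evolution

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)

def radius(coef):
    a0, a1, b1, a2, b2 = coef                 # truncated Fourier series, 2 harmonics
    return (a0 + a1 * np.cos(theta) + b1 * np.sin(theta)
               + a2 * np.cos(2 * theta) + b2 * np.sin(2 * theta))

true = np.array([1.0, 0.2, -0.1, 0.05, 0.1])
data = radius(true) + np.random.default_rng(4).normal(0, 0.01, theta.size)

cost = lambda c: np.sum((radius(c) - data) ** 2)
res = differential_evolution(cost, bounds=[(-1.5, 1.5)] * 5, seed=0)
print(res.x)                                  # close to `true`
```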
Arnold, W Ray; Warren-Hicks, William J
2007-01-01
The object of this study was to estimate site- and region-specific dissolved copper criteria for a large embayment, the Chesapeake Bay, USA. The intent is to show the utility of 2 copper saltwater quality site-specific criteria estimation models and associated region-specific criteria selection methods. The criteria estimation models and selection methods are simple, efficient, and cost-effective tools for resource managers. The methods are proposed as potential substitutes for the US Environmental Protection Agency's water effect ratio methods. Dissolved organic carbon data and the copper criteria models were used to produce probability-based estimates of site-specific copper saltwater quality criteria. Site- and date-specific criteria estimations were made for 88 sites (n = 5,296) in the Chesapeake Bay. The average and range of estimated site-specific chronic dissolved copper criteria for the Chesapeake Bay were 7.5 and 5.3 to 16.9 microg Cu/L. The average and range of estimated site-specific acute dissolved copper criteria for the Chesapeake Bay were 11.7 and 8.3 to 26.4 microg Cu/L. The results suggest that applicable national and state copper criteria can increase in much of the Chesapeake Bay and remain protective. Virginia Department of Environmental Quality copper criteria near the mouth of the Chesapeake Bay, however, need to decrease to protect species of equal or greater sensitivity to that of the marine mussel, Mytilus sp.
A Simple Method for Principal Strata Effects When the Outcome Has Been Truncated Due to Death
Chiba, Yasutaka; VanderWeele, Tyler J.
2011-01-01
In randomized trials with follow-up, outcomes such as quality of life may be undefined for individuals who die before the follow-up is complete. In such settings, restricting analysis to those who survive can give rise to biased outcome comparisons. An alternative approach is to consider the “principal strata effect” or “survivor average causal effect” (SACE), defined as the effect of treatment on the outcome among the subpopulation that would have survived under either treatment arm. The authors describe a very simple technique that can be used to assess the SACE. They give both a sensitivity analysis technique and conditions under which a crude comparison provides a conservative estimate of the SACE. The method is illustrated using data from the ARDSnet (Acute Respiratory Distress Syndrome Network) clinical trial comparing low-volume ventilation and traditional ventilation methods for individuals with acute respiratory distress syndrome. PMID:21354986
Simple and universal model for electron-impact ionization of complex biomolecules
NASA Astrophysics Data System (ADS)
Tan, Hong Qi; Mi, Zhaohong; Bettiol, Andrew A.
2018-03-01
We present a simple and universal approach to calculate the total ionization cross section (TICS) for electron impact ionization in DNA bases and other biomaterials in the condensed phase. Evaluating the electron impact TICS plays a vital role in ion-beam radiobiology simulation at the cellular level, as secondary electrons are the main cause of DNA damage in particle cancer therapy. Our method is based on extending the dielectric formalism. The calculated results agree well with experimental data and show a good comparison with other theoretical calculations. This method only requires information of the chemical composition and density and an estimate of the mean binding energy to produce reasonably accurate TICS of complex biomolecules. Because of its simplicity and great predictive effectiveness, this method could be helpful in situations where the experimental TICS data are absent or scarce, such as in particle cancer therapy.
An Overview of Longitudinal Data Analysis Methods for Neurological Research
Locascio, Joseph J.; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
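A minimal sketch of the generally recommended approach, a mixed-effects regression with a random intercept and slope per subject, via statsmodels. The data are synthetic and the column names are assumptions about the layout.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for subj in range(30):                           # synthetic longitudinal data
    intercept = rng.normal(50, 5)                # subject-specific baseline
    slope = rng.normal(-1.0, 0.3)                # subject-specific decline
    for time in range(6):
        rows.append((subj, time, intercept + slope * time + rng.normal(0, 2)))
df = pd.DataFrame(rows, columns=["subject", "time", "score"])

# fixed effect of time, random intercept + slope per subject
fit = smf.mixedlm("score ~ time", df, groups=df["subject"], re_formula="~time").fit()
print(fit.summary())
```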
Gradient shadow pattern reveals refractive index of liquid
Kim, Wonkyoung; Kim, Dong Sung
2016-01-01
We propose a simple method that uses a gradient shadow pattern (GSP) to measure the refractive index nL of liquids. A light source generates a “dark-bright-dark” GSP when it is projected through the back of a transparent, rectangular block with a cylindrical chamber that is filled with a liquid sample. We found that there is a linear relationship between nL and the proportion of the bright region in a GSP, which provides the basic principle of the proposed method. A wide range 1.33 ≤ nL ≤ 1.46 of liquids was measured in a single measurement setup with error <0.01. The proposed method is simple but robust to illuminating conditions, and does not require any expensive or precise optical components, so we expect that it will be useful in many portable measurement systems that use nL to estimate attributes of liquid samples. PMID:27302603
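The linear nL-versus-bright-fraction relationship implies a simple calibration, sketched below: fit a line on reference liquids, then invert it for unknowns. The reference values are made up for illustration.

```python
import numpy as np

bright_frac = np.array([0.12, 0.35, 0.58, 0.80])   # measured bright-region proportions
n_ref = np.array([1.33, 1.37, 1.42, 1.46])         # known refractive indices

a, b = np.polyfit(bright_frac, n_ref, 1)           # linear calibration
estimate_nL = lambda p: a * p + b
print(estimate_nL(0.47))                           # unknown sample
```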
Ulrich, G.A.; Krumholz, L.R.; Suflita, J.M.
1997-01-01
A simplified passive extraction procedure for quantifying reduced inorganic sulfur compounds from sediments and water is presented. This method may also be used for the estimation of sulfate reduction rates. Efficient extraction of FeS, FeS(inf2), and S(sup2-) was obtained with this procedure; however, the efficiency for S(sup0) depended on the form that was tested. Passive extraction can be used with samples containing up to 20 mg of reduced sulfur. We demonstrated the utility of this technique in a determination of both sulfate reduction rates and reduced inorganic sulfur pools in marine and freshwater sediments. A side-by-side comparison of the passive extraction method with the established single-step distillation technique yielded comparable results with a fraction of the effort.
Improved methods of estimating critical indices via fractional calculus
NASA Astrophysics Data System (ADS)
Bandyopadhyay, S. K.; Bhattacharyya, K.
2002-05-01
Efficiencies of certain methods for the determination of critical indices from power-series expansions are shown to be considerably improved by a suitable implementation of fractional differentiation. In the context of the ratio method (RM), kinship of the modified strategy with the ad hoc `shifted' RM is established and the advantages are demonstrated. Further, in the course of the estimation of critical points, significant betterment of convergence properties of diagonal Padé approximants is observed on several occasions by invoking this concept. Test calculations are performed on (i) various Ising spin-1/2 lattice models for susceptibility series attended with a ferromagnetic phase transition, (ii) complex model situations involving confluent and antiferromagnetic singularities and (iii) the chain-generating functions for self-avoiding walks on triangular, square and simple cubic lattices.
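For orientation, here is the plain ratio method that such refinements build on: for a series whose coefficients behave like a_n ~ x_c^{-n} n^{gamma-1}, consecutive ratios obey r_n = (1/x_c)(1 + (gamma-1)/n), so a linear fit of r_n against 1/n yields both the critical point and the exponent. The example uses exact coefficients of a pure power-law singularity; real susceptibility series are, of course, messier.

```python
import numpy as np
from scipy.special import gammaln

xc, gamma_exp = 0.5, 1.75
n = np.arange(1, 40)
# exact coefficients of (1 - x/xc)^(-gamma): a_n = binom(n+gamma-1, n) / xc^n
log_a = gammaln(n + gamma_exp) - gammaln(gamma_exp) - gammaln(n + 1) - n * np.log(xc)
r = np.exp(log_a[1:] - log_a[:-1])              # ratios a_n / a_{n-1}

slope, intercept = np.polyfit(1.0 / n[1:], r, 1)
print("x_c estimate:", 1 / intercept)           # ~0.5
print("gamma estimate:", slope / intercept + 1) # ~1.75
```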
Yu, Jihnhee; Yang, Luge; Vexler, Albert; Hutson, Alan D
2016-06-15
The receiver operating characteristic (ROC) curve is a popular technique with applications such as investigating the accuracy of a biomarker in delineating between disease and non-disease groups. A common measure of the accuracy of a given diagnostic marker is the area under the ROC curve (AUC). In contrast with the AUC, the partial area under the ROC curve (pAUC) looks only at the area with certain specificities (i.e., true negative rates), and it can often be clinically more relevant than examining the entire ROC curve. The pAUC is commonly estimated based on a U-statistic with a plug-in sample quantile, making the estimator a non-traditional U-statistic. In this article, we propose an accurate and easy method to obtain the variance of the nonparametric pAUC estimator. The proposed method is easy to implement for both a single biomarker test and the comparison of two correlated biomarkers, because it simply adapts the existing variance estimator of U-statistics. We show the accuracy and other advantages of the proposed variance estimation method by broadly comparing it with previously existing methods. Further, we develop an empirical likelihood inference method based on the proposed variance estimator through a simple implementation. In an application, we demonstrate that, depending on whether inference is based on the AUC or the pAUC, we can reach different decisions about the prognostic ability of the same set of biomarkers. Copyright © 2016 John Wiley & Sons, Ltd.
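A sketch of the point estimate in question: a nonparametric pAUC obtained by integrating the empirical ROC curve only up to a clinically relevant false-positive rate (FPR <= 0.2 below). The data are synthetic; the paper's contribution is the variance of this estimator, which the sketch does not reproduce.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(6)
y = np.r_[np.zeros(200), np.ones(200)]
scores = np.r_[rng.normal(0, 1, 200), rng.normal(1, 1, 200)]

fpr, tpr, _ = roc_curve(y, scores)
lim = 0.2
mask = fpr <= lim
tpr_lim = np.interp(lim, fpr, tpr)          # interpolated ROC point at the limit
pauc = np.trapz(np.r_[tpr[mask], tpr_lim], np.r_[fpr[mask], lim])
print("pAUC over FPR in [0, 0.2]:", pauc)
```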
Duell, Lowell F. W.
1990-01-01
In Owens Valley, evapotranspiration (ET) is one of the largest components of outflow in the hydrologic budget and the least understood. ET estimates for December 1983 through October 1985 were made for seven representative locations selected on the basis of geohydrology and the characteristics of phreatophytic alkaline scrub and meadow communities. The Bowen-ratio, eddy-correlation, and Penman-combination methods were used to estimate ET. The results of the analyses appear satisfactory when compared with other estimates of ET. Results by the eddy-correlation method are for a direct and a residual latent-heat flux that is based on sensible-heat flux and energy-budget measurements. Penman-combination potential-ET estimates were determined to be unusable because they overestimated actual ET. Modification of the psychrometer constant of this method to account for differences between heat-diffusion resistance and vapor-diffusion resistance permitted actual ET to be estimated. The methods described in this report may be used for studies in similar semiarid and arid rangeland areas in the Western United States. Meteorological data for three field sites are included in the appendix of this report. Simple linear regression analysis indicates that ET estimates are correlated to air temperature, vapor-density deficit, and net radiation. Estimates of annual ET range from 301 millimeters at a low-density scrub site to 1,137 millimeters at a high-density meadow site. The monthly percentage of annual ET was determined to be similar for all sites studied.
Convex Banding of the Covariance Matrix
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189
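For contrast with the convex, data-adaptive estimator described above, here is the simplest fixed-bandwidth banding it refines: zero out sample-covariance entries more than k bands from the diagonal. The convex method instead learns a tapered band; this sketch only shows the baseline operation.

```python
import numpy as np

def band_covariance(X, k):
    """Hard-banded sample covariance: keep entries within k of the diagonal."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= k
    return S * mask

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 20))
print(band_covariance(X, 3)[0, :6])   # entries beyond band 3 are zeroed
```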
Stamey, Timothy C.
1998-01-01
Simple and reliable methods for estimating hourly streamflow are needed for the calibration and verification of a Chattahoochee River basin model between Buford Dam and Franklin, Ga. The river basin model is being developed by Georgia Department of Natural Resources, Environmental Protection Division, as part of their Chattahoochee River Modeling Project. Concurrent streamflow data collected at 19 continuous-record, and 31 partial-record streamflow stations, were used in ordinary least-squares linear regression analyses to define estimating equations, and in verifying drainage-area prorations. The resulting regression or drainage-area ratio estimating equations were used to compute hourly streamflow at the partial-record stations. The coefficients of determination (r-squared values) for the regression estimating equations ranged from 0.90 to 0.99. Observed and estimated hourly and daily streamflow data were computed for May 1, 1995, through October 31, 1995. Comparisons of observed and estimated daily streamflow data for 12 continuous-record tributary stations, that had available streamflow data for all or part of the period from May 1, 1995, to October 31, 1995, indicate that the mean error of estimate for the daily streamflow was about 25 percent.
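A sketch of the two estimating approaches named above: an ordinary least-squares regression of a partial-record station on a nearby index station, and the simple drainage-area ratio used where regression is not defined. All flow and area values are illustrative, not from the report.

```python
import numpy as np

index_q = np.array([120.0, 95.0, 210.0, 60.0, 150.0])   # concurrent flows, index station (cfs)
partial_q = np.array([33.0, 26.0, 55.0, 18.0, 41.0])    # concurrent flows, partial-record station

b, a = np.polyfit(index_q, partial_q, 1)                # OLS estimating equation
print("regression estimate at 180 cfs:", a + b * 180.0)

area_ratio = 85.0 / 320.0                               # ungauged / index drainage areas (mi^2)
print("drainage-area proration at 180 cfs:", 180.0 * area_ratio)
```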
Uncertainty in Estimates of Net Seasonal Snow Accumulation on Glaciers from In Situ Measurements
NASA Astrophysics Data System (ADS)
Pulwicki, A.; Flowers, G. E.; Radic, V.
2017-12-01
Accurately estimating the net seasonal snow accumulation (or "winter balance") on glaciers is central to assessing glacier health and predicting glacier runoff. However, measuring and modeling snow distribution is inherently difficult in mountainous terrain, resulting in high uncertainties in estimates of winter balance. Our work focuses on uncertainty attribution within the process of converting direct measurements of snow depth and density to estimates of winter balance. We collected more than 9000 direct measurements of snow depth across three glaciers in the St. Elias Mountains, Yukon, Canada in May 2016. Linear regression (LR) and simple kriging (SK), combined with cross correlation and Bayesian model averaging, are used to interpolate estimates of snow water equivalent (SWE) from snow depth and density measurements. Snow distribution patterns are found to differ considerably between glaciers, highlighting strong inter- and intra-basin variability. Elevation is found to be the dominant control of the spatial distribution of SWE, but the relationship varies considerably between glaciers. A simple parameterization of wind redistribution is also a small but statistically significant predictor of SWE. The SWE estimated for one study glacier has a short range parameter (~90 m) and both LR and SK estimate a winter balance of 0.6 m w.e. but are poor predictors of SWE at measurement locations. The other two glaciers have longer SWE range parameters (~450 m) and due to differences in extrapolation, SK estimates are more than 0.1 m w.e. (up to 40%) lower than LR estimates. By using a Monte Carlo method to quantify the effects of various sources of uncertainty, we find that the interpolation of estimated values of SWE is a larger source of uncertainty than the assignment of snow density or than the representation of the SWE value within a terrain model grid cell. For our study glaciers, the total winter balance uncertainty ranges from 0.03 (8%) to 0.15 (54%) m w.e. depending primarily on the interpolation method. Despite the challenges associated with accurately and precisely estimating winter balance, our results are consistent with the previously reported regional accumulation gradient.
Assessing the performance of dynamical trajectory estimates
NASA Astrophysics Data System (ADS)
Bröcker, Jochen
2014-06-01
Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists for example refer to this problem as data assimilation. Unlike as in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.
Petit and grand ensemble Monte Carlo calculations of the thermodynamics of the lattice gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murch, G.E.; Thorn, R.J.
1978-11-01
A direct Monte Carlo method for estimating the chemical potential in the petit canonical ensemble was applied to the simple cubic Ising-like lattice gas. The method is based on a simple relationship between the chemical potential and the potential energy distribution in a lattice gas at equilibrium, as derived independently by Widom, and Jackson and Klein. Results are presented here for the chemical potential at various compositions and temperatures above and below the zero field ferromagnetic and antiferromagnetic critical points. The same lattice gas model was reconstructed in the form of a restricted grand canonical ensemble and results at several temperatures were compared with those from the petit canonical ensemble. The agreement was excellent in these cases.
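A sketch of the Widom-type test-particle relation involved: in a lattice gas of N particles on M sites, beta*mu = -ln[(M/(N+1)) <(1 - n_s) exp(-beta dU_s)>], sampled by test insertions at random sites s of equilibrated configurations. To keep the sketch short, the configuration below is simply random rather than Metropolis-equilibrated, so the printed value is only formally illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
L, rho, beta, eps = 10, 0.3, 1.0, -1.0         # lattice side, density, 1/kT, NN energy
M = L ** 3
occ = (rng.random((L, L, L)) < rho).astype(int)
N = occ.sum()

def neighbors_occupied(c, s):
    x, y, z = s
    return sum(c[(x + dx) % L, (y + dy) % L, (z + dz) % L]
               for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)])

samples = []
for _ in range(20000):
    s = tuple(rng.integers(0, L, 3))           # random test site
    if occ[s]:
        samples.append(0.0)                    # the (1 - n_s) factor
    else:
        dU = eps * neighbors_occupied(occ, s)  # insertion energy
        samples.append(np.exp(-beta * dU))

mu = -np.log((M / (N + 1)) * np.mean(samples)) / beta
print("estimated chemical potential:", mu)
```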
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or with more specific assumptions are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
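A sketch of the two-component normal mixture on z-scores. For brevity the null component is fixed at N(0, 1) (the "theoretical null"); the paper's method also estimates the null component's parameters. EM then yields the null proportion pi0 and each gene's posterior probability of being null.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
z = np.r_[rng.normal(0, 1, 9000), rng.normal(2.5, 1, 1000)]  # synthetic z-scores

pi0, mu1, sd1 = 0.9, 1.0, 1.0
for _ in range(200):                                  # EM iterations
    f0, f1 = norm.pdf(z, 0, 1), norm.pdf(z, mu1, sd1)
    tau = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)      # E-step: P(null | z)
    pi0 = tau.mean()                                  # M-step
    w = 1 - tau
    mu1 = np.sum(w * z) / w.sum()
    sd1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / w.sum())

post = lambda zz: (pi0 * norm.pdf(zz, 0, 1)
                   / (pi0 * norm.pdf(zz, 0, 1) + (1 - pi0) * norm.pdf(zz, mu1, sd1)))
print(f"pi0 ~ {pi0:.3f}; posterior P(null | z=3) ~ {post(3.0):.3f}")
```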
Kumar, Puspendra; Jha, Shivesh; Naved, Tanveer
2013-01-01
A validated, modified lycopodium spore method has been developed for simple and rapid quantification of herbal powdered drugs. The lycopodium spore method was performed on the ingredients of Shatavaryadi churna, an ayurvedic formulation used as an immunomodulator, galactagogue, aphrodisiac, and rejuvenator. The diagnostic characters of each ingredient of Shatavaryadi churna were estimated individually. Microscopic determination, counting of identifying numbers, and measurement of the area, length, and breadth of identifying characters were performed using a Leica DMLS-2 microscope. The method was validated for intraday precision, linearity, specificity, repeatability, accuracy, and system suitability. The method is simple, precise, sensitive, and accurate, and can be used for routine standardization of raw materials of herbal drugs. It gives the ratio of individual ingredients in the powdered drug, so that any adulteration of a genuine drug with its adulterant can be detected. The method shows very good linearity (0.988-0.999) for both the number and the area of identifying characters. The percentage purity of a sample drug can be determined using the linear equation of the standard genuine drug.
Bayesian Factor Analysis When Only a Sample Covariance Matrix Is Available
ERIC Educational Resources Information Center
Hayashi, Kentaro; Arav, Marina
2006-01-01
In traditional factor analysis, the variance-covariance matrix or the correlation matrix has often been the form in which data are input. In contrast, in Bayesian factor analysis, the entire data set is typically required to compute the posterior estimates, such as Bayes factor loadings and Bayes unique variances. We propose a simple method for computing…
QUEST - A Bayesian adaptive psychometric method
NASA Technical Reports Server (NTRS)
Watson, A. B.; Pelli, D. G.
1983-01-01
An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
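A minimal QUEST-flavored loop, as a sketch rather than the published procedure: maintain a posterior over log threshold on a grid, place each trial at the current posterior mode, and update with a fixed-shape psychometric function. The Weibull parameters and grid are illustrative assumptions.

```python
import numpy as np

grid = np.linspace(-2, 2, 401)                  # candidate log10 thresholds
posterior = np.ones_like(grid) / grid.size      # flat prior

def p_correct(log_x, log_T, beta=3.5, gamma=0.5, delta=0.01):
    # Weibull psychometric function of log intensity; gamma = guessing rate
    p = 1 - (1 - gamma) * np.exp(-10 ** (beta * (log_x - log_T)))
    return np.clip(p, delta, 1 - delta)         # clip for lapses

rng = np.random.default_rng(10)
true_log_T = 0.3
for trial in range(64):
    log_x = grid[posterior.argmax()]            # test at the posterior mode
    correct = rng.random() < p_correct(log_x, true_log_T)
    like = p_correct(log_x, grid) if correct else 1 - p_correct(log_x, grid)
    posterior *= like                           # Bayesian update
    posterior /= posterior.sum()

print("threshold estimate:", grid[posterior.argmax()], "true:", true_log_T)
```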
Capacitance Sensor For Nondestructive Determination Of Total Oil Content In Nuts
USDA-ARS?s Scientific Manuscript database
Earlier a simple, low cost instrument was designed and assembled in our laboratory that could estimate the moisture content (MC) of in-shell peanuts (MC range 9% to 20%) and yellow-dent field corn (MC range 7% to 18%). In this method a sample of in-shell peanuts or corn was placed between a set...
Estimation and applications of size-based distributions in forestry
Jeffrey H. Gove
2003-01-01
Size-based distributions arise in several contexts in forestry and ecology. Simple power relationships (e.g., basal area and diameter at breast height) between variables are one such area of interest arising from a modeling perspective. Another, probability proportional to size (PPS) sampling, is found in the most widely used methods for sampling standing or dead and...
USDA-ARS?s Scientific Manuscript database
Soil moisture monitoring with in situ technology is a time consuming and costly endeavor for which a method of increasing the resolution of spatial estimates across in situ networks is necessary. Using a simple hydrologic model, the resolution of an in situ watershed network can be increased beyond...
NASA Technical Reports Server (NTRS)
Dimmick, R. L.; Boyd, A.; Wolochow, H.
1975-01-01
Aerosols of KBr and AgNO3 were mixed, exposed to light in a glass tube and collected in the dark. About 15% of the collected material was reduced to silver upon development. Thus, two aerosols of particles that react to form a photo-reducible compound can be used to measure coagulation efficiency.
ERIC Educational Resources Information Center
Warkentien, Siri; Silver, David
2016-01-01
Public schools with impressive records of serving lower-performing students are often overlooked because their average test scores, even when students are growing quickly, are lower than scores in schools that serve higher-performing students. Schools may appear to be doing poorly either because baseline achievement is not easily accounted for or…
Estimation and applications of size-biased distributions in forestry
Jeffrey H. Gove
2003-01-01
Size-biased distributions arise naturally in several contexts in forestry and ecology. Simple power relationships (e.g. basal area and diameter at breast height) between variables are one such area of interest arising from a modelling perspective. Another, probability proportional to size (PPS) sampling, is found in the most widely used methods for sampling standing or...
Kurt Johnsen; Bob Teskey; Lisa Samuelson; John Butnor; David Sampson; Felipe Sanchez; Chris Maier; Steve McKeand
2004-01-01
Globally, the species most widely used for plantation forestry is loblolly pine (Pinus taeda L.). Because loblolly pine plantations are so extensive and grow so rapidly, they provide a great potential for sequestering atmospheric carbon (C). Because loblolly pine plantations are relatively simple ecosystems and because such a great volume of...
Laszlo, I.
1963-01-01
Several methods for removing interfering nucleotides, adenosine-5'-monophosphate and adenosine 5'-triphosphate from brain extracts have been studied. An enzymic method, using adenylic acid deaminase, has been found suitable. This deaminates adenosine monophosphate to 5'-inosinic acid, an inactive compound which does not influence the estimations of substance P. Owing to the adenosine triphosphatase content of the enzyme extract, adenosine triphosphate was also inactivated. For the estimation of adenosine monophosphate-deaminase activity, a simple colorimetric method is described which measures the ammonia liberated from adenosine monophosphate. Substance P in mouse brain extracts was estimated after treatment of the animals with various drugs, and after the enzymic removal of interfering nucleotides from the brain extracts. The drugs had no effect on the substance P content of mouse brain. The effect of drugs on the contractions of the guinea-pig ileum induced by substance P was also investigated, and the effect of drugs on the estimations of substance P in brain extracts is discussed. PMID:14066136
Robust mislabel logistic regression without modeling mislabel probabilities.
Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun
2018-03-01
Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses, and fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) it does not need to model the mislabel probabilities; (2) the minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
Kalman-variant estimators for state of charge in lithium-sulfur batteries
NASA Astrophysics Data System (ADS)
Propp, Karsten; Auger, Daniel J.; Fotouhi, Abbas; Longo, Stefano; Knap, Vaclav
2017-03-01
Lithium-sulfur batteries are now commercially available, offering high specific energy density, low production costs and high safety. However, there is no commercially-available battery management system for them, and there are no published methods for determining state of charge in situ. This paper describes a study to address this gap. The properties and behaviours of lithium-sulfur are briefly introduced, and the applicability of 'standard' lithium-ion state-of-charge estimation methods is explored. Open-circuit voltage methods and 'Coulomb counting' are found to have a poor fit for lithium-sulfur, and model-based methods, particularly recursive Bayesian filters, are identified as showing strong promise. Three recursive Bayesian filters are implemented: an extended Kalman filter (EKF), an unscented Kalman filter (UKF) and a particle filter (PF). These estimators are tested through practical experimentation, considering both a pulse-discharge test and a test based on the New European Driving Cycle (NEDC). Experimentation is carried out at a constant temperature, mirroring the environment expected in the authors' target automotive application. It is shown that the estimators, which are based on a relatively simple equivalent-circuit-network model, can deliver useful results. Of the three estimators implemented, the unscented Kalman filter gives the most robust and accurate performance, with an acceptable computational effort.
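A hedged sketch of the model-based idea with an EKF and a one-RC equivalent-circuit model. The OCV curve, RC values, and noise levels are placeholders (Li-S OCV curves are notoriously flat, which is exactly what makes this problem hard), so this illustrates the filter structure, not the paper's tuned implementation.

```python
import numpy as np

dt, Q_cap = 1.0, 3.4 * 3600                     # time step (s), capacity (C)
R0, R1, C1 = 0.05, 0.03, 2000.0                 # ECM parameters (assumed)
ocv = lambda s: 2.05 + 0.25 * s                 # crude linear OCV(SoC) placeholder
docv = 0.25                                     # its constant slope

x = np.array([0.9, 0.0])                        # state: [SoC, RC-branch voltage]
P = np.diag([0.01, 1e-4])
Qn, Rn = np.diag([1e-8, 1e-6]), 1e-4            # process / measurement noise

rng = np.random.default_rng(11)
soc_true, v1_true = 0.9, 0.0
for k in range(3600):
    i = 1.0                                     # discharge current (A)
    # simulate the "true" cell
    soc_true -= i * dt / Q_cap
    v1_true += dt * (-v1_true / (R1 * C1) + i / C1)
    v_meas = ocv(soc_true) - v1_true - R0 * i + rng.normal(0, 0.01)
    # EKF predict
    F = np.array([[1, 0], [0, 1 - dt / (R1 * C1)]])
    x = np.array([x[0] - i * dt / Q_cap, x[1] * (1 - dt / (R1 * C1)) + i * dt / C1])
    P = F @ P @ F.T + Qn
    # EKF update with terminal voltage v = OCV(SoC) - v1 - R0*i
    H = np.array([docv, -1.0])
    y = v_meas - (ocv(x[0]) - x[1] - R0 * i)
    S = H @ P @ H + Rn
    K = P @ H / S
    x = x + K * y
    P = P - np.outer(K, H @ P)

print(f"SoC estimate {x[0]:.3f}, true {soc_true:.3f}")
```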
Estimation of the simple correlation coefficient.
Shieh, Gwowen
2010-11-01
This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson's r is biased, except in limited situations, and a minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in practical applications because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating the recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.
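A sketch contrasting the plain sample r with the approximate Olkin-Pratt correction r* = r (1 + (1 - r^2) / (2(n - 3))), one of the "prominent formulas" in this literature; for small n the bias reduction is visible, for large n it is negligible.

```python
import numpy as np

rng = np.random.default_rng(12)
n, rho = 15, 0.5
cov = [[1, rho], [rho, 1]]
rs = []
for _ in range(20000):                           # Monte Carlo over samples of size n
    x, y = rng.multivariate_normal([0, 0], cov, n).T
    rs.append(np.corrcoef(x, y)[0, 1])
rs = np.array(rs)
r_star = rs * (1 + (1 - rs ** 2) / (2 * (n - 3)))  # approximate Olkin-Pratt
print("mean r:", rs.mean(), " mean r*:", r_star.mean(), " true:", rho)
```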
2014-07-25
composition of simple temporal structures to a speaker diarization task with the goal of segmenting conference audio in the presence of an unknown number of... application domains including neuroimaging, diverse document selection, speaker diarization, stock modeling, and target tracking. We detail each of... recall performance than competing methods in a task of discovering articles preferred by the user
• a gold-standard speaker diarization method, as
NASA Astrophysics Data System (ADS)
Kolesik, Miroslav; Suzuki, Masuo
1995-02-01
The antiferromagnetic three-state Potts model on the simple-cubic lattice is studied using the coherent-anomaly method (CAM). The CAM analysis provides the estimates for the critical exponents which indicate the XY universality class, namely α = -0.011, β = 0.351, γ = 1.309 and δ = 4.73. This observation corroborates the results of the recent Monte Carlo simulations, and disagrees with the proposal of a new universality class.
A pilot study of a simple screening technique for estimation of salivary flow.
Kanehira, Takashi; Yamaguchi, Tomotaka; Takehara, Junji; Kashiwazaki, Haruhiko; Abe, Takae; Morita, Manabu; Asano, Kouzo; Fujii, Yoshinori; Sakamoto, Wataru
2009-09-01
The purpose of this study was to develop a simple screening technique for estimation of salivary flow and to test the usefulness of the method for detecting decreased salivary flow. A novel assay system was designed, comprising 3 spots containing 30 microg starch and 49.6 microg potassium iodide per spot on filter paper plus a coloring reagent, based on the iodine-starch color reaction and the theory of paper chromatography. We investigated the relationship between resting whole salivary flow rates and the number of colored spots on the filter produced by 41 hospitalized subjects. A significant negative correlation was observed between the number of colored spots and the resting salivary flow rate (n = 41; r = -0.803; P < .01). For all subjects complaining of decreased salivary flow (n = 9), whose salivary flow rates fell below a cutoff of 100 microL/min, 3 colored spots appeared on the paper, whereas for healthy subjects there was at most 1 colored spot. This novel assay system may be effective for estimating salivary flow not only in healthy people but also in bedridden and disabled elderly people.
On estimating scale invariance in stratocumulus cloud fields
NASA Technical Reports Server (NTRS)
Seze, Genevieve; Smith, Leonard A.
1990-01-01
Examination of cloud radiance fields derived from satellite observations sometimes indicates the existence of a range of scales over which the statistics of the field are scale invariant. Many methods have been developed in geophysics to quantify this scaling behavior. The usefulness of such techniques depends both on the physics of the process being robust over a wide range of scales and on the availability of high-resolution, low-noise observations over these scales. These techniques (the area-perimeter relation, the distribution of areas, estimation of the capacity d0 through box counting, and the correlation exponent) are applied to the high-resolution satellite data taken during the FIRE experiment, and initial estimates of the data quality required are obtained by analyzing simple sets. The results for the observed fields are contrasted with those for images of objects with known characteristics (e.g., dimension), where the details of the constructed images simulate current observational limits. Throughout, when cloud elements and cloud boundaries are mentioned, these should be understood as structures in the radiance field; all boundaries considered are defined by simple threshold arguments.
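A sketch of the box-counting estimate of the capacity d0 applied to a binary cloud mask obtained by a simple radiance threshold, as in the text. The random field stands in for a satellite scene purely so the example runs.

```python
import numpy as np

def box_count_dimension(mask):
    """Capacity (box-counting) dimension of a square binary mask, side = 2^k."""
    n = mask.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        view = mask.reshape(n // s, s, n // s, s)
        counts.append(view.any(axis=(1, 3)).sum())  # occupied boxes of side s
        sizes.append(s)
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(13)
radiance = rng.random((256, 256))            # stand-in for a satellite scene
print(box_count_dimension(radiance > 0.7))   # ~2 for space-filling noise
```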
Non-Destructive Lichen Biomass Estimation in Northwestern Alaska: A Comparison of Methods
Rosso, Abbey; Neitlich, Peter; Smith, Robert J.
2014-01-01
Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa “community” samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m−2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska. PMID:25079228
Why preferring parametric forecasting to nonparametric methods?
Jabot, Franck
2015-05-07
A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It is argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed with simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts of unknown reliability. This argumentation is illustrated with the simple theta-logistic model previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.
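A sketch of the parametric workflow advocated here, on the theta-logistic model: fit by maximum likelihood, then generate virtual trajectories from the fitted model to quantify forecast uncertainty. The parameter values are arbitrary and the likelihood is a simple log-normal one-step form, assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(r, K, theta, sigma, n0, T, rng):
    n = [n0]
    for _ in range(T):   # theta-logistic with log-normal process noise
        n.append(n[-1] * np.exp(r * (1 - (n[-1] / K) ** theta) + rng.normal(0, sigma)))
    return np.array(n)

rng = np.random.default_rng(14)
obs = simulate(0.5, 100.0, 1.2, 0.05, 10.0, 50, rng)        # synthetic "data"

def neg_loglik(p):
    r, K, theta, sigma = p
    pred = obs[:-1] * np.exp(r * (1 - (obs[:-1] / K) ** theta))
    resid = np.log(obs[1:] / pred)
    return 0.5 * np.sum(resid ** 2 / sigma ** 2) + obs[1:].size * np.log(sigma)

fit = minimize(neg_loglik, [0.3, 80.0, 1.0, 0.1],
               bounds=[(0.01, 2), (10, 500), (0.1, 5), (0.01, 1)])
r, K, theta, sigma = fit.x
# forecast uncertainty from virtual data generated by the fitted model
virtual = [simulate(r, K, theta, sigma, obs[-1], 10, rng)[-1] for _ in range(500)]
print("10-step forecast: {:.1f} +/- {:.1f}".format(np.mean(virtual), np.std(virtual)))
```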
A Novel Method for Age Estimation in Solar-Type Stars Through GALEX FUV Magnitudes
NASA Astrophysics Data System (ADS)
Ho, Kelly; Subramonian, Arjun; Smith, Graeme; Shouru Shieh
2018-01-01
Utilizing an inverse association known to exist between Galaxy Evolution Explorer (GALEX) far ultraviolet (FUV) magnitudes and the chromospheric activity of F, G, and K dwarfs, we explored a method of age estimation in solar-type stars through GALEX FUV magnitudes. Sample solar-type star data were collected from refereed publications and filtered by B-V and absolute visual magnitude to ensure similarities in temperature and luminosity to the Sun. We determined FUV-B and calculated a residual index Q for all the stars, using the temperature-induced upper bound on FUV-B as the fiducial. Plotting current age estimates for the stars against Q, we discovered a strong and significant association between the variables. By applying a log-linear transformation to the data to produce a strong correlation between Q and loge Age, we confirmed the association between Q and age to be exponential. Thus, least-squares regression was used to generate an exponential model relating Q to age in solar-type stars, which can be used by astronomers. The Q-method of stellar age estimation is simple and more efficient than existing spectroscopic methods and has applications to galactic archaeology and stellar chemical composition analysis.
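A sketch of the final regression step: since the Q-age relation is exponential, fit ln(age) linearly in Q and exponentiate to predict ages. The sample values below are synthetic placeholders, not the paper's data.

```python
import numpy as np

Q = np.array([0.1, 0.4, 0.8, 1.2, 1.6, 2.0])        # FUV residual index
age = np.array([0.6, 1.1, 2.3, 4.1, 7.8, 14.0])     # Gyr, from independent indicators

b, a = np.polyfit(Q, np.log(age), 1)                # ln(age) = a + b*Q
predict_age = lambda q: np.exp(a + b * q)
print(predict_age(1.0), "Gyr")                      # age of a star with Q = 1.0
```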
Projection-based circular constrained state estimation and fusion over long-haul links
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Qiang; Rao, Nageswara S.
In this paper, we consider a scenario where sensors are deployed over a large geographical area for tracking a target with circular nonlinear constraints on its motion dynamics. The sensor state estimates are sent over long-haul networks to a remote fusion center for fusion. We are interested in different ways to incorporate the constraints into the estimation and fusion process in the presence of communication loss. In particular, we consider closed-form projection-based solutions, including rules for fusing the estimates and for incorporating the constraints, which jointly can guarantee the timely fusion often required in real-time systems. We test the performance of these methods in the long-haul tracking environment using a simple example.
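A minimal version of the projection step is easy to state: fuse the unconstrained estimates, then replace the fused position with the nearest point on the constraint circle. The Python sketch below assumes a 2-D position estimate, an equal-weight fusion rule, and invented numbers; the paper's closed-form rules are more elaborate.

```python
import numpy as np

def project_to_circle(x, center, radius):
    """Project a 2-D position estimate onto a circular constraint
    (nearest point on the circle in Euclidean distance)."""
    d = x - center
    norm = np.linalg.norm(d)
    if norm == 0.0:          # degenerate case: any point on the circle works
        return center + np.array([radius, 0.0])
    return center + radius * d / norm

# Fuse two unconstrained sensor estimates (simple convex combination here),
# then enforce the constraint by projection.
est_a = np.array([3.2, 4.1])
est_b = np.array([2.8, 4.5])
fused = 0.5 * est_a + 0.5 * est_b
constrained = project_to_circle(fused, center=np.zeros(2), radius=5.0)
print(constrained, np.linalg.norm(constrained))  # lies on the radius-5 circle
```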
Johnson, B. Thomas
1980-01-01
A laboratory method of measuring the accumulation, transfer, elimination, and degradation of xenobiotic contaminants is described for organisms in a freshwater food chain (microorganisms, filter feeders, and fish). A flow-through diluter system, 14C-labeled contaminants, gas and thin-layer chromatography, autoradiography, and liquid scintillation spectrometry are used in making residue determinations. Accumulation factors and various index values are developed for measuring and estimating potential accumulation of xenobiotic contaminants by aquatic organisms. The laboratory procedure is economical, simple, reproducible, and ecologically relevant.
Free energy surfaces from nonequilibrium processes without work measurement
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2006-04-01
Recent developments in statistical mechanics have allowed the estimation of equilibrium free energies from the statistics of work measurements during processes that drive the system out of equilibrium. Here a different class of processes is considered, wherein the system is prepared and released from a nonequilibrium state, and no external work is involved during its observation. For such "clamp-and-release" processes, a simple strategy for the estimation of equilibrium free energies is offered. The method is illustrated with numerical simulations and analyzed in the context of tethered single-molecule experiments.
Estimating the costs of VA ambulatory care.
Phibbs, Ciaran S; Bhandari, Aman; Yu, Wei; Barnett, Paul G
2003-09-01
This article reports how we matched Current Procedural Terminology (CPT) codes with Medicare payment rates and aggregate Veterans Affairs (VA) budget data to estimate the costs of every VA ambulatory encounter. Converting CPT codes to encounter-level costs was more complex than a simple match of Medicare reimbursements to CPT codes. About 40 percent of the CPT codes used in VA, representing about 20 percent of procedures, did not have a Medicare payment rate and required other cost estimates. Reconciling aggregated estimated costs to the VA budget allocations for outpatient care produced final VA cost estimates that were lower than projected Medicare reimbursements. The methods used to estimate costs for encounters could be replicated for other settings. They are potentially useful for any system that does not generate billing data, when CPT codes are simpler to collect than billing data, or when there is a need to standardize cost estimates across data sources.
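The two-step costing logic (price each encounter from its CPT codes, then reconcile to the aggregate budget) can be sketched in a few lines. In the Python example below, the rates, codes, fallback value, and budget total are all invented placeholders.

```python
# Invented CPT-to-rate table; real rates come from the Medicare fee schedule.
medicare_rate = {"99213": 72.0, "93000": 17.0}   # CPT code -> payment rate ($)
fallback_rate = 45.0             # stand-in estimate for codes without a rate

# Each encounter is a list of the CPT codes it generated (invented examples).
encounters = [["99213"], ["93000", "99999"], ["99213", "93000"]]

# Step 1: price each encounter, falling back for unmatched codes.
raw_costs = [sum(medicare_rate.get(code, fallback_rate) for code in enc)
             for enc in encounters]

# Step 2: reconcile to the known aggregate budget by proportional scaling,
# so that encounter costs sum exactly to the outpatient allocation.
budget_total = 220.0             # invented outpatient budget ($)
scale = budget_total / sum(raw_costs)
final_costs = [scale * c for c in raw_costs]
print([round(c, 2) for c in final_costs], round(sum(final_costs), 2))
```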
Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles
Nam, Kanghyun
2015-01-01
This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least squares algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle’s cornering responses, is estimated. For practical implementation, a cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, the proposed estimation algorithms were evaluated using experimental test data. PMID:26569246
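The identification step can be illustrated with a scalar recursive least squares (RLS) loop. The sketch below estimates a single cornering stiffness from noisy lateral force and slip angle pairs under the linear tire model Fy = C·α; the forgetting factor, noise levels, and "true" stiffness are assumptions, and the paper's algorithm works with the full bicycle-model signals.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar recursive least squares with a forgetting factor, estimating the
# cornering stiffness C in the linear tire model Fy = C * alpha.
lam = 0.98       # forgetting factor (assumed)
P = 1e6          # initial covariance (large => weak prior)
C_hat = 0.0      # initial stiffness estimate (N/rad)

C_true = 80_000.0
for _ in range(200):
    alpha = rng.uniform(-0.05, 0.05)              # slip angle (rad), which the
                                                  # bicycle model would supply
    fy = C_true * alpha + rng.normal(0.0, 200.0)  # measured lateral force (N)
    # RLS update: gain, estimate, covariance.
    k = P * alpha / (lam + alpha * P * alpha)
    C_hat += k * (fy - C_hat * alpha)
    P = (P - k * alpha * P) / lam

print(f"estimated cornering stiffness: {C_hat:.0f} N/rad (true {C_true:.0f})")
```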
NASA Astrophysics Data System (ADS)
Yao, Rutao; Ma, Tianyu; Shao, Yiping
2008-08-01
This work is part of a feasibility study to develop SPECT imaging capability on a lutetium oxyorthosilicate (LSO) based animal PET system. The SPECT acquisition was enabled by inserting a collimator assembly inside the detector ring and acquiring data in singles mode. The same LSO detectors were used for both PET and SPECT imaging. The intrinsic radioactivity of 176Lu in the LSO crystals, however, contaminates the SPECT data, and can generate image artifacts and introduce quantification error. The objectives of this study were to evaluate the effectiveness of an LSO background subtraction method, and to estimate the minimal detectable target activity (MDTA) of the image object for SPECT imaging. For LSO background correction, the LSO contribution in an image study was estimated from a pre-measured long LSO background scan and subtracted prior to image reconstruction. The MDTA was estimated in two ways: the empirical MDTA (eMDTA) was estimated by screening the tomographic images at different activity levels, and the calculated MDTA (cMDTA) was estimated by applying a formula based on a modified Currie equation to an average projection dataset. Two simulated and two experimental phantoms with different object activity distributions and levels were used in this study. The results showed that the LSO background adds concentric ring artifacts to the reconstructed image, and that the simple subtraction method can effectively remove these artifacts; the effect of the correction was more visible when the object activity level was near or above the eMDTA. For the four phantoms studied, the cMDTA was consistently about five times the corresponding eMDTA. In summary, we implemented a simple LSO background subtraction method and demonstrated its effectiveness. The projection-based calculation formula yielded MDTA results that closely correlated with those obtained empirically and may have predictive value for imaging applications.
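The correction itself is a straightforward subtraction, and a Currie-style formula gives a detection limit of the same flavor as the paper's cMDTA. The Python sketch below uses invented Poisson count data and the textbook Currie limit L_D = 2.71 + 4.65·√B as a stand-in for the paper's modified Currie equation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical singles-mode projection data (counts per bin; all invented).
lso_rate = 5.0                       # mean LSO background counts per bin
long_bg = rng.poisson(lso_rate * 100, size=64) / 100.0  # long background scan,
                                                        # scaled to scan time
scan = rng.poisson(lso_rate + 3.0, size=64).astype(float)  # object + background

# Simple background subtraction: subtract the time-scaled, pre-measured long
# LSO scan from the acquisition before reconstruction.
corrected = scan - long_bg

# Currie-style minimum detectable counts per bin, L_D = 2.71 + 4.65*sqrt(B),
# used here as a stand-in for the paper's modified Currie MDTA formula.
B = long_bg.mean()
L_D = 2.71 + 4.65 * np.sqrt(B)
print(f"mean net counts: {corrected.mean():.2f}, detection limit: {L_D:.2f}")
```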
Estimation of methanogen biomass via quantitation of coenzyme M
Elias, Dwayne A.; Krumholz, Lee R.; Tanner, Ralph S.; Suflita, Joseph M.
1999-01-01
Determination of the role of methanogenic bacteria in an anaerobic ecosystem often requires quantitation of the organisms. Because of the extreme oxygen sensitivity of these organisms and the inherent limitations of cultural techniques, an accurate biomass value is very difficult to obtain. We standardized a simple method for estimating methanogen biomass in a variety of environmental matrices. In this procedure we used the thiol biomarker coenzyme M (CoM) (2-mercaptoethanesulfonic acid), which is known to be present in all methanogenic bacteria. A high-performance liquid chromatography-based method for detecting thiols in pore water (A. Vairavamurthy and M. Mopper, Anal. Chim. Acta 78:363–370, 1990) was modified in order to quantify CoM in pure cultures, sediments, and sewage water samples. The identity of the CoM derivative was verified by using liquid chromatography-mass spectroscopy. The assay was linear for CoM amounts ranging from 2 to 2,000 pmol, and the detection limit was 2 pmol of CoM/ml of sample. CoM was not adsorbed to sediments. The methanogens tested contained an average of 19.5 nmol of CoM/mg of protein and 0.39 ± 0.07 fmol of CoM/cell. Environmental samples contained an average of 0.41 ± 0.17 fmol/cell based on most-probable-number estimates. CoM was extracted by using 1% tri-(N)-butylphosphine in isopropanol. More than 90% of the CoM was recovered from pure cultures and environmental samples. We observed no interference from sediments in the CoM recovery process, and the method could be completed aerobically within 3 h. Freezing sediment samples resulted in 46 to 83% decreases in the amounts of detectable CoM, whereas freezing had no effect on the amounts of CoM determined in pure cultures. The method described here provides a quick and relatively simple way to estimate methanogenic biomass.
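Converting a measured CoM amount into a cell estimate is a one-line calculation using the per-cell content reported above. The Python sketch below uses the cultured-cell mean of 0.39 fmol CoM per cell; the sample values are invented.

```python
# Convert a measured coenzyme M amount to a methanogen cell estimate using
# the reported mean content of 0.39 fmol CoM per cell; sample values invented.
COM_PER_CELL_FMOL = 0.39

def cells_from_com(com_pmol, sample_ml):
    """Estimate methanogen cells per ml from total CoM (pmol) in a sample."""
    com_fmol = com_pmol * 1_000.0        # 1 pmol = 1000 fmol
    return com_fmol / COM_PER_CELL_FMOL / sample_ml

print(f"{cells_from_com(com_pmol=50.0, sample_ml=10.0):.3e} cells/ml")
```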
A simple method to incorporate water vapor absorption in the 15 micron remote temperature sounding
NASA Technical Reports Server (NTRS)
Dalu, G.; Prabhakara, C.; Conrath, B. J.
1975-01-01
The water vapor absorption in the 15 micron CO2 band, which can affect remotely sensed temperatures near the surface, is estimated with the help of an empirical method. This method is based on the differential absorption properties of water vapor in the 11-13 micron window region and does not require detailed knowledge of the water vapor profile. With this approach, Nimbus 4 IRIS radiance measurements are inverted to obtain temperature profiles. The calculated profiles agree with radiosonde data to within about 2 C.
Kulikov, A M; Lazebnyĭ, O E; Chekunova, A I; Mitrofanov, V G
2010-01-01
The steadiness of the molecular clock was estimated in 11 Drosophila species of the virilis group using sequences of five genes and Tajima's simple method. The main advantage of this method is that it does not depend on a phylogenetic reconstruction. The results fully confirmed the conclusions drawn from the two-cluster test and the Takezaki branch-length test. In addition, deviation from the molecular clock was confirmed in the D. virilis evolutionary lineages.
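For readers unfamiliar with the test, the counting logic of Tajima's simple (1D) relative-rate method fits in a few lines: under a molecular clock, the number of sites where lineage 1 alone differs from the outgroup should match the number where lineage 2 alone differs, and the difference is assessed with a one-degree-of-freedom chi-square statistic. The sequences in the Python sketch below are invented toys.

```python
# Tajima's relative-rate ("1D") test sketch: under a molecular clock, the two
# site counts m1 and m2 should be equal in expectation.
from scipy.stats import chi2

def tajima_1d(seq1, seq2, outgroup):
    """Count clock-informative sites and return (m1, m2, statistic, p-value)."""
    m1 = m2 = 0
    for a, b, c in zip(seq1, seq2, outgroup):
        if b == c and a != c:
            m1 += 1           # seq1 changed; seq2 matches the outgroup
        elif a == c and b != c:
            m2 += 1           # seq2 changed; seq1 matches the outgroup
    stat = (m1 - m2) ** 2 / (m1 + m2) if (m1 + m2) else 0.0
    return m1, m2, stat, chi2.sf(stat, df=1)

s1 = "ACGTACGTACGGTACGATCA"   # invented aligned sequences
s2 = "ACGTACGTACGTTACGATCG"
og = "ACGTACGTACGTTACGATCA"
print(tajima_1d(s1, s2, og))
```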
Subsonic aircraft: Evolution and the matching of size to performance
NASA Technical Reports Server (NTRS)
Loftin, L. K., Jr.
1980-01-01
Methods for estimating the approximate size, weight, and power of aircraft intended to meet specified performance requirements are presented for both jet-powered and propeller-driven aircraft. The methods are simple and require only the use of a pocket computer for rapid application to specific sizing problems. Application of the methods is illustrated by means of sizing studies of a series of jet-powered and propeller-driven aircraft with varying design constraints. Some aspects of the technical evolution of the airplane from 1918 to the present are also briefly discussed.
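Sizing methods of this kind typically reduce to a fixed-point iteration on takeoff weight. The Python sketch below shows only the generic structure; the empty-weight trend, fuel fraction, and payload are invented placeholders, not the report's correlations.

```python
# Minimal fixed-point sizing loop in the spirit of such handbook methods
# (assumed empty-weight fraction law and fuel fraction; numbers illustrative).
def size_takeoff_weight(w_payload_lb, fuel_fraction=0.25, tol=1e-6):
    """Iterate W0 = Wpay / (1 - Wf/W0 - We/W0) until converged."""
    w0 = 10_000.0                             # initial guess (lb)
    for _ in range(200):
        empty_fraction = 1.02 * w0 ** -0.06   # assumed statistical trend
        w0_new = w_payload_lb / (1.0 - fuel_fraction - empty_fraction)
        if abs(w0_new - w0) < tol:
            return w0_new
        w0 = w0_new
    return w0

print(f"estimated takeoff weight: {size_takeoff_weight(2_000.0):,.0f} lb")
```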
A defect-driven diagnostic method for machine tool spindles
Vogl, Gregory W.; Donmez, M. Alkan
2016-01-01
Simple vibration-based metrics are, in many cases, insufficient to diagnose machine tool spindle condition. These metrics couple defect-based motion with spindle dynamics; diagnostics should be defect-driven. A new method and spindle condition estimation device (SCED) were developed to acquire data and to separate system dynamics from defect geometry. Based on this method, a spindle condition metric relying only on defect geometry is proposed. Application of the SCED on various milling and turning spindles shows that the new approach is robust for diagnosing the machine tool spindle condition. PMID:28065985
Cohn, T.A.; Lane, W.L.; Baier, W.G.
1997-01-01
This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
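EMA's iterate-until-convergence structure can be shown with a deliberately simplified stand-in: in the Python sketch below, a normal distribution replaces the log-Pearson type III, scipy's truncnorm supplies the expected moments of below-threshold historical years, and all data values are invented.

```python
import numpy as np
from scipy.stats import truncnorm

# Invented data; a normal distribution stands in for log-Pearson type III.
systematic = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4])  # gaged log flows
historical_peaks = np.array([7.2, 6.8])  # measured historical floods above T
n_below = 95                             # historical years with no large flood
T = 6.5                                  # perception threshold

mu, sd = systematic.mean(), systematic.std(ddof=1)  # initial estimates
for _ in range(100):
    # Expected moments of a below-threshold year under the current fit.
    below = truncnorm(-np.inf, (T - mu) / sd, loc=mu, scale=sd)
    e1 = below.mean()
    e2 = below.var() + e1 ** 2
    # Update the moments from systematic data, historical peaks, and the
    # expected moments of the below-threshold historical years.
    n = len(systematic) + len(historical_peaks) + n_below
    m1 = (systematic.sum() + historical_peaks.sum() + n_below * e1) / n
    m2 = (np.sum(systematic**2) + np.sum(historical_peaks**2) + n_below * e2) / n
    mu_new, sd_new = m1, np.sqrt(max(m2 - m1 ** 2, 1e-12))
    if abs(mu_new - mu) < 1e-10 and abs(sd_new - sd) < 1e-10:
        break
    mu, sd = mu_new, sd_new

print(f"EMA-style estimates: mu={mu:.3f}, sigma={sd:.3f}")
```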