Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
Diagnosing and dealing with multicollinearity.
Schroeder, M A
1990-04-01
The purpose of this article was to increase nurse researchers' awareness of the effects of collinear data in developing theoretical models for nursing practice. Collinear data distort the true value of the estimates generated from ordinary least-squares analysis. Theoretical models developed to provide the underpinnings of nursing practice need not be abandoned, however, because they fail to produce consistent estimates over repeated applications. It is also important to realize that multicollinearity is a data problem, not a problem associated with misspecification of a theoretical model. An investigator must first be aware of the problem, and then it is possible to develop an educated solution based on the degree of multicollinearity, theoretical considerations, and sources of error associated with alternative, biased, least-squares regression techniques. Decisions based on theoretical and statistical considerations will further the development of theory-based nursing practice.
Assessment and Mapping of the Riverine Hydrokinetic Resource in the Continental United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobson, Paul T.; Ravens, Thomas M.; Cunningham, Keith W.
2012-12-14
The U.S. Department of Energy (DOE) funded the Electric Power Research Institute and its collaborative partners, University of Alaska Anchorage, University of Alaska Fairbanks, and the National Renewable Energy Laboratory, to provide an assessment of the riverine hydrokinetic resource in the continental United States. The assessment benefited from input obtained during two workshops attended by individuals with relevant expertise and from a National Research Council panel commissioned by DOE to provide guidance to this and other concurrent, DOE-funded assessments of water-based renewable energy. These sources of expertise provided valuable advice regarding data sources and assessment methodology. The assessment of the hydrokinetic resource in the 48 contiguous states is derived from spatially explicit data contained in NHDPlus, a GIS-based database containing river segment-specific information on discharge characteristics and channel slope. 71,398 river segments with mean annual flow greater than 1,000 cubic feet per second (cfs) were included in the assessment. Segments with discharge less than 1,000 cfs were dropped from the assessment, as were river segments with hydroelectric dams. The results for the theoretical and technical resource in the 48 contiguous states were found to be relatively insensitive to the cutoff chosen: raising the cutoff to 1,500 cfs had no effect on the estimate of the technically recoverable resource, and the theoretical resource was reduced by 5.3%. The segment-specific theoretical resource was estimated from these data using the standard hydrological engineering equation that relates theoretical hydraulic power (Pth, in watts) to discharge (Q, m3 s-1) and hydraulic head or change in elevation over the length of the segment (Δh, m), where γ is the specific weight of water (9800 N m-3): Pth = γQΔh. For Alaska, which is not encompassed by NHDPlus, hydraulic head and discharge data were manually obtained from Idaho National Laboratory's Virtual Hydropower Prospector, Google Earth, and U.S. Geological Survey gages. Data were manually obtained for the eleven largest rivers with average flow rates greater than 10,000 cfs, and the resulting estimate of the theoretical resource was expanded to include rivers with discharge between 1,000 cfs and 10,000 cfs based upon the contribution of rivers in the latter flow class to the total estimate in the contiguous 48 states. Segment-specific theoretical resource was aggregated by major hydrologic region in the contiguous, lower 48 states and totaled 1,146 TWh/yr. The aggregate estimate of the Alaska theoretical resource is 235 TWh/yr, yielding a total theoretical resource estimate of 1,381 TWh/yr for the continental US. The technically recoverable resource in the contiguous 48 states was estimated by applying a recovery factor to the segment-specific theoretical resource estimates. The recovery factor scales the theoretical resource for a given segment to take into account assumptions such as minimum required water velocity and depth during low-flow conditions, maximum device packing density, device efficiency, and flow statistics (e.g., the 5th percentile flow relative to the average flow rate). The recovery factor also takes account of "back effects", that is, feedback effects of turbine presence on hydraulic head and velocity. The recovery factor was determined over a range of flow rates and slopes using the hydraulic model HEC-RAS.
In the hydraulic modeling, the presence of turbines was accounted for by adjusting the Manning coefficient. This analysis, which included 32 scenarios, led to an empirical function relating recovery factor to slope and discharge. Sixty-nine percent of NHDPlus segments included in the theoretical resource estimate for the contiguous 48 states had an estimated recovery factor of zero. For Alaska, data on river slope were not readily available; hence, the recovery factor was estimated based on the flow rate alone. Segment-specific estimates of the theoretical resource were multiplied by the corresponding recovery factor to estimate the technically recoverable resource. The resulting technically recoverable resource estimate for the continental United States is 120 TWh/yr.
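As a worked illustration of the segment power relation above, the following sketch (Python; the example discharge and elevation drop are hypothetical, not NHDPlus values) evaluates Pth = γQΔh and converts the result to an annual energy figure:

    import numpy as np

    GAMMA = 9800.0  # specific weight of water, N/m^3

    def theoretical_power_watts(discharge_m3s, head_drop_m):
        """Standard relation: P_th = gamma * Q * dh (watts)."""
        return GAMMA * discharge_m3s * head_drop_m

    # Example: a segment carrying 1,500 cfs (~42.5 m^3/s) dropping 2 m
    q = 1500 * 0.0283168                  # cfs -> m^3/s
    p_th = theoretical_power_watts(q, 2.0)
    annual_twh = p_th * 8760 / 1e12       # W x hours/yr -> TWh/yr
    print(f"P_th = {p_th/1e6:.2f} MW, ~{annual_twh:.5f} TWh/yr")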
Single-snapshot DOA estimation by using Compressed Sensing
NASA Astrophysics Data System (ADS)
Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin
2014-12-01
This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS) are analyzed: the classical ℓ1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smooth ℓ0 minimization, the Sparse Iterative Covariance-Based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES). Their statistical properties are investigated and compared with those of the classical Fourier beamformer (FB) in different simulated scenarios. We show that unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of the adaptive algorithms (e.g., Capon and MUSIC) even in the single snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
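For readers unfamiliar with the first of these algorithms, the following is a minimal single-snapshot ℓ1 (LASSO) DOA sketch solved by iterative soft thresholding (ISTA); the uniform linear array, grid spacing, regularization weight, and step size are illustrative assumptions, not the paper's settings:

    import numpy as np

    M = 16
    grid = np.deg2rad(np.arange(-90, 90.5, 0.5))                    # DOA grid
    A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))   # steering dictionary

    rng = np.random.default_rng(0)
    true = np.deg2rad([-10.0, 12.0])
    y = sum(np.exp(1j * np.pi * np.arange(M) * np.sin(t)) for t in true)
    y = y + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

    lam = 0.5
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], complex)
    for _ in range(500):                          # ISTA: gradient step + soft threshold
        z = x - step * (A.conj().T @ (A @ x - y))
        x = np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - step * lam, 0.0)

    peaks = np.rad2deg(grid[np.abs(x) > 0.5 * np.abs(x).max()])
    print("estimated DOAs (deg):", peaks)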
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) the RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and the threshold radius used to construct the RP.
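As a numerical companion to the quantities discussed, the sketch below builds an embedding-dimension-1 recurrence plot for a first-order autoregressive process and computes REC and DET; the AR coefficient, threshold radius, and minimum diagonal length are arbitrary demo choices, and the main diagonal is kept for simplicity:

    import numpy as np

    rng = np.random.default_rng(1)
    n, phi = 2000, 0.7
    x = np.zeros(n)
    for t in range(1, n):                        # first-order autoregressive process
        x[t] = phi * x[t - 1] + rng.standard_normal()

    eps = 0.5 * np.std(x)                        # threshold radius
    R = (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

    rec = R.mean()                               # recurrence rate
    lmin, diag_pts = 2, 0                        # DET: points on diagonals >= lmin
    for k in range(-(n - 1), n):
        d = np.diagonal(R, k)
        runs = np.split(d, np.where(np.diff(d) != 0)[0] + 1)
        diag_pts += sum(len(r) for r in runs if r[0] == 1 and len(r) >= lmin)
    print(f"REC = {rec:.3f}, DET = {diag_pts / R.sum():.3f}")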
New robust statistical procedures for the polytomous logistic regression models.
Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro
2018-05-17
This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real life examples are presented to justify the requirement of suitable robust statistical procedures in place of the likelihood based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article are further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
Cutler, Timothy D; Wang, Chong; Hoff, Steven J; Kittawornrat, Apisit; Zimmerman, Jeffrey J
2011-08-05
The median infectious dose (ID50) of porcine reproductive and respiratory syndrome (PRRS) virus isolate MN-184 was determined for aerosol exposure. In 7 replicates, 3-week-old pigs (n=58) respired 10 l of airborne PRRS virus from a dynamic aerosol toroid (DAT) maintained at -4°C. Thereafter, pigs were housed in isolation and monitored for evidence of infection. Infection occurred at virus concentrations too low to quantify by microinfectivity assays. Therefore, exposure dose was determined using two indirect methods ("calculated" and "theoretical"). "Calculated" virus dose was derived from the concentration of rhodamine B monitored over the exposure sequence. "Theoretical" virus dose was based on the continuous stirred-tank reactor model. The ID50 estimate was modeled on the proportion of pigs that became infected using the probit and logit link functions for both "calculated" and "theoretical" exposure doses. Based on "calculated" doses, the probit and logit ID50 estimates were 1 × 10^-0.13 TCID50 and 1 × 10^-0.14 TCID50, respectively. Based on "theoretical" doses, the probit and logit ID50 were 1 × 10^0.26 TCID50 and 1 × 10^0.24 TCID50, respectively. For each point estimate, the 95% confidence interval included the other three point estimates. The results indicated that MN-184 was far more infectious than PRRS virus isolate VR-2332, the only other PRRS virus isolate for which ID50 has been estimated for airborne exposure. Since aerosol ID50 estimates are available for only these two isolates, it is uncertain whether one or both of these isolates represent the normal range of PRRS virus infectivity by this route. Copyright © 2011 Elsevier B.V. All rights reserved.
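The dose-response fit behind such ID50 estimates can be sketched as follows; only the logit link is shown, and the dose-response counts are invented for illustration, not the MN-184 data:

    import numpy as np

    log_dose = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # log10 dose per pig
    infected = np.array([1, 3, 6, 9, 10], float)       # infected out of n = 10 each
    n = 10.0

    b0, b1 = 0.0, 1.0
    for _ in range(5000):              # gradient ascent on the binomial log-likelihood
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * log_dose)))
        g0 = np.sum(infected - n * p)
        g1 = np.sum((infected - n * p) * log_dose)
        b0, b1 = b0 + 1e-3 * g0, b1 + 1e-3 * g1

    id50 = -b0 / b1                    # log-dose where P(infection) = 0.5
    print(f"ID50 = 1 x 10^{id50:.2f} (dose units)")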
Doubly robust nonparametric inference on the average treatment effect.
Benkeser, D; Carone, M; Laan, M J Van Der; Gilbert, P B
2017-12-01
Doubly robust estimators are widely used to draw inference about the average effect of a treatment. Such estimators are consistent for the effect of interest if either one of two nuisance parameters is consistently estimated. However, if flexible, data-adaptive estimators of these nuisance parameters are used, double robustness does not readily extend to inference. We present a general theoretical study of the behaviour of doubly robust estimators of an average treatment effect when one of the nuisance parameters is inconsistently estimated. We contrast different methods for constructing such estimators and investigate the extent to which they may be modified to also allow doubly robust inference. We find that while targeted minimum loss-based estimation can be used to solve this problem very naturally, common alternative frameworks appear to be inappropriate for this purpose. We provide a theoretical study and a numerical evaluation of the alternatives considered. Our simulations highlight the need for and usefulness of these approaches in practice, while our theoretical developments have broad implications for the construction of estimators that permit doubly robust inference in other problems.
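For concreteness, one common doubly robust construction (the augmented IPW point estimate, not necessarily the targeted minimum loss-based estimator the authors favour) looks like this on synthetic data; the outcome and propensity fits are placeholders:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000
    x = rng.standard_normal(n)
    e_true = 1 / (1 + np.exp(-x))                    # true propensity
    a = rng.binomial(1, e_true)
    y = 1.0 * a + x + rng.standard_normal(n)         # true ATE = 1

    e = 1 / (1 + np.exp(-x))                         # propensity estimate (assumed known here)
    mu1, mu0 = 1.0 + x, x                            # outcome-regression estimates

    psi = (a * (y - mu1) / e + mu1) - ((1 - a) * (y - mu0) / (1 - e) + mu0)
    ate, se = psi.mean(), psi.std(ddof=1) / np.sqrt(n)
    print(f"ATE = {ate:.3f} +/- {1.96 * se:.3f}")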
Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.
Youssef, Noha H; Elshahed, Mostafa S
2008-09-01
Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
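A simplified stand-in for the described correction, with made-up (library size, richness estimate) pairs: fit a saturating curve to richness estimates at nested subsample sizes and read off its asymptote. The paper's actual procedure works through theoretical clone library sizes, so treat this only as the general shape of the idea:

    import numpy as np
    from scipy.optimize import curve_fit

    sizes = np.array([1000, 2000, 4000, 8000, 13001], float)
    est = np.array([6500, 9200, 12000, 14000, 15009], float)   # hypothetical estimates

    def monod(n, s_max, k):            # S(n) = s_max * n / (k + n)
        return s_max * n / (k + n)

    (s_max, k), _ = curve_fit(monod, sizes, est, p0=(20000, 5000))
    print(f"size-unbiased richness estimate ~ {s_max:.0f} species")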
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
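A toy numerical version of the reduced-order tuner idea, with random matrices standing in for the linear engine model and a crude random search standing in for the iterative routine: choose a p × m transformation V so that the best linear estimate of all p health parameters, routed through the m-dimensional tuner, has minimal theoretical mean squared error:

    import numpy as np

    rng = np.random.default_rng(3)
    p, m = 10, 4                               # health parameters, sensors
    H = rng.standard_normal((m, p))            # sensor sensitivity to health parameters
    Q = np.diag(rng.uniform(0.5, 2.0, p))      # health-parameter covariance (prior)
    R = 0.1 * np.eye(m)                        # measurement-noise covariance
    Sy = H @ Q @ H.T + R                       # measurement covariance

    def theoretical_mse(V):
        # best linear gain for estimates of the form q_hat = V @ K @ y
        K = np.linalg.pinv(V) @ Q @ H.T @ np.linalg.inv(Sy)
        E = Q - V @ K @ H @ Q - Q @ H.T @ K.T @ V.T + V @ K @ Sy @ K.T @ V.T
        return np.trace(E)                     # theoretical summed MSE

    best_V, best = None, np.inf
    for _ in range(2000):                      # crude search over tuner subspaces
        V = np.linalg.qr(rng.standard_normal((p, m)))[0]
        if (mse := theoretical_mse(V)) < best:
            best_V, best = V, mse
    print(f"best theoretical MSE found: {best:.3f} (tr(Q) = {np.trace(Q):.3f})")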
Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2005-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to the estimation accuracy, since the unconstrained Kalman filter is already theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when the confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, then we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
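The confidence-weighting idea can be caricatured as follows; the normalised-innovation test and the linear blending rule are illustrative assumptions, not the paper's exact algorithm:

    import numpy as np

    def blended_estimate(x_unc, x_con, innovations, S):
        """x_unc: unconstrained estimate; x_con: constrained estimate;
        innovations: recent residuals (k x m); S: theoretical innovation covariance."""
        # normalised innovation squared averages ~ m when the filter is healthy
        nis = np.mean([r @ np.linalg.solve(S, r) for r in innovations])
        m = S.shape[0]
        w = np.clip(m / nis, 0.0, 1.0)       # confidence in the unconstrained filter
        return w * x_unc + (1 - w) * x_con

    S = np.eye(2)
    resid = np.random.default_rng(4).standard_normal((50, 2)) * 2.0   # inflated residuals
    print(blended_estimate(np.array([1.0, 0.0]), np.array([0.8, 0.1]), resid, S))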
Numerical modeling of solar irradiance on earth's surface
NASA Astrophysics Data System (ADS)
Mera, E.; Gutierez, L.; Da Silva, L.; Miranda, E.
2016-05-01
Modeling studies and estimation of solar radiation in base area, touch from the problems of estimating equation of time, distance equation solar space, solar declination, calculation of surface irradiance, considering that there are a lot of studies you reported the inability of these theoretical equations to be accurate estimates of radiation, many authors have proceeded to make corrections through calibrations with Pyranometers field (solarimeters) or the use of satellites, this being very poor technique last because there a differentiation between radiation and radiant kinetic effects. Because of the above and considering that there is a weather station properly calibrated ground in the Susques Salar in the Jujuy Province, Republic of Argentina, proceeded to make the following modeling of the variable in question, it proceeded to perform the following process: 1. Theoretical Modeling, 2. graphic study of the theoretical and actual data, 3. Adjust primary calibration data through data segmentation on an hourly basis, through horizontal and adding asymptotic constant, 4. Analysis of scatter plot and contrast series. Based on the above steps, the modeling data obtained: Step One: Theoretical data were generated, Step Two: The theoretical data moved 5 hours, Step Three: an asymptote of all negative emissivity values applied, Solve Excel algorithm was applied to least squares minimization between actual and modeled values, obtaining new values of asymptotes with the corresponding theoretical reformulation of data. Add a constant value by month, over time range set (4:00 pm to 6:00 pm). Step Four: The modeling equation coefficients had monthly correlation between actual and theoretical data ranging from 0.7 to 0.9.
Satellite angular velocity estimation based on star images and optical flow techniques.
Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele
2013-09-25
An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus also deliver angular rate information when attitude determination is not possible, as during platform detumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than for the other two components.
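The least-squares step at the core of such a method can be sketched directly: for inertially fixed star directions u observed in a rotating sensor frame, du/dt = u × ω, which is linear in ω. Synthetic star vectors replace the optical-flow preprocessing here:

    import numpy as np

    def skew(u):
        return np.array([[0, -u[2], u[1]],
                         [u[2], 0, -u[0]],
                         [-u[1], u[0], 0]])

    rng = np.random.default_rng(5)
    omega_true = np.array([0.01, -0.02, 0.005])           # rad/s
    stars = rng.standard_normal((20, 3))
    stars /= np.linalg.norm(stars, axis=1, keepdims=True)

    A = np.vstack([skew(u) for u in stars])               # du = skew(u) @ omega
    b = np.concatenate([skew(u) @ omega_true + 1e-4 * rng.standard_normal(3)
                        for u in stars])                  # noisy flow-derived derivatives
    omega_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("omega estimate (rad/s):", omega_hat)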
The detectability of brown dwarfs - Predictions and uncertainties
NASA Technical Reports Server (NTRS)
Nelson, L. A.; Rappaport, S.; Joss, P. C.
1993-01-01
In order to determine the likelihood for the detection of isolated brown dwarfs in ground-based observations as well as in future space-based astronomy missions, and in order to evaluate the significance of any detections that might be made, we must first know the expected surface density of brown dwarfs on the celestial sphere as a function of limiting magnitude, wavelength band, and Galactic latitude. It is the purpose of this paper to provide theoretical estimates of this surface density, as well as the range of uncertainty in these estimates resulting from various theoretical uncertainties. We first present theoretical cooling curves for low-mass stars that we have computed with the latest version of our stellar evolution code. We use our evolutionary results to compute theoretical brown-dwarf luminosity functions for a wide range of assumed initial mass functions and stellar birth rate functions. The luminosity functions, in turn, are utilized to compute theoretical surface density functions for brown dwarfs on the celestial sphere. We find, in particular, that for reasonable theoretical assumptions, the currently available upper bounds on the brown-dwarf surface density are consistent with the possibility that brown dwarfs contribute a substantial fraction of the mass of the Galactic disk.
NASA Technical Reports Server (NTRS)
Rutledge, Charles K.
1988-01-01
The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
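The construct under test is the standard chi-square interval for a spectral estimate with d degrees of freedom, based on d·P̂/P ~ χ²_d; a minimal sketch with illustrative values:

    from scipy.stats import chi2

    def psd_confidence_interval(p_hat, dof, alpha=0.05):
        """Chi-square (1 - alpha) confidence interval for a PSD estimate p_hat."""
        lo = dof * p_hat / chi2.ppf(1 - alpha / 2, dof)
        hi = dof * p_hat / chi2.ppf(alpha / 2, dof)
        return lo, hi

    print(psd_confidence_interval(p_hat=10.0, dof=32))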
Theoretical and Experimental Estimations of Volumetric Inductive Phase Shift in Breast Cancer Tissue
NASA Astrophysics Data System (ADS)
González, C. A.; Lozano, L. M.; Uscanga, M. C.; Silva, J. G.; Polo, S. M.
2013-04-01
Impedance measurements based on magnetic induction for breast cancer detection have been proposed in some studies. This study evaluates, theoretically and experimentally, the use of a non-invasive technique based on magnetic induction for detection of patho-physiological conditions in breast cancer tissue associated with its volumetric electrical conductivity changes through inductive phase shift measurements. An induction coils-breast 3D pixel model was designed and tested. The model involves two circular coils coaxially centered and a human breast volume centrally placed with respect to the coils. A time-harmonic numerical simulation study addressed the effects of frequency-dependent electrical properties of tumoral tissue on the volumetric inductive phase shift of the breast model measured with the circular coils as inductor and sensor elements. Experimentally, five female volunteer patients with infiltrating ductal carcinoma previously diagnosed by the radiology and oncology departments of the Specialty Clinic for Women of the Mexican Army were measured by an experimental inductive spectrometer and the use of an ergonomic inductor-sensor coil designed to estimate the volumetric inductive phase shift in human breast tissue. Theoretical and experimental inductive phase shift estimations were developed at four frequencies: 0.01, 0.1, 1 and 10 MHz. The theoretical estimations were qualitatively in agreement with the experimental findings. Important increments in volumetric inductive phase shift measurements were evident at 0.01 MHz in both theoretical and experimental observations. The results suggest that the tested technique has the potential to detect pathological conditions in breast tissue associated with cancer by non-invasive monitoring. Further complementary studies are warranted to confirm the observations.
Gravity-darkening exponents in semi-detached binary systems from their photometric observations. II.
NASA Astrophysics Data System (ADS)
Djurašević, G.; Rovithis-Livaniou, H.; Rovithis, P.; Georgiades, N.; Erkapić, S.; Pavlović, R.
2006-01-01
This second part of our study concerning gravity-darkening presents the results for 8 semi-detached close binary systems. From the light-curve analysis of these systems the gravity-darkening exponent (GDE) for the Roche lobe filling components has been empirically derived. The method used for the light-curve analysis is based on Roche geometry, and enables simultaneous estimation of the systems' parameters and the gravity-darkening exponents. Our analysis is restricted to the black-body approximation, which can influence the parameter estimation to some degree. The results of our analysis are: 1) For four of the systems, namely TX UMa, β Per, AW Cam and TW Cas, there is a very good agreement between empirically estimated and theoretically predicted values for purely convective envelopes. 2) For the AI Dra system, the estimated value of the gravity-darkening exponent is greater, and for UX Her, TW And and XZ Pup smaller, than the corresponding theoretical predictions; for all of these systems, however, the obtained values of the gravity-darkening exponent are quite close to the theoretically expected values. 3) Our analysis has generally shown that, with the correction of the previously estimated mass ratios of the components within some of the analysed systems, the theoretical predictions of the gravity-darkening exponents for stars with convective envelopes are highly reliable. The anomalous values of the GDE found in some earlier studies of these systems can be considered a consequence of the inappropriate method used to estimate the GDE. 4) The empirical estimations of the GDE given in Paper I and in the present study indicate that in light-curve analysis one can apply the recent theoretical predictions of the GDE with high confidence for stars with both convective and radiative envelopes.
A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data
NASA Technical Reports Server (NTRS)
Barnes, J. R.
1993-01-01
Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular, a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will be extensively sampling middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to the estimation of the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model, based upon one previously developed and tested with earth satellite temperature data, will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations, and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation, the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures. The latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as in more theoretical studies.
Varadarajan, Divya; Haldar, Justin P
2017-11-01
The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.
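The Fourier relationship at the heart of the framework can be illustrated in one dimension: for free diffusion the q-space signal is Gaussian, and its inverse Fourier transform recovers a Gaussian propagator. Grid, diffusivity, and diffusion time below are arbitrary demo values:

    import numpy as np

    D, t = 2.0e-3, 0.05                            # diffusivity (mm^2/s), diffusion time (s)
    q = np.fft.fftfreq(256, d=0.01)                # q-space samples (1/mm)
    E = np.exp(-4 * np.pi**2 * q**2 * D * t)       # free-diffusion q-space signal
    eap = np.fft.fftshift(np.fft.ifft(E)).real     # propagator on the displacement grid
    print("EAP peak index (should be the center, 128):", np.argmax(eap))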
NASA Astrophysics Data System (ADS)
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.
2018-04-01
Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generate SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species niche. Predictions of correlative models approach species' realized niches, while predictions of process-based models are more akin to the species' fundamental niche. Here, we integrated the predictions of fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), both considering macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical assumption that a species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than just thermal tolerances.

Sport fishing: a comparison of three indirect methods for estimating benefits.
Darrell L. Hueth; Elizabeth J. Strong; Roger D. Fight
1988-01-01
Three market-based methods for estimating values of sport fishing were compared by using a common data base. The three approaches were the travel-cost method, the hedonic travel-cost method, and the household-production method. A theoretical comparison of the resulting values showed that the results were not fully comparable in several ways. The comparison of empirical...
Theoretical methods for estimating moments of inertia of trees and boles.
John A. Sturos
1973-01-01
Presents a theoretical method for estimating the mass moments of inertia of full trees and boles about a transverse axis. Estimates from the theoretical model compared closely with experimental data on aspen and red pine trees obtained in the field by the pendulum method. The theoretical method presented may be used to estimate the mass moments of inertia and other...
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-09-13
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
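A compact sketch of the described pipeline: generalised cross-correlation with a regularised ML-style weight, followed by the standard two-sensor location formula d1 = (L - c·τ)/2. The regularisation constant, pipe length, and wave speed are placeholders, not the paper's optimised values:

    import numpy as np
    from scipy.signal import csd, welch

    def ml_gcc_delay(x1, x2, fs, nperseg=1024, delta=1e-2):
        _, S12 = csd(x1, x2, fs=fs, nperseg=nperseg)       # cross-spectral density
        _, S11 = welch(x1, fs=fs, nperseg=nperseg)
        _, S22 = welch(x2, fs=fs, nperseg=nperseg)
        coh = np.abs(S12) ** 2 / (S11 * S22)               # magnitude-squared coherence
        w = coh / (np.abs(S12) * (1.0 - coh + delta))      # regularised ML-style weight
        r = np.fft.irfft(w * S12)                          # weighted cross-correlation
        lags = np.arange(len(r))
        lags[lags > len(r) // 2] -= len(r)                 # wrap to signed lags
        return lags[np.argmax(np.abs(r))] / fs

    rng = np.random.default_rng(6)
    s = rng.standard_normal(1 << 15)
    x1 = np.roll(s, 3) + 0.3 * rng.standard_normal(s.size)   # 3-sample offset + noise
    x2 = s + 0.3 * rng.standard_normal(s.size)
    tau = ml_gcc_delay(x1, x2, fs=10_000)
    L, c = 100.0, 400.0                                       # sensor spacing (m), wave speed (m/s)
    print(f"tau = {tau * 1e3:.3f} ms; leak at {(L - c * tau) / 2:.2f} m from sensor 1")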
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter
Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-01-01
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154
Experimental and theoretical characterization of an AC electroosmotic micromixer.
Sasaki, Naoki; Kitamori, Takehiko; Kim, Haeng-Boo
2010-01-01
We have reported on a novel microfluidic mixer based on AC electroosmosis. To elucidate the mixer characteristics, we performed detailed measurements of mixing under various experimental conditions including applied voltage, frequency and solution viscosity. The results are discussed through comparison with results obtained from a theoretical model of AC electroosmosis. As predicted from the theoretical model, we found that a larger voltage (approximately 20 V(p-p)) led to more rapid mixing, while the dependence of the mixing on frequency (1-5 kHz) was insignificant under the present experimental conditions. Furthermore, the dependence of the mixing on viscosity was successfully explained by the theoretical model, and the applicability of the mixer in viscous solution (2.83 mPa s) was confirmed experimentally. By using these results, it is possible to estimate the mixing performance under given conditions. These estimations can provide guidelines for using the mixer in microfluidic chemical analysis.
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2015-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
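The exhaustive-search step can be miniaturised as follows, scoring each candidate suite by the trace of the theoretical MAP error covariance; the sensitivity, prior, and noise matrices are random placeholders for a linear engine model:

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(7)
    n_sensors, n_health, suite_size = 8, 4, 5
    H_full = rng.standard_normal((n_sensors, n_health))    # full sensor sensitivities
    P0 = np.eye(n_health)                                  # health-parameter prior covariance
    R_full = np.diag(rng.uniform(0.05, 0.2, n_sensors))    # sensor noise variances

    def map_sse(rows):
        H = H_full[list(rows)]
        R = R_full[np.ix_(rows, rows)]
        # posterior covariance of a MAP estimator; its trace is the summed MSE
        P = np.linalg.inv(np.linalg.inv(P0) + H.T @ np.linalg.inv(R) @ H)
        return np.trace(P)

    best = min(combinations(range(n_sensors), suite_size), key=map_sse)
    print("best suite:", best, "score:", round(map_sse(best), 4))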
The Stratosphere 1981: Theory and measurements
NASA Technical Reports Server (NTRS)
1982-01-01
Measurements of trace species are compared with theoretical estimates and the similarities and the differences between the two sets of data are discussed. The theoretical predictions are compared with long term trends in both column content and altitude profile of ozone as observed from ground-based and satellite instruments. The chemical kinetics and photochemistry of the stratosphere were reviewed.
Cheng, Zhongtao; Liu, Dong; Luo, Jing; Yang, Yongying; Zhou, Yudi; Zhang, Yupeng; Duan, Lulin; Su, Lin; Yang, Liming; Shen, Yibing; Wang, Kaiwei; Bai, Jian
2015-05-04
A field-widened Michelson interferometer (FWMI) is developed to act as the spectral discriminator in high-spectral-resolution lidar (HSRL). This realization is motivated by the wide-angle Michelson interferometer (WAMI), which has been used broadly in atmospheric wind and temperature detection. This paper describes, for the first time, an independent theoretical framework for the application of the FWMI in HSRL. In the framework, the operation principles and application requirements of the FWMI are discussed in comparison with those of the WAMI. Theoretical foundations for designing this type of interferometer are introduced based on these comparisons. Moreover, a general performance estimation model for the FWMI is established, which can provide common guidelines for the performance budget and evaluation of the FWMI in both the design and operation stages. Examples incorporating many practical imperfections or conditions that may degrade the performance of the FWMI are given to illustrate the implementation of the modeling. This theoretical framework presents a complete and powerful tool for solving most theoretical or engineering problems encountered in FWMI applications, including design, parameter calibration, prior performance budgeting, posterior performance estimation, and so on. It will be a valuable contribution to the lidar community to develop a new generation of HSRLs based on the FWMI spectroscopic filter.
The binned bispectrum estimator: template-based and non-parametric CMB non-Gaussianity searches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucher, Martin; Racine, Benjamin; Tent, Bartjan van, E-mail: bucher@apc.univ-paris7.fr, E-mail: benjar@uio.no, E-mail: vantent@th.u-psud.fr
2016-05-01
We describe the details of the binned bispectrum estimator as used for the official 2013 and 2015 analyses of the temperature and polarization CMB maps from the ESA Planck satellite. The defining aspect of this estimator is the determination of a map bispectrum (3-point correlation function) that has been binned in harmonic space. For a parametric determination of the non-Gaussianity in the map (the so-called fNL parameters), one takes the inner product of this binned bispectrum with theoretically motivated templates. However, as a complementary approach one can also smooth the binned bispectrum using a variable smoothing scale in order to suppress noise and make coherent features stand out above the noise. This allows one to look in a model-independent way for any statistically significant bispectral signal. This approach is useful for characterizing the bispectral shape of the galactic foreground emission, for which a theoretical prediction of the bispectral anisotropy is lacking, and for detecting a serendipitous primordial signal, for which a theoretical template has not yet been put forth. Both the template-based and the non-parametric approaches are described in this paper.
Allometric scaling theory applied to FIA biomass estimation
David C. Chojnacky
2002-01-01
Tree biomass estimates in the Forest Inventory and Analysis (FIA) database are derived from numerous methodologies whose abundance and complexity raise questions about consistent results throughout the U.S. A new model based on allometric scaling theory ("WBE") offers simplified methodology and a theoretically sound basis for improving the reliability and...
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Reid, J. S.; Schmidt, C. C.; Giglio, L.; Prins, E.
2009-12-01
The diurnal cycle of fire activity is crucial for accurate simulation of atmospheric effects of fire emissions, especially at finer spatial and temporal scales. Estimating diurnal variability in emissions is also a critical problem for construction of emissions estimates from multiple sensors with variable coverage patterns. An optimal diurnal emissions estimate will use as much information as possible from satellite fire observations, compensate known biases in those observations, and use detailed theoretical models of the diurnal cycle to fill in missing information. As part of ongoing improvements to the Fire Location and Monitoring of Burning Emissions (FLAMBE) fire monitoring system, we evaluated several different methods of integrating observations with different temporal sampling. We used geostationary fire detections from WF_ABBA, fire detection data from MODIS, empirical diurnal cycles from TRMM, and simple theoretical diurnal curves based on surface heating. Our experiments integrated these data in different combinations to estimate the diurnal cycles of emissions for each location and time. Hourly emissions estimates derived using these methods were tested using an aerosol transport model. We present results of this comparison, and discuss the implications of our results for the broader problem of multi-sensor data fusion in fire emissions modeling.
NASA Technical Reports Server (NTRS)
Liu, G.
1985-01-01
One of the major concerns in the design of an active control system is obtaining the information needed for effective feedback. This involves the combination of sensing and estimation. A sensor location index is defined as the weighted sum of the mean square estimation errors, in which the sensor locations can be regarded as estimator design parameters. The design goal is to choose these locations to minimize the sensor location index. The choice of the number of sensors is a tradeoff between the estimation quality, based upon the same performance index, and the total costs of installing and maintaining extra sensors. An experimental study for choosing the sensor location was conducted on an aeroelastic system. The system model, which includes the unsteady aerodynamics model developed by Stephen Rock, was improved. Experimental results verify the trend of the theoretical predictions of the sensor location index for different sensor locations at various wind speeds.
1981-08-01
electro-optic effect is investigated both theoretically and experimentally. The theoretical approach is based upon W.A. Harrison's 'Bond-Orbital Model'. The separate electronic and lattice contributions to the second-order, electro-optic susceptibility are examined within the context of this model, and formulae which can accommodate any crystal structure are presented. In addition, a method for estimating the lattice response to a low frequency (dc) electric field is outlined. Finally, experimental measurements of the electro-
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
Verifying reddening and extinction for Gaia DR1 TGAS giants
NASA Astrophysics Data System (ADS)
Gontcharov, George A.; Mosenkov, Aleksandr V.
2018-03-01
Gaia DR1 Tycho-Gaia Astrometric Solution parallaxes, Tycho-2 photometry, and reddening/extinction estimates from nine data sources for 38 074 giants within 415 pc from the Sun are used to compare their position in the Hertzsprung-Russell diagram with theoretical estimates, which are based on the PARSEC and MIST isochrones and the TRILEGAL model of the Galaxy with its parameters being widely varied. We conclude that (1) some systematic errors of the reddening/extinction estimates are the main uncertainty in this study; (2) any emission-based 2D reddening map cannot give reliable estimates of reddening within 415 pc due to a complex distribution of dust; (3) if a TRILEGAL's set of the parameters of the Galaxy is reliable and if the solar metallicity is Z < 0.021, then the reddening at high Galactic latitudes behind the dust layer is underestimated by all 2D reddening maps based on the dust emission observations of IRAS, COBE, and Planck and by their 3D followers (we also discuss some explanations of this underestimation); (4) the reddening/extinction estimates from recent 3D reddening map by Gontcharov, including the median reddening E(B - V) = 0.06 mag at |b| > 50°, give the best fit of the empirical and theoretical data with each other.
A biomechanical model for fibril recruitment: Evaluation in tendons and arteries.
Bevan, Tim; Merabet, Nadege; Hornsby, Jack; Watton, Paul N; Thompson, Mark S
2018-06-06
Simulations of soft tissue mechanobiological behaviour are increasingly important for clinical prediction of aneurysm, tendinopathy and other disorders. Mechanical behaviour at low stretches is governed by fibril straightening, transitioning into load-bearing at the recruitment stretch, resulting in a tissue stiffening effect. Previous investigations have suggested theoretical relationships between stress-stretch measurements and the recruitment probability density function (PDF) but have neither derived these rigorously nor evaluated them experimentally. Other work has proposed image-based methods for measurement of recruitment but made use of arbitrary fibril critical straightness parameters. The aim of this work was to provide a sound theoretical basis for estimating the recruitment PDF from stress-stretch measurements and to evaluate this relationship using image-based methods, clearly motivating the choice of fibril critical straightness parameter in rat tail tendon and porcine artery. Rigorous derivation showed that the recruitment PDF may be estimated from the second stretch derivative of the first Piola-Kirchhoff tissue stress. Image-based fibril recruitment identified the fibril straightness parameter that maximised Pearson correlation coefficients (PCC) with the estimated PDFs. Using these critical straightness parameters, the new method for estimating the recruitment PDF showed a PCC with image-based measures of 0.915 and 0.933 for tendons and arteries respectively. This method may be used for accurate estimation of the fibril recruitment PDF in mechanobiological simulations where fibril-level mechanical parameters are important for predicting cell behaviour. Copyright © 2018 Elsevier Ltd. All rights reserved.
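The paper's central relation, a recruitment PDF proportional to the second stretch derivative of the first Piola-Kirchhoff stress, can be exercised numerically on a synthetic stress-stretch curve (the softplus form and its parameters are invented for illustration):

    import numpy as np

    lam = np.linspace(1.0, 1.1, 201)                  # stretch grid
    # toy stress curve whose stiffening encodes recruitment around lam ~ 1.05
    stress = 0.01 * np.log(1 + np.exp((lam - 1.05) / 0.01))

    d2 = np.gradient(np.gradient(stress, lam), lam)   # d^2 P / d lambda^2
    pdf = np.clip(d2, 0, None)
    pdf /= pdf.sum() * (lam[1] - lam[0])              # normalise to a PDF
    print("mode of recruitment PDF at stretch", lam[np.argmax(pdf)])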
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, which can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on this loss: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in both the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
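As a concrete instance of this setup, the following sketch runs functional gradient descent in an RKHS with a Welsch-type robust loss, one common choice of windowing function G. The Gaussian kernel, step size, and early-stopping horizon are illustrative assumptions, not the paper's prescriptions.

```python
import numpy as np

def gaussian_kernel(X1, X2, width=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def robust_kernel_gd(X, y, sigma=1.0, eta=0.5, T=200, width=1.0):
    """Gradient descent in an RKHS with the Welsch-type robust loss
    l(r) = (sigma^2/2) * (1 - exp(-r^2/sigma^2)); stopping after T
    iterations acts as implicit regularization (early stopping)."""
    n = len(y)
    K = gaussian_kernel(X, X, width)
    alpha = np.zeros(n)                          # f = sum_i alpha_i K(x_i, .)
    for _ in range(T):
        r = K @ alpha - y                        # residuals f(x_i) - y_i
        grad = r * np.exp(-r ** 2 / sigma ** 2)  # l'(r): redescending influence
        alpha -= (eta / n) * grad                # functional gradient step
    return alpha, K
```

Because the influence function l'(r) decays for large residuals, outliers contribute little to each update, which is the source of the robustness the paper analyzes.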
Numerical computation of hurricane effects on historic coastal hydrology in Southern Florida
Swain, Eric D.; Krohn, M. Dennis; Langtimm, Catherine A.
2015-01-01
The hindcast simulation estimated hydrologic processes for the 1926 to 1932 period. It shows promise as a simulator in long-term ecological studies for testing hypotheses based on theoretical or empirical studies at larger landscape scales.
Can You Tell the Density of the Watermelon from This Photograph?
ERIC Educational Resources Information Center
Foong, See Kit; Lim, Chim Chai
2010-01-01
Based on a photograph, the density of a watermelon floating in a pail of water is estimated with different levels of simplification--with and without consideration of refraction and three-dimensional effects. The watermelon was approximated as a sphere. The results of the theoretical estimations were verified experimentally. (Contains 6 figures.)
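The underlying arithmetic is Archimedes' principle for a floating sphere: the submerged volume fraction equals the ratio of the fruit's density to that of water, and the submerged region is a spherical cap. A minimal sketch follows; the radius and submerged depth in the usage comment are made-up numbers, not the paper's measurements.

```python
import numpy as np

def sphere_density_from_photo(R, h, rho_water=1000.0):
    """Density of a floating sphere from a photograph: R is the sphere
    radius and h the submerged depth (same units, read off the image).
    By Archimedes' principle the submerged volume fraction equals the
    density ratio."""
    v_cap = np.pi * h**2 * (3*R - h) / 3.0   # submerged spherical cap volume
    v_sph = 4.0 / 3.0 * np.pi * R**3
    return rho_water * v_cap / v_sph         # kg m^-3

# e.g. a 12 cm radius melon floating with 21 cm of its diameter submerged:
# sphere_density_from_photo(12.0, 21.0) -> ~957 kg m^-3
```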
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.
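One standard way to capture the weight-cost relationship mentioned above is a power-law cost estimating relationship fit to historical data in log-log space. The sketch below is a generic illustration with placeholder numbers, not the paper's model or its historical data base.

```python
import numpy as np

# Historical (weight kg, cost $M) pairs -- placeholder values only.
w = np.array([150., 420., 900., 2300., 5100.])
c = np.array([12., 35., 60., 140., 260.])

# Fit the power-law CER  cost = a * weight^b  by least squares in log space.
b, log_a = np.polyfit(np.log(w), np.log(c), 1)
a = np.exp(log_a)
predict = lambda weight: a * weight ** b
print(f"cost ~ {a:.2f} * W^{b:.2f}; W = 1000 kg -> ${predict(1000):.0f}M")
```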
NASA Astrophysics Data System (ADS)
Léon, Olivier; Piot, Estelle; Sebbane, Delphine; Simon, Frank
2017-06-01
The present study provides theoretical details and experimental validation results for the approach proposed by Minotti et al. (Aerosp Sci Technol 12(5):398-407, 2008) for measuring amplitudes and phases of acoustic velocity components (AVC), i.e., the waveform parameters of each velocity component induced by an acoustic wave, in fully turbulent duct flows carrying multi-tone acoustic waves. Theoretical results support that the proposed turbulence rejection method, based on the estimation of cross power spectra between velocity measurements and a reference signal such as a wall pressure measurement, provides asymptotically efficient estimators with respect to the number of samples. Furthermore, it is shown that the estimator uncertainties can be simply estimated, accounting for the characteristics of the measured flow turbulence spectra. Two laser-based measurement campaigns were conducted in order to validate the acoustic velocity estimation approach and the uncertainty estimates derived. While in previous studies estimates were obtained using laser Doppler velocimetry (LDV), it is demonstrated that high-repetition-rate particle image velocimetry (PIV) can also be successfully employed. The two measurement techniques provide very similar acoustic velocity amplitude and phase estimates for the cases investigated, which are of practical interest for acoustic liner studies. In a broader sense, this approach may be beneficial for non-intrusive sound emission studies in wind tunnel testing.
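The turbulence-rejection idea, estimating the acoustic component through cross power spectra between the velocity signal and a reference wall-pressure signal, can be sketched with standard Welch-type estimators. The function below is an illustration under assumed conventions, not the authors' implementation; scipy's cross-spectrum phase convention should be checked against the desired sign convention, and the segment length is a free design parameter.

```python
import numpy as np
from scipy.signal import csd, welch

def avc_transfer(u, p_ref, fs, f0, nperseg=4096):
    """Cross-spectral estimate of the acoustic velocity component at tone
    frequency f0. Turbulence uncorrelated with the wall-pressure reference
    contributes no bias to S_up, only variance that decreases with the
    number of averaged segments. Returns the complex transfer function at
    the tone; its modulus and argument give the AVC amplitude and phase
    per unit reference pressure."""
    f, S_up = csd(u, p_ref, fs=fs, nperseg=nperseg)   # cross spectrum
    _, S_pp = welch(p_ref, fs=fs, nperseg=nperseg)    # reference auto-spectrum
    k = np.argmin(np.abs(f - f0))                     # bin nearest the tone
    return S_up[k] / S_pp[k]
```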
An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.
Brito da Silva, Leonardo Enzo; Wunsch, Donald C
2018-06-01
Improved data visualization will be a significant tool to enhance cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the IT-vis cluster revealing capabilities, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
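For concreteness, the similarity measure at the heart of IT-vis, the cross information potential between the data subsets of two adjacent neurons, reduces with Gaussian Parzen windows to a double sum of Gaussians. The sketch below uses a common CIP definition; the kernel width and the details of the paper's estimator may differ.

```python
import numpy as np

def cross_information_potential(A, B, sigma=1.0):
    """Cross information potential between two data subsets (e.g., the
    samples mapped to two adjacent SOM neurons), using Parzen windows with
    Gaussian kernels: CIP = mean over pairs of G_{sigma*sqrt(2)}(a_i - b_j).
    Renyi's quadratic cross entropy is -log(CIP)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    s2 = 2 * sigma ** 2                                   # kernel widths add
    g = np.exp(-d2 / (2 * s2)) / (2 * np.pi * s2) ** (A.shape[1] / 2)
    return g.mean()
```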
Haldar, Justin P.; Leahy, Richard M.
2013-01-01
This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. PMID:23353603
The neural representation of unexpected uncertainty during value-based decision making.
Payzan-LeNestour, Elise; Dunne, Simon; Bossaerts, Peter; O'Doherty, John P
2013-07-10
Uncertainty is an inherent property of the environment and a central feature of models of decision-making and learning. Theoretical propositions suggest that one form, unexpected uncertainty, may be used to rapidly adapt to changes in the environment, while being influenced by two other forms: risk and estimation uncertainty. While previous studies have reported neural representations of estimation uncertainty and risk, relatively little is known about unexpected uncertainty. Here, participants performed a decision-making task while undergoing functional magnetic resonance imaging (fMRI), which, in combination with a Bayesian model-based analysis, enabled us to separately examine each form of uncertainty examined. We found representations of unexpected uncertainty in multiple cortical areas, as well as the noradrenergic brainstem nucleus locus coeruleus. Other unique cortical regions were found to encode risk, estimation uncertainty, and learning rate. Collectively, these findings support theoretical models in which several formally separable uncertainty computations determine the speed of learning. Copyright © 2013 Elsevier Inc. All rights reserved.
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and they make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), to estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
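A minimal PSO sketch is shown below. The inertia and acceleration constants are common textbook values, and neg_log_like stands in for a cosmological likelihood (e.g., a CMB likelihood evaluated through a Boltzmann code), which is assumed rather than implemented here.

```python
import numpy as np

def pso(neg_log_like, bounds, n_particles=40, iters=300,
        w=0.72, c1=1.49, c2=1.49, seed=0):
    """Minimal Particle Swarm Optimization for likelihood maximization:
    particles explore parameter space, pulled toward their personal best
    and the swarm best. bounds is a sequence of (lo, hi) per parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))       # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()
    pbest_f = np.array([neg_log_like(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                    # swarm best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                        # keep in the box
        f = np.array([neg_log_like(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```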
Doppler-shift estimation of flat underwater channel using data-aided least-square approach
NASA Astrophysics Data System (ADS)
Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing
2015-06-01
In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for channel equalization as well as Doppler estimation. We assume the channel has been equalized and consider only a flat-fading channel. First, the theoretical received sequence is composed from the training symbols. Next, the least-squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signals. An iterative approach is then applied to solve the least-squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e., the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium- to high-SNR cases.
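A simplified sketch of the data-aided least-squares idea follows. It replaces the paper's nested iteration (outer loop for channel gain, inner for the Doppler coefficient) with a grid search over the Doppler factor and a closed-form solution for the complex gain; the resampling callback and names are assumptions for illustration.

```python
import numpy as np

def doppler_ls(rx, tx_resampled, deltas):
    """Data-aided LS Doppler estimation on a flat-fading channel. For each
    candidate Doppler factor delta, tx_resampled(delta) must return the
    known training signal time-scaled as s((1 + delta) t); the complex
    channel gain then has a closed-form LS solution, and we keep the
    (delta, gain) pair minimizing the residual."""
    best = (np.inf, None, None)
    for d in deltas:                        # search over Doppler coefficient
        s = tx_resampled(d)
        a = np.vdot(s, rx) / np.vdot(s, s)  # closed-form LS channel gain
        err = np.linalg.norm(rx - a * s) ** 2
        if err < best[0]:
            best = (err, d, a)
    return best[1], best[2]                 # (Doppler factor, channel gain)
```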
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
The nonstationary strain filter in elastography: Part I. Frequency dependent attenuation.
Varghese, T; Ophir, J
1997-01-01
The accuracy and precision of the strain estimates in elastography depend on a myriad of factors. A clear understanding of the various factors (noise sources) that plague strain estimation is essential to obtain quality elastograms. The nonstationary variation in the performance of the strain filter due to frequency-dependent attenuation and lateral and elevational signal decorrelation is analyzed in this and the companion paper for the cross-correlation-based strain estimator. In this paper, we focus on the role of frequency-dependent attenuation in the performance of the strain estimator. The reduction in the signal-to-noise ratio (SNRs) of the RF signal, and the center frequency and bandwidth downshift with frequency-dependent attenuation, are incorporated into the strain filter formulation. Both linear and nonlinear frequency dependence of attenuation are theoretically analyzed. Monte Carlo simulations are used to corroborate the theoretically predicted results. Experimental results illustrate the deterioration in the precision of the strain estimates with depth in a uniformly elastic phantom. Theoretical, simulation, and experimental results indicate the importance of high SNRs values in the RF signals, because the strain estimation sensitivity, elastographic SNRe, and dynamic range deteriorate rapidly with a decrease in the SNRs. In addition, a shift in the strain filter toward higher strains is observed at large depths in tissue due to the center frequency downshift.
Estimation of coefficients and boundary parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Murphy, K. A.
1984-01-01
Semi-discrete Galerkin approximation schemes are considered in connection with inverse problems for the estimation of spatially varying coefficients and boundary condition parameters in second order hyperbolic systems typical of those arising in 1-D surface seismic problems. Spline based algorithms are proposed for which theoretical convergence results along with a representative sample of numerical findings are given.
Rayne, Sierra; Forest, Kaya
2014-09-19
The air-water partition coefficient (Kaw) of perfluoro-2-methyl-3-pentanone (PFMP) was estimated using the G4MP2/G4 levels of theory and the SMD solvation model. A suite of 31 fluorinated compounds was employed to calibrate the theoretical method. Excellent agreement between experimental and directly calculated Kaw values was obtained for the calibration compounds. The PCM solvation model was found to yield unsatisfactory Kaw estimates for fluorinated compounds at both levels of theory. The HENRYWIN Kaw estimation program also exhibited poor Kaw prediction performance on the training set. Based on the resulting regression equation for the calibration compounds, the G4MP2-SMD method constrained the estimated Kaw of PFMP to the range 5-8 × 10^-6 M atm^-1. The magnitude of this Kaw range indicates almost all PFMP released into the atmosphere or near the land-atmosphere interface will reside in the gas phase, with only minor quantities dissolved in the aqueous phase as the parent compound and/or its hydrate/hydrate conjugate base. Following discharge into aqueous systems not at equilibrium with the atmosphere, significant quantities of PFMP will be present as the dissolved parent compound and/or its hydrate/hydrate conjugate base.
1996-06-01
Theoretical aspects of deep level energies and capture cross sections; summary of theoretical estimates and experimental measurements of the valence band offset at the AlAs-GaAs heterointerface.
Zhang, Hong; Zou, Sheng; Chen, Xiyuan; Ding, Ming; Shan, Guangcun; Hu, Zhaohui; Quan, Wei
2016-07-25
We present a method for monitoring the atomic number density on site based on atomic spin-exchange relaxation. When the spin polarization P ≪ 1, the atomic number density can be estimated by measuring the magnetic resonance linewidth in an applied DC magnetic field using an all-optical atomic magnetometer. The density measurements showed that the experimental results and the theoretical predictions were in good agreement over the investigated temperature range from 413 K to 463 K, although the experimental values were approximately 1.5-2 times lower than the theoretical predictions estimated from the saturated vapor pressure curve. These deviations were mainly induced by the radiative heat transfer efficiency, which inevitably led to a lower temperature in the cell than the set temperature.
H∞ state estimation of stochastic memristor-based neural networks with time-varying delays.
Bao, Haibo; Cao, Jinde; Kurths, Jürgen; Alsaedi, Ahmed; Ahmad, Bashir
2018-03-01
This paper addresses the problem of H∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of Filippov solutions, the stochastic memristor-based neural networks are transformed into systems with interval parameters. The present paper is the first to investigate the H∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and stochastic analysis techniques, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H∞ performance. An explicit expression for the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce the control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Information theoretic analysis of canny edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2011-06-01
In general edge-detection evaluation, edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission, and display processes that impact the quality of the acquired image and thus of the resulting edge image. We propose a new information-theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge-detection algorithms in an integrated manner based on Shannon's information theory. An edge-detection algorithm is considered here to achieve high performance only if the information rate from the scene to the edge image approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge-detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift-variant, we perform the estimation for a set of different system environment conditions using simulations. In this paper we first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we assess the Canny operator using information-theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
NASA Astrophysics Data System (ADS)
Vanini, Seyed Ali Sadough; Abolghasemzadeh, Mohammad; Assadi, Abbas
2013-07-01
Functionally graded steels with graded ferritic and austenitic regions, including bainite and martensite intermediate layers, produced by electroslag remelting have attracted much attention in recent years. In this article, an empirical model based on the Zener-Hollomon (Z-H) constitutive equation with generalized material constants is presented to investigate the effects of temperature and strain rate on the hot working behavior of functionally graded steels. Next, a theoretical model, generalized by strain compensation, is developed for the flow stress estimation of functionally graded steels under hot compression based on the phase mixture rule and boundary layer characteristics. The model is used for different strains and grading configurations. Specifically, the results for αβγMγ steels from the empirical and theoretical models showed excellent agreement with experiments reported in other references, within acceptable error.
NASA Technical Reports Server (NTRS)
Holmquist, R.; Pearl, D.
1980-01-01
Theoretical equations are derived for molecular divergence with respect to gene and protein structure in the presence of genetic events with unequal probabilities: amino acid and base compositions, the frequencies of nucleotide replacements, the usage of degenerate codons, the distribution of fixed base replacements within codons, and the distribution of fixed base replacements among codons. Results are presented in the form of tables relating the probabilities of given numbers of codon base changes with respect to the original codon for the alpha hemoglobin, beta hemoglobin, myoglobin, cytochrome c, and parvalbumin group gene families. Application of the calculations to the rabbit alpha and beta hemoglobin mRNAs and proteins indicates that the genes are separated by about 425 fixed base replacements distributed over 114 codon sites, which is a factor of two greater than previous estimates. The theoretical results also suggest that many more base replacements are required to effect a given gene or protein structural change than previously believed.
An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars.
Huang, Jiyan; Zhang, Ying; Luo, Shan
2017-12-15
Localization of a moving target in a dual-frequency radars system has now gained considerable attention. The noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. Compared with the LS method, a novel localization method based on a two-step weighted least squares estimator is proposed to increase positioning accuracy for a multi-station dual-frequency radars system in this paper. The effects of signal noise ratio and the number of samples on the performance of range estimation are also analyzed in the paper. Furthermore, both the theoretical variance and Cramer-Rao lower bound (CRLB) are derived. The simulation results verified the proposed method.
NASA Astrophysics Data System (ADS)
Yu, Jie; Liu, Yikan; Yamamoto, Masahiro
2018-04-01
In this article, we investigate the determination of the spatial component in the time-dependent second order coefficient of a hyperbolic equation from both theoretical and numerical aspects. By the Carleman estimates for general hyperbolic operators and an auxiliary Carleman estimate, we establish local Hölder stability with either partial boundary or interior measurements under certain geometrical conditions. For numerical reconstruction, we minimize a Tikhonov functional which penalizes the gradient of the unknown function. Based on the resulting variational equation, we design an iteration method which is updated by solving a Poisson equation at each step. One-dimensional prototype examples illustrate the numerical performance of the proposed iteration.
Vast Portfolio Selection with Gross-exposure Constraints*
Fan, Jianqing; Zhang, Jingjin; Yu, Ke
2012-01-01
We introduce large portfolio selection using gross-exposure constraints. We show that with a gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices have performance similar to the theoretical optimal ones, and there is no error-accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. The applications to portfolio selection, tracking, and improvements are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and 600 stocks randomly selected from the Russell 3000. PMID:23293404
Effect of non-Poisson samples on turbulence spectra from laser velocimetry
NASA Technical Reports Server (NTRS)
Sree, Dave; Kjelgaard, Scott O.; Sellers, William L., III
1994-01-01
Spectral analysis of laser velocimetry (LV) data plays an important role in characterizing a turbulent flow and in estimating the associated turbulence scales, which can be helpful in validating theoretical and numerical turbulence models. The determination of turbulence scales is critically dependent on the accuracy of the spectral estimates. Spectral estimations from 'individual realization' laser velocimetry data are typically based on the assumption of a Poisson sampling process. What this Note has demonstrated is that the sampling distribution must be considered before spectral estimates are used to infer turbulence scales.
On the development of nugget growth model for resistance spot welding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Kang, E-mail: zhoukang326@126.com, E-mail: melcai@ust.hk; Cai, Lilong, E-mail: zhoukang326@126.com, E-mail: melcai@ust.hk
2014-04-28
In this paper, we developed a general mathematical model to estimate the nugget growth process based on the heat energy delivered into the welds by resistance spot welding. According to the principles of thermodynamics and heat transfer, and the effect of electrode force during the welding process, the shape of the nugget can be estimated. Then, a mathematical model between heat energy absorbed and nugget diameter can be obtained theoretically. It is shown in this paper that the nugget diameter can be precisely described by piecewise fractal polynomial functions. Experiments were conducted with different welding operation conditions, such as welding currents, workpiece thicknesses, and widths, to validate the model and the theoretical analysis. All the experiments confirmed that the proposed model can predict the nugget diameters with high accuracy based on the input heat energy to the welds.
Estimating 3D positions and velocities of projectiles from monocular views.
Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P
2009-05-01
In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
Stochastic stability of sigma-point Unscented Predictive Filter.
Cao, Lu; Tang, Yu; Chen, Xiaoqian; Zhao, Yong
2015-07-01
In this paper, the Unscented Predictive Filter (UPF) is derived based on the unscented transformation for nonlinear estimation, which breaks the confines of conventional sigma-point filters, which employ only the Kalman filter as the subject of investigation. To facilitate the new method, the algorithm flow of the UPF is given first. Then, theoretical analyses demonstrate that the estimation accuracy of the model error and of the system state for the UPF is higher than that of the conventional PF. Moreover, the authors analyze the stochastic boundedness and the error behavior of the UPF for general nonlinear systems in a stochastic framework. In particular, the theoretical results show that the estimation error remains bounded and the covariance stays stable if the system's initial estimation error, the disturbing noise terms, and the model error are small enough, which is the core of the UPF theory. All of the results are demonstrated by numerical simulations for a nonlinear example system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Brenton J. Dickinson; Brett J. Butler
2013-01-01
The USDA Forest Service's National Woodland Owner Survey (NWOS) is conducted to better understand the attitudes and behaviors of private forest ownerships, which control more than half of US forestland. Inferences about the populations of interest should be based on theoretically sound estimation procedures. A recent review of the procedures disclosed an error in...
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.
A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.
Bord, Séverine; Bioche, Christèle; Druilhet, Pierre
2018-05-01
We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. We then apply our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
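To make the penalization idea concrete, the sketch below maximizes a removal-sampling log-likelihood plus a logarithmic penalty that discourages very small sampling rates. The catch data and penalty weight are placeholders, and the exact penalty form used in the paper may differ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

catches = np.array([41, 26, 17])          # removals per pass (made-up data)

def neg_pen_loglik(theta, lam=1.0):
    """Negative penalized log-likelihood for removal sampling: each pass
    catches c_k ~ Binomial(remaining, p). The lam*log(p) penalty keeps the
    sampling rate away from 0, where the N estimate would diverge."""
    N, p = theta
    if p <= 0 or p >= 1 or N < catches.sum():
        return np.inf
    rem, ll = N, 0.0
    for c in catches:
        ll += (gammaln(rem + 1) - gammaln(c + 1) - gammaln(rem - c + 1)
               + c * np.log(p) + (rem - c) * np.log(1 - p))
        rem -= c
    return -(ll + lam * np.log(p))

res = minimize(neg_pen_loglik, x0=[2 * catches.sum(), 0.3],
               method="Nelder-Mead")      # res.x ~ (N_hat, p_hat)
```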
Lamb Shift of n = 1 and n = 2 States of Hydrogen-like Atoms, 1 ≤ Z ≤ 110
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yerokhin, V. A.; Shabaev, V. M.
2015-09-15
Theoretical energy levels of the n = 1 and n = 2 states of hydrogen-like atoms with the nuclear charge numbers 1 ≤ Z ≤ 110 are tabulated. The tabulation is based on ab initio quantum electrodynamics calculations performed to all orders in the nuclear binding strength parameter Zα, where α is the fine structure constant. Theoretical errors due to various effects are critically examined and estimated.
Stability of Castering Wheels for Aircraft Landing Gears
NASA Technical Reports Server (NTRS)
Kantrowitz, Arthur
1940-01-01
A theoretical study was made of the shimmy of castering wheels. The theory is based on the discovery of a phenomenon called kinematic shimmy. Experimental checks, use being made of a model having low-pressure tires, are reported and the applicability of the results to full scale is discussed. Theoretical methods of estimating the spindle viscous damping and the spindle solid friction necessary to avoid shimmy are given. A new method of avoiding shimmy -- lateral freedom -- is introduced.
Relation between L-band soil emittance and soil water content
NASA Technical Reports Server (NTRS)
Stroosnijder, L.; Lascano, R. J.; Van Bavel, C. H. M.; Newton, R. W.
1986-01-01
An experimental relation between soil emittance (E) at L-band and soil surface moisture content (M) is compared with a theoretical one. The latter depends on the soil dielectric constant, which is a function of both soil moisture content and of soil texture. It appears that a difference of 10 percent in the surface clay content causes a change in the estimate of M on the order of 0.02 cu m/cu m. This is based on calculations with a model that simulates the flow of water and energy, in combination with a radiative transfer model. It is concluded that an experimental determination of the E-M relation for each soil type is not required, and that a rough estimate of the soil texture will lead to a sufficiently accurate estimate of soil moisture from a general, theoretical relationship obtained by numerical simulation.
A Gendered Lifestyle-Routine Activity Approach to Explaining Stalking Victimization in Canada.
Reyns, Bradford W; Henson, Billy; Fisher, Bonnie S; Fox, Kathleen A; Nobles, Matt R
2016-05-01
Research into stalking victimization has proliferated over the last two decades, but several research questions related to victimization risk remain unanswered. Accordingly, the present study utilized a lifestyle-routine activity theoretical perspective to identify risk factors for victimization. Gender-based theoretical models also were estimated to assess the possible moderating effects of gender on the relationship between lifestyle-routine activity concepts and victimization risk. Based on an analysis of a representative sample of more than 15,000 residents of Canada from the Canadian General Social Survey (GSS), results suggested conditional support for lifestyle-routine activity theory and for the hypothesis that predictors of stalking victimization may be gender based. © The Author(s) 2015.
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm is proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
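For reference, the first estimator discussed, the kernel estimator, takes the following form. The automatic bandwidth below is a standard normal-reference rule, offered only as a stand-in for the interactive algorithm proposed in the paper.

```python
import numpy as np

def kernel_density(x_eval, sample, h=None):
    """Gaussian kernel density estimate. If no scaling factor h is given,
    fall back on the normal-reference rule h = 1.06 * sigma * n^(-1/5)
    (a common automatic choice, not the paper's procedure)."""
    n = len(sample)
    if h is None:
        h = 1.06 * np.std(sample, ddof=1) * n ** (-1 / 5)
    z = (x_eval[:, None] - sample[None, :]) / h         # scaled distances
    return np.exp(-0.5 * z**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
```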
Przybyla, Jay; Taylor, Jeffrey; Zhou, Xuesong
2010-01-01
In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy. PMID:22163641
A parametric method for determining the number of signals in narrow-band direction finding
NASA Astrophysics Data System (ADS)
Wu, Qiang; Fuhrmann, Daniel R.
1991-08-01
A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
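The eigenvalue-based baseline referred to here is the information-theoretic detector of Wax and Kailath; a compact sketch of its MDL variant is given below for comparison with the parametric approach.

```python
import numpy as np

def mdl_num_signals(R_hat, N):
    """Classical eigenvalue-based MDL detector of Wax & Kailath (1985):
    R_hat is the p x p estimated correlation matrix from N snapshots.
    The estimated number of signals minimizes the MDL criterion."""
    lam = np.sort(np.linalg.eigvalsh(R_hat))[::-1]   # descending eigenvalues
    p = len(lam)
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]                               # presumed noise subspace
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)  # geo/arith mean
        mdl[k] = -N * (p - k) * np.log(ratio) + 0.5 * k * (2*p - k) * np.log(N)
    return int(np.argmin(mdl))
```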
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
NASA Astrophysics Data System (ADS)
Ahmed, Mousumi
Designing control techniques for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied, and an extensive study of backstepping-based techniques is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore the controller for cooperative and coordinating unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller for a single UAV in a three-dimensional environment is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller computes three autopilot modes, i.e., velocity, ground heading (or course angle), and flight path angle, for tracking the unmanned vehicle. Numerical implementation is performed in MATLAB under the assumption of perfect and full state information of the target to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. Consensus-based cooperative control is studied; such consensus-based control problems can be viewed through the concepts of algebraic graph theory. The communication structure between the UAVs is represented by a dynamic graph, where UAVs are represented by the nodes and communication links by the edges. The previously designed controller is augmented to account for the group so as to obtain consensus based on their communication. A theoretical development of the controller for the cooperative group of UAVs is presented, and simulation results for different communication topologies are shown. This research also investigates cases where the communication topology switches to a different topology at particular time instants. Lyapunov analysis is performed to show stability in all cases. Another important aspect of this dissertation research is to implement the controller for the case where perfect or full state information is not available. This necessitates the design of an estimator to estimate the system state. A nonlinear estimator, the Extended Kalman Filter (EKF), is first developed for target tracking with a single UAV. The uncertainties involved with the measurement model and the dynamics model are considered as zero-mean Gaussian noises with known covariances. Measurements of the full state of the target are not available; only the range, elevation, and azimuth angle are available from an onboard seeker sensor. A separate EKF is designed to estimate the UAV's own state, for which state measurements are available through on-board sensors. The controller computes the three control commands based on the estimated states of the target and its own estimated states. Estimation-based control laws are also implemented for colored measurement noise, and the controller performance is shown with simulation results. The estimation-based control approach is then extended to the cooperative target tracking case. The target information is available to the network, and a separate estimator is used to estimate the target states. All of the UAVs in the network apply the same control law; the only difference is that each UAV updates the commands according to its connections.
The simulation is performed for both fixed and time-varying communication topologies. Monte Carlo simulation with different noise samples is also performed to investigate the performance of the estimator. The proposed technique is shown to be simple and robust in noisy environments.
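A single predict-update cycle of the EKF described above, for a constant-velocity target state and a seeker measuring range, azimuth, and elevation, can be sketched as follows. The state layout and noise covariances are illustrative assumptions, not the dissertation's exact design.

```python
import numpy as np

def ekf_step(x, P, z, dt, Q, R):
    """One EKF cycle with state x = [px, py, pz, vx, vy, vz] under a
    constant-velocity model and measurement z = [range, azimuth, elevation].
    Assumes the target is not directly overhead (horizontal range > 0)."""
    F = np.eye(6); F[:3, 3:] = dt * np.eye(3)          # CV dynamics
    x = F @ x                                          # predict state
    P = F @ P @ F.T + Q                                # predict covariance
    px, py, pz = x[:3]
    rho = np.linalg.norm(x[:3]); rxy = np.hypot(px, py)
    h = np.array([rho, np.arctan2(py, px), np.arctan2(pz, rxy)])
    H = np.zeros((3, 6))                               # measurement Jacobian
    H[0, :3] = x[:3] / rho
    H[1, 0], H[1, 1] = -py / rxy**2, px / rxy**2
    H[2, 0] = -px * pz / (rho**2 * rxy)
    H[2, 1] = -py * pz / (rho**2 * rxy)
    H[2, 2] = rxy / rho**2
    S = H @ P @ H.T + R                                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    y = z - h
    y[1:] = (y[1:] + np.pi) % (2 * np.pi) - np.pi      # wrap angle residuals
    x = x + K @ y                                      # update state
    P = (np.eye(6) - K @ H) @ P                        # update covariance
    return x, P
```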
NASA Astrophysics Data System (ADS)
Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.
2007-09-01
Most direction-of-arrival (DOA) estimation work has focused on a two-dimensional (2D) scenario in which only the azimuth angle needs to be estimated, but in various practical situations a three-dimensional scenario must be handled. Being able to estimate both azimuth and elevation angles with high accuracy and low complexity is therefore of interest. We present the theoretical and practical issues of DOA estimation using the Approximate-Maximum-Likelihood (AML) algorithm in a 3D scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays, each consisting of 8 microphones, for field measurements. The processing of the measured data from the acoustic arrays for different azimuth and elevation angles confirms the effectiveness of the proposed methods.
Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi
2016-05-23
A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B
2000-12-01
Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended the phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. The phase 1 experiments supported the two-compartment DXA soft tissue model and established that pixel ratios of low to high energy (R values) are a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.
Robot path planning algorithm based on symbolic tags in dynamic environment
NASA Astrophysics Data System (ADS)
Vokhmintsev, A.; Timchenko, M.; Melnikov, A.; Kozko, A.; Makovetskii, A.
2017-09-01
The present work proposes new heuristic algorithms for path planning of a mobile robot in an unknown dynamic environment; the algorithms have theoretically established estimates of computational complexity and have been validated on specific applied problems.
NASA Astrophysics Data System (ADS)
Kim, G.; Che, I. Y.
2017-12-01
We evaluated relationships among the source parameters of underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks are incorporated to measure locations and origin times precisely. Location analyses show that the distances among the events are tiny on a regional scale. These small location differences validate a linear-model assumption. We estimated source spectral ratios by excluding path effects through spectral ratios of the observed seismograms. We estimated an empirical relationship among depths of burial and yields based on theoretical source models.
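The path-cancellation premise can be sketched directly: for co-located events recorded at a common station, dividing smoothed amplitude spectra removes the shared path and site terms, leaving an estimate of the source spectral ratio. The window, smoothing length, and names below are illustrative assumptions.

```python
import numpy as np

def source_spectral_ratio(w1, w2, fs, nfft=4096, eps=1e-12):
    """Source spectral ratio of two co-located events recorded at the same
    station: path and site terms cancel in the ratio of amplitude spectra.
    w1, w2: time-windowed waveforms; returns (frequencies, ratio)."""
    S1 = np.abs(np.fft.rfft(w1 * np.hanning(len(w1)), nfft))
    S2 = np.abs(np.fft.rfft(w2 * np.hanning(len(w2)), nfft))
    f = np.fft.rfftfreq(nfft, 1 / fs)
    k = np.ones(5) / 5                                 # light spectral smoothing
    S1s = np.convolve(S1, k, "same")
    S2s = np.convolve(S2, k, "same")
    return f, S1s / (S2s + eps)
```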
Signal recognition and parameter estimation of BPSK-LFM combined modulation
NASA Astrophysics Data System (ADS)
Long, Chao; Zhang, Lin; Liu, Yu
2015-07-01
Intra-pulse analysis plays an important role in electronic warfare. Intra-pulse feature extraction focuses on primary parameters such as instantaneous frequency, modulation, and symbol rate. In this paper, automatic modulation recognition and feature extraction for combined BPSK-LFM modulation signals based on a decision-theoretic approach is studied. The simulation results show good recognition performance and high estimation precision, and the system is easy to realize.
NASA Astrophysics Data System (ADS)
Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.
2018-01-01
Aims: We aim to perform a theoretical evaluation of the impact of the uncertainty in mass loss on asteroseismic grid-based estimates of masses, radii, and ages of stars in the red giant branch (RGB) phase. Methods: We adopted the SCEPtER pipeline on a grid spanning the mass range [0.8; 1.8] M⊙. As observational constraints, we adopted the star effective temperatures, the metallicity [Fe/H], the average large frequency spacing Δν, and the frequency of maximum oscillation power νmax. The mass loss was modelled following a Reimers parametrization with the two different efficiencies η = 0.4 and η = 0.8. Results: In the RGB phase, the average random relative error (owing only to observational uncertainty) on mass and age estimates is about 8% and 30%, respectively. The bias in mass and age estimates caused by the adoption of a wrong mass loss parameter in the recovery is minor for the vast majority of the RGB evolution. The biases get larger only after the RGB bump. In the last 2.5% of the RGB lifetime the error on the mass determination reaches 6.5%, becoming larger than the random error component in this evolutionary phase. The error on the age estimate amounts to 9%, that is, equal to the random error uncertainty. These results are independent of the stellar metallicity [Fe/H] in the explored range. Conclusions: Asteroseismic-based estimates of stellar mass, radius, and age in the RGB phase can be considered mass loss independent within the range (η ∈ [0.0,0.8]) as long as the target is in an evolutionary phase preceding the RGB bump.
NASA Astrophysics Data System (ADS)
Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as Chernoff, Bhattacharyya, and J-divergence. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general, what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was made more prominent in recent research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data; to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models, which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, often dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
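A simple Monte Carlo estimator of the triangle divergence, along the lines suggested in the paper, samples from the equal-weight mixture of the two densities, since D(p, q) = ∫ (p - q)²/(p + q) dx = E_m[2 (p - q)²/(p + q)²] with m = (p + q)/2. The Gaussian example densities below are purely illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def triangle_divergence_mc(p, q, sampler_p, sampler_q, n=100_000, seed=0):
    """Monte Carlo estimate of D(p, q) = integral of (p - q)^2 / (p + q),
    drawing half the samples from p and half from q (i.e., from the
    equal-weight mixture m) and averaging 2 (p - q)^2 / (p + q)^2."""
    rng = np.random.default_rng(seed)
    xp = sampler_p(n // 2, rng)                 # draws from p
    xq = sampler_q(n - n // 2, rng)             # draws from q
    x = np.vstack([xp, xq])                     # pooled mixture sample
    fp, fq = p(x), q(x)
    return np.mean(2 * (fp - fq) ** 2 / (fp + fq) ** 2)

# Illustrative example: two unit-covariance Gaussians shifted apart.
p = lambda x: mvn.pdf(x, mean=[0, 0])
q = lambda x: mvn.pdf(x, mean=[1.5, 0])
sp = lambda n, rng: rng.multivariate_normal([0, 0], np.eye(2), n)
sq = lambda n, rng: rng.multivariate_normal([1.5, 0], np.eye(2), n)
print(triangle_divergence_mc(p, q, sp, sq))
```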
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling, and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimation. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
NASA Astrophysics Data System (ADS)
Nammi, Srinagalakshmi; Vasa, Nilesh J.; Gurusamy, Balaganesan; Mathur, Anil C.
2017-09-01
The plasma shielding phenomenon and its influence on micromachining are studied experimentally and theoretically for laser wavelengths of 355 nm, 532 nm and 1064 nm. A time-resolved pump-probe technique is proposed and demonstrated by splitting a single nanosecond Nd3+:YAG laser beam into an ablation (pump) beam and a probe beam to understand the influence of plasma shielding on laser ablation of copper (Cu) clad on polyimide thin films. The proposed nanosecond pump-probe technique allows simultaneous measurement of the absorption characteristics of the plasma produced during Cu film ablation by the pump laser. Experimental measurements of the probe intensity distinctly show that absorption by the ablated plume increases with increasing pump intensity, as a result of plasma shielding. Theoretical estimation of the intensity of the transmitted pump beam based on thermo-temporal modeling is in qualitative agreement with the pump-probe experimental measurements. The theoretical estimate of the depth attained by a single pulse of high pump intensity on a Cu thin film is limited by the plasma shielding of the incident laser beam, similar to that observed experimentally. Further, the depth of the micro-channels produced shows a similar trend for all three wavelengths; however, the channel depth achieved is smaller at the 1064 nm wavelength.
Comparing geophysical measurements to theoretical estimates for soil mixtures at low pressures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wildenschild, D; Berge, P A; Berryman, K G
1999-01-15
The authors obtained good estimates of measured velocities of sand-peat samples at low pressures by using a theoretical method, the self-consistent theory of Berryman (1980), with sand and porous peat representing the microstructure of the mixture. They were unable to obtain useful estimates with several other theoretical approaches, because the properties of the quartz, air and peat components of the samples vary over several orders of magnitude. Methods that are useful for consolidated rock cannot be applied directly to unconsolidated materials. Instead, careful consideration of microstructure is necessary to adapt the methods successfully. Future work includes comparison of the measured velocity values to additional theoretical estimates, investigation of Vp/Vs ratios and wave amplitudes, as well as modeling of dry and saturated sand-clay mixtures (e.g., Bonner et al., 1997, 1998). The results suggest that field data can be interpreted by comparing laboratory measurements of soil velocities to theoretical estimates of velocities in order to establish a systematic method for predicting velocities for a full range of sand-organic material mixtures at various pressures. Once the theoretical relationship is obtained, it can be used to estimate the soil composition at various depths from field measurements of seismic velocities. Additional refinement of the method for relating velocities to soil characteristics is useful for developing inversion algorithms.
Parmar, Jyotsana J; Das, Dibyendu; Padinhateeri, Ranjith
2016-02-29
It is being increasingly realized that nucleosome organization on DNA crucially regulates DNA-protein interactions and the resulting gene expression. While the spatial character of the nucleosome positioning on DNA has been experimentally and theoretically studied extensively, the temporal character is poorly understood. Accounting for ATPase activity and DNA-sequence effects on nucleosome kinetics, we develop a theoretical method to estimate the time of continuous exposure of binding sites of non-histone proteins (e.g. transcription factors and TATA binding proteins) along any genome. Applying the method to Saccharomyces cerevisiae, we show that the exposure timescales are determined by cooperative dynamics of multiple nucleosomes, and their behavior is often different from expectations based on static nucleosome occupancy. Examining exposure times in the promoters of GAL1 and PHO5, we show that our theoretical predictions are consistent with known experiments. We apply our method genome-wide and discover huge gene-to-gene variability of mean exposure times of TATA boxes and patches adjacent to TSS (+1 nucleosome region); the resulting timescale distributions have non-exponential tails. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Hydropower assessment of Bolivia—A multisource satellite data and hydrologic modeling approach
Velpuri, Naga Manohar; Pervez, Shahriar; Cushing, W. Matthew
2016-11-28
This study produced a geospatial database for use in a decision support system by the Bolivian authorities to investigate further development and investment potentials in sustainable hydropower in Bolivia. The study assessed the theoretical hydropower of all 1-kilometer (km) stream segments in the country using multisource satellite data and a hydrologic modeling approach. With the assessment covering the 2 million square kilometer (km2) region influencing Bolivia’s drainage network, the potential hydropower figures are based on theoretical yield, assuming that the systems generating the power are 100 percent efficient. There are several factors to consider when determining the real-world or technical power potential of a hydropower system, and these factors can vary depending on local conditions. Since this assessment covers a large area, it was necessary to reduce these variables to the two that can be modeled consistently throughout the region: streamflow (discharge) and elevation drop (head). First, the Shuttle Radar Topography Mission high-resolution 30-meter (m) digital elevation model was used to identify stream segments with greater than 10 km2 of upstream drainage. We applied several preconditioning processes to the 30-m digital elevation model to reduce errors and improve the accuracy of stream delineation and head height estimation. A total of 316,500 1-km stream segments were identified and used in this study to assess the total theoretical hydropower potential of Bolivia. Precipitation observations from a total of 463 stations obtained from the Bolivian Servicio Nacional de Meteorología e Hidrología (Bolivian National Meteorology and Hydrology Service) and the Brazilian Agência Nacional de Águas (Brazilian National Water Agency) were used to validate six different gridded precipitation estimates for Bolivia obtained from various sources. Validation results indicated that gridded precipitation estimates from the Tropical Rainfall Measuring Mission (TRMM) reanalysis product (3B43) had the highest accuracies. The coarse-resolution (25-km) TRMM data were disaggregated to 5-km pixels using climatology information obtained from the Climate Hazards Group Infrared Precipitation with Stations dataset. About a 17-percent bias was observed in the disaggregated TRMM estimates, which was corrected using the station observations. The bias-corrected, disaggregated TRMM precipitation estimate was used to compute stream discharge using a regionalization approach, in which the required homogeneous regions for Bolivia were derived from precipitation patterns and topographic characteristics using k-means clustering. Using the discharge and head height estimates for each 1-km stream segment, we computed the hydropower potential of 316,490 stream segments within Bolivia or sharing its borders. The total theoretical hydropower potential (TTHP) of these stream segments was found to be 212 gigawatts (GW). Of this total, 77.4 GW was within protected areas where hydropower projects cannot be developed; hence, the remaining total theoretical hydropower in Bolivia (outside the protected areas) was estimated as 135 GW. Nearly 1,000 1-km stream segments, however, were within the boundaries of existing hydropower projects. The TTHP of these stream segments was nearly 1.4 GW, so the residual TTHP of the streams in Bolivia was estimated as 133 GW.
Care should be exercised in understanding and interpreting the TTHP identified in this study, because not all of the stream segments identified and assessed here can be harnessed to their full capacity; furthermore, factors such as required environmental flows, efficiency, economics, and feasibility need to be considered to better identify a more real-world hydropower potential. If environmental flow requirements of 20–40 percent are considered, the total theoretical power available reduces by 60–80 percent. In addition, a 0.72 efficiency factor further reduces the estimate by another 28 percent. This study provides the base theoretical hydropower potential for Bolivia; the next step is to identify optimal hydropower plant locations and factor in these principles to appraise a real-world power potential in Bolivia.
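A short worked example of the reductions quoted above (the segment discharge and head are hypothetical; the 70 percent environmental-flow loss is simply the midpoint of the 60–80 percent range reported):

    # Illustrative arithmetic only; segment values are hypothetical.
    RHO_G = 9800.0          # specific weight of water, N/m^3

    def theoretical_power_watts(discharge_m3s, head_m):
        # P = gamma * Q * dH, the standard hydraulic-power relation.
        return RHO_G * discharge_m3s * head_m

    p_theoretical = theoretical_power_watts(discharge_m3s=150.0, head_m=12.0)

    # Environmental flows: reserving 20-40 percent of flow is reported above
    # to remove 60-80 percent of the theoretical power; take the midpoint.
    p_after_eflow = p_theoretical * (1.0 - 0.70)

    # Plant efficiency factor of 0.72 removes a further 28 percent.
    p_technical = p_after_eflow * 0.72

    print(f"theoretical: {p_theoretical / 1e6:.2f} MW")
    print(f"after environmental flows: {p_after_eflow / 1e6:.2f} MW")
    print(f"technical (0.72 efficiency): {p_technical / 1e6:.2f} MW")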
A feature-based inference model of numerical estimation: the split-seed effect.
Murray, Kyle B; Brown, Norman R
2009-07-01
Prior research has identified two modes of quantitative estimation: numerical retrieval and ordinal conversion. In this paper we introduce a third mode, which operates by a feature-based inference process. In contrast to prior research, the results of three experiments demonstrate that people estimate automobile prices by combining metric information associated with two critical features: product class and brand status. In addition, Experiments 2 and 3 demonstrated that when participants are seeded with the actual current base price of one of the to-be-estimated vehicles, they respond by revising the general metric and splitting the information carried by the seed between the two critical features. As a result, the degree of post-seeding revision is directly related to the number of these features that the seed and the transfer items have in common. The paper concludes with a general discussion of the practical and theoretical implications of our findings.
Space Tug Docking Study. Volume 5: Cost Analysis
NASA Technical Reports Server (NTRS)
1976-01-01
The cost methodology, summary cost data, resulting cost estimates by Work Breakdown Structure (WBS), technical characteristics data, program funding schedules, and the WBS for the costing are discussed. Cost estimates for two tasks of the study are reported. The first developed cost estimates for design, development, test and evaluation (DDT&E) and theoretical first unit (TFU) at the component level (Level 7) for all items reported in the data base. Task B developed total subsystem DDT&E costs and funding schedules for the three candidate Rendezvous and Docking Systems: manual, autonomous, and hybrid.
Layover and shadow detection based on distributed spaceborne single-baseline InSAR
NASA Astrophysics Data System (ADS)
Huanxin, Zou; Bin, Cai; Changzhou, Fan; Yun, Ren
2014-03-01
Distributed spaceborne single-baseline InSAR is an effective technique for obtaining high-quality Digital Elevation Models. Layover and shadow are ubiquitous phenomena in SAR images because of the geometric relations of SAR imaging. In the signal processing of single-baseline InSAR, the phase singularity of layover and shadow regions makes the phase difficult to filter and unwrap. This paper analyzes the geometric and signal models of layover and shadow fields. Based on the interferometric signal autocorrelation matrix, we propose a signal-number estimation method based on information-theoretic criteria to distinguish layover and shadow from normal InSAR fields. The effectiveness and practicability of the proposed method are validated by simulation experiments and theoretical analysis.
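The abstract does not spell out which information-theoretic criterion is used, so the sketch below shows the classical Wax-Kailath MDL estimator of the number of signals from the eigenvalues of a sample correlation matrix, with a synthetic two-source check; treat it as a generic stand-in rather than the paper's exact method.

    import numpy as np

    def mdl_source_count(R, n_snapshots):
        """Wax-Kailath MDL estimate of the number of signals from the
        eigenvalues of a sample (auto)correlation matrix R."""
        lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending eigenvalues
        p = lam.size
        mdl = np.empty(p)
        for k in range(p):
            tail = lam[k:]
            geo = np.exp(np.mean(np.log(tail)))      # geometric mean
            ari = np.mean(tail)                      # arithmetic mean
            mdl[k] = (-n_snapshots * (p - k) * np.log(geo / ari)
                      + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
        return int(np.argmin(mdl))

    # Toy check: two sources in white noise across p = 6 channels.
    rng = np.random.default_rng(1)
    N, p = 500, 6
    A = rng.standard_normal((p, 2))
    S = rng.standard_normal((2, N))
    X = A @ S + 0.3 * rng.standard_normal((p, N))
    R = X @ X.T / N
    print(mdl_source_count(R, N))   # expected: 2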
Maes, W H; Steppe, K
2012-08-01
As evaporation of water is an energy-demanding process, increasing evapotranspiration rates decrease the surface temperature (Ts) of leaves and plants. Based on this principle, ground-based thermal remote sensing has become one of the most important methods for estimating evapotranspiration and drought stress and for irrigation. This paper reviews its application in agriculture. The review consists of four parts. First, the basics of thermal remote sensing are briefly reviewed. Second, the theoretical relation between Ts and the sensible and latent heat flux is elaborated. A modelling approach was used to evaluate the effect of weather conditions and leaf or vegetation properties on leaf and canopy temperature. Ts increases with increasing air temperature and incoming radiation and with decreasing wind speed and relative humidity. At the leaf level, the leaf angle and leaf dimension have a large influence on Ts; at the vegetation level, Ts is strongly impacted by the roughness length; hence, by canopy height and structure. In the third part, an overview of the different ground-based thermal remote sensing techniques and approaches used to estimate drought stress or evapotranspiration in agriculture is provided. Among other methods, stress time, stress degree day, crop water stress index (CWSI), and stomatal conductance index are discussed. The theoretical models are used to evaluate the performance and sensitivity of the most important methods, corroborating the literature data. In the fourth and final part, a critical view on the future and remaining challenges of ground-based thermal remote sensing is presented.
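Of the indices listed, the CWSI has the simplest closed form: the measured canopy temperature is normalized between a well-watered ("wet") baseline and a non-transpiring ("dry") baseline. A minimal sketch with hypothetical mid-day temperatures:

    def cwsi(ts_canopy, t_wet, t_dry):
        """Crop water stress index: 0 = unstressed (wet baseline),
        1 = fully stressed (dry, non-transpiring baseline)."""
        return (ts_canopy - t_wet) / (t_dry - t_wet)

    # Hypothetical mid-day readings in degrees C:
    print(cwsi(ts_canopy=31.0, t_wet=27.5, t_dry=36.0))   # ~0.41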
Strength Property Estimation for Dry, Cohesionless Soils Using the Military Cone Penetrometer
1992-05-01
by Meier and Baladi (1988). Their methodology is based on a theoretical formulation of the CI problem using cavity expansion theory to relate cone... Baladi (1981), incorporates three mechanical properties (cohesion, friction angle, and shear modulus) and the total unit weight. Obviously, these four...unknown soil properties cannot be back-calculated directly from a single CI measurement. To ameliorate this problem, Meier and Baladi estimate the total
Theoretical model of ruminant adipose tissue metabolism in relation to the whole animal.
Baldwin, R L; Yang, Y T; Crist, K; Grichting, G
1976-09-01
Based on theoretical considerations and experimental data, estimates of the contributions of adipose tissue to energy expenditures in a lactating cow and a growing steer were developed. The estimates indicate that adipose energy expenditures range between 5 and 10% of total animal heat production, depending on productive function and diet. These energy expenditures can be partitioned among maintenance (3%), lipogenesis (1-5%) and lipolysis and triglyceride resynthesis (less than 1.0%). Specific sites at which acute and chronic effectors can act to produce changes in adipose function, and changes in adipose function produced by diet and during pregnancy, lactation and aging, were discussed, with emphasis placed on the need for additional, definitive studies of specific interactions among pregnancy, diet, age, lactation and growth in producing ruminants.
Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.
Harikumar, G; Bresler, Y
1999-01-01
We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
Williams, Christopher; Dugger, Bruce D.; Brasher, Michael G.; Coluccy, John M.; Cramer, Dane M.; Eadie, John M.; Gray, Matthew J.; Hagy, Heath M.; Livolsi, Mark; McWilliams, Scott R.; Petrie, Matthew; Soulliere, Gregory J.; Tirpak, John M.; Webb, Elisabeth B.
2014-01-01
Population-based habitat conservation planning for migrating and wintering waterfowl in North America is carried out by habitat Joint Venture (JV) initiatives and is based on the premise that food can limit demography (i.e. food limitation hypothesis). Consequently, planners use bioenergetic models to estimate food (energy) availability and population-level energy demands at appropriate spatial and temporal scales, and translate these values into regional habitat objectives. While simple in principle, there are both empirical and theoretical challenges associated with calculating energy supply and demand including: 1) estimating food availability, 2) estimating the energy content of specific foods, 3) extrapolating site-specific estimates of food availability to landscapes for focal species, 4) applicability of estimates from a single species to other species, 5) estimating resting metabolic rate, 6) estimating cost of daily behaviours, and 7) estimating costs of thermoregulation or tissue synthesis. Most models being used are daily ration models (DRMs) whose set of simplifying assumptions are well established and whose use is widely accepted and feasible given the empirical data available to populate such models. However, DRMs do not link habitat objectives to metrics of ultimate ecological importance such as individual body condition or survival, and largely only consider food-producing habitats. Agent-based models (ABMs) provide a possible alternative for creating more biologically realistic models under some conditions; however, ABMs require different types of empirical inputs, many of which have yet to be estimated for key North American waterfowl. Decisions about how JVs can best proceed with habitat conservation would benefit from the use of sensitivity analyses that could identify the empirical and theoretical uncertainties that have the greatest influence on efforts to estimate habitat carrying capacity. Development of ABMs at restricted, yet biologically relevant spatial scales, followed by comparisons of their outputs to those generated from more simplistic, deterministic models can provide a means of assessing degrees of dissimilarity in how alternative models describe desired landscape conditions for migrating and wintering waterfowl.
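Since the DRMs discussed above are, at bottom, energy bookkeeping, a minimal sketch makes the supply/demand translation explicit: food biomass is converted to metabolizable energy and divided by a per-bird daily requirement to yield duck-use-days, the currency planners translate into habitat objectives. All numbers below are hypothetical.

    # A minimal daily-ration-model (DRM) sketch; every input is hypothetical.
    hectares = 1200.0                 # food-producing habitat area
    seed_kg_per_ha = 500.0            # food biomass available
    tme_kcal_per_g = 2.5              # true metabolizable energy of the food
    der_kcal_per_day = 292.0          # daily energy requirement of one duck

    total_energy_kcal = hectares * seed_kg_per_ha * 1000.0 * tme_kcal_per_g
    duck_use_days = total_energy_kcal / der_kcal_per_day
    print(f"{duck_use_days:,.0f} duck-use-days supported")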
More data, less information? Potential for nonmonotonic information growth using GEE.
Shoben, Abigail B; Rudser, Kyle D; Emerson, Scott S
2017-01-01
Statistical intuition suggests that increasing the total number of observations available for analysis should increase the precision with which parameters can be estimated. Such monotonic growth of statistical information is of particular importance when data are analyzed sequentially, such as in confirmatory clinical trials. However, monotonic information growth is not always guaranteed, even when using a valid, but inefficient estimator. In this article, we demonstrate the theoretical possibility of nonmonotonic information growth when using generalized estimating equations (GEE) to estimate a slope and provide intuition for why this possibility exists. We use theoretical and simulation-based results to characterize situations that may result in nonmonotonic information growth. Nonmonotonic information growth is most likely to occur when (1) accrual is fast relative to follow-up on each individual, (2) correlation among measurements from the same individual is high, and (3) measurements are becoming more variable further from randomization. In situations that may lead to nonmonotonic information growth, study designers should plan interim analyses to avoid situations most likely to result in nonmonotonic information growth.
USDA-ARS's Scientific Manuscript database
The ability of remote sensing-based surface energy balance (SEB) models to track water stress in rain-fed switchgrass has not been explored yet. In this paper, the theoretical framework of crop water stress index (CWSI) was utilized to estimate CWSI in rain-fed switchgrass (Panicum virgatum L.) usin...
Registration of surface structures using airborne focused ultrasound.
Sundström, N; Börjesson, P O; Holmer, N G; Olsson, L; Persson, H W
1991-01-01
A low-cost measuring system, based on a personal computer combined with standard equipment for complex measurements and signal processing, has been assembled. Such a system increases the possibilities for small hospitals and clinics to finance advanced measuring equipment. A description of equipment developed for airborne ultrasound together with a personal computer-based system for fast data acquisition and processing is given. Two air-adapted ultrasound transducers with high lateral resolution have been developed. Furthermore, a few results for fast and accurate estimation of signal arrival time are presented. The theoretical estimation models developed are applied to skin surface profile registrations.
NASA Astrophysics Data System (ADS)
Carcione, José M.; Gei, Davide
2004-05-01
We estimate the concentration of gas hydrate at the Mallik 2L-38 research site using P- and S-wave velocities obtained from well logging and vertical seismic profiles (VSP). The theoretical velocities are obtained from a generalization of Gassmann's modulus to three phases (rock frame, gas hydrate and fluid). The dry-rock moduli are estimated from the log profiles, in sections where the rock is assumed to be fully saturated with water. We obtain hydrate concentrations up to 75%, average values of 37% and 21% from the VSP P- and S-wave velocities, respectively, and 60% and 57% from the sonic-log P- and S-wave velocities, respectively. The above averages are similar to estimations obtained from hydrate dissociation modeling and Archie methods. The estimations based on the P-wave velocities are more reliable than those based on the S-wave velocities.
A New Monte Carlo Method for Estimating Marginal Likelihoods.
Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O
2018-06-01
Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
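For orientation, the harmonic mean estimator that the proposed class generalizes takes only a few lines. The sketch below uses a conjugate normal toy model, an assumption made so the exact marginal likelihood is available for checking; the estimator's notorious instability is one of the weaknesses the partition weighted kernel construction addresses.

    import numpy as np
    from scipy.stats import norm, multivariate_normal

    rng = np.random.default_rng(2)

    # Conjugate toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 1).
    y = rng.normal(0.8, 1.0, size=20)
    n, ybar = y.size, y.mean()

    # Posterior is N(n*ybar/(n+1), 1/(n+1)); draw posterior samples directly.
    theta = rng.normal(n * ybar / (n + 1), np.sqrt(1.0 / (n + 1)), size=50_000)

    # Harmonic mean estimator: 1/m(y) ~ average of 1/L(theta) over the draws.
    loglik = norm.logpdf(y[:, None], loc=theta[None, :]).sum(axis=0)
    lmin = loglik.min()
    log_m_hm = lmin - np.log(np.mean(np.exp(-(loglik - lmin))))   # stabilized

    # Exact log m(y) for this model: jointly y ~ N(0, I + 11^T).
    log_m_exact = multivariate_normal.logpdf(y, mean=np.zeros(n),
                                             cov=np.eye(n) + np.ones((n, n)))
    print(log_m_hm, log_m_exact)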
A Pseudorange Measurement Scheme Based on Snapshot for Base Station Positioning Receivers.
Mo, Jun; Deng, Zhongliang; Jia, Buyun; Bian, Xinmei
2017-12-01
Digital multimedia broadcasting signals are promising candidates for wireless positioning. This paper mainly studies a multimedia broadcasting technology named China mobile multimedia broadcasting (CMMB) in the context of positioning. Theoretical and practical analysis of the CMMB signal suggests that the existing CMMB signal does not have meter-level positioning capability. The CMMB system has therefore been modified to achieve meter-level positioning capability by multiplexing the CMMB signal and pseudo codes in the same frequency band. The time difference of arrival (TDOA) estimation method is used in base station positioning receivers. Due to the influence of a complex fading channel and the limited bandwidth of receivers, the regular tracking method based on pseudo-code ranging struggles to provide continuous and accurate TDOA estimates. A pseudorange measurement scheme based on snapshot is proposed to solve this problem. The algorithm extracts the TDOA estimate from stored signal fragments and utilizes the Taylor expansion of the autocorrelation function to improve the TDOA estimation accuracy. Monte Carlo simulations and real data tests show that the proposed algorithm can significantly reduce the TDOA estimation error for base station positioning receivers, with which the modified CMMB system achieves meter-level positioning accuracy.
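The peak-refinement idea, a local Taylor (parabolic) expansion of the correlation function around its sample-level maximum, can be sketched generically; this is only the sub-sample refinement step under simple assumptions, not the paper's snapshot pseudorange scheme.

    import numpy as np

    def tdoa_parabolic(x, y, fs):
        """Cross-correlation TDOA with a three-point parabolic (second-order
        Taylor) refinement of the correlation peak for sub-sample accuracy."""
        n = len(x) + len(y) - 1
        X = np.fft.rfft(x, n)
        Y = np.fft.rfft(y, n)
        r = np.fft.irfft(X * np.conj(Y), n)
        r = np.concatenate((r[-(len(y) - 1):], r[:len(x)]))   # lags -(Ny-1)..Nx-1
        k = np.argmax(r)
        if 0 < k < n - 1:   # parabola through the peak and its two neighbors
            denom = r[k - 1] - 2.0 * r[k] + r[k + 1]
            delta = 0.5 * (r[k - 1] - r[k + 1]) / denom
        else:
            delta = 0.0
        lag = (k - (len(y) - 1)) + delta
        return lag / fs

    # Toy check: the first signal is the second delayed by 7 samples plus noise.
    rng = np.random.default_rng(3)
    x = rng.standard_normal(4096)
    y = np.roll(x, 7) + 0.1 * rng.standard_normal(4096)
    print(tdoa_parabolic(y, x, fs=1.0))   # ~7 samples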
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique are presented for a simple two-observer, measurement-error-only problem.
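To see the gap the abstract targets, the sketch below fits a linear model by weighted least squares under assumed unit weights, then forms a residual-scaled covariance. This residual rescaling is a standard textbook device used here purely as a stand-in; the paper's reinterpretation is not reproduced in the abstract.

    import numpy as np

    rng = np.random.default_rng(4)

    # Linear measurement model y = H x + e with mis-modeled noise: the
    # estimator assumes unit-variance errors, but the true sigma is 1.8.
    H = rng.standard_normal((40, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    y = H @ x_true + 1.8 * rng.standard_normal(40)

    W = np.eye(40)                       # assumed weights
    A = np.linalg.inv(H.T @ W @ H)
    x_hat = A @ H.T @ W @ y

    P_theory = A                         # theoretical covariance, assumed weights

    # Residual-based empirical covariance (a generic device, not the paper's
    # exact construction): scale by the observed residual variance.
    r = y - H @ x_hat
    s2 = (r @ W @ r) / (len(y) - len(x_hat))
    P_empirical = s2 * A

    print(np.sqrt(np.diag(P_theory)))      # optimistic
    print(np.sqrt(np.diag(P_empirical)))   # inflated by the unmodeled errors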
Estimating wildfire behavior and effects
Frank A. Albini
1976-01-01
This paper presents a brief survey of the research literature on wildfire behavior and effects and assembles formulae and graphical computation aids based on selected theoretical and empirical models. The uses of mathematical fire behavior models are discussed, and the general capabilities and limitations of currently available models are outlined.
Tsallis q-triplet, intermittent turbulence and Portevin-Le Chatelier effect
NASA Astrophysics Data System (ADS)
Iliopoulos, A. C.; Aifantis, E. C.
2018-05-01
In this paper, we extend a previous study concerning the Portevin-Le Chatelier (PLC) effect and Tsallis statistics (Iliopoulos et al., 2015). In particular, we estimate Tsallis' q-triplet, namely {qstat, qsens, qrel}, for two sets of stress serration time series concerning the deformation of Cu-15%Al alloy, corresponding to different deformation temperatures and thus types (A and B) of PLC bands. The results of the stress serration analysis reveal that the Tsallis q-triplet attains values different from unity ({qstat, qsens, qrel} ≠ {1,1,1}). In particular, PLC type A bands' serrations were found to follow Tsallis super-q-Gaussian, non-extensive, sub-additive, multifractal statistics, indicating that the underlying dynamics are at the edge of chaos, characterized by global long-range correlations and power-law scaling. For PLC type B bands' serrations, the results revealed a Tsallis sub-q-Gaussian, non-extensive, super-additive, multifractal statistical profile. In addition, our results reveal significant differences in statistical and dynamical features, indicating important variations of the stress field dynamics in terms of rate of entropy production, relaxation dynamics and non-equilibrium meta-stable stationary states. We also estimate parameters commonly used for characterizing fully developed turbulence, such as structure functions and the flatness coefficient (F), in order to provide further information about the underlying dynamics of jerky flow. Finally, we use two multifractal models developed to describe turbulence, namely the Arimitsu and Arimitsu (A&A) [2000, 2001] theoretical model, which is based on Tsallis statistics, and the p-model, to estimate theoretical multifractal spectra f(α). Furthermore, we estimate the flatness coefficient (F) using a theoretical formula based on Tsallis statistics. The theoretical results are compared with the experimental ones, showing a remarkable agreement between modeling and experiment. Overall, the results of this study verify, as well as extend, previous studies which stated that the dynamics underlying type B and type A PLC bands are connected with distinct dynamical behavior, namely chaotic behavior for the former and self-organized critical (SOC) behavior for the latter, while they shed new light on the turbulent character of the PLC jerky flow.
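Among the turbulence diagnostics mentioned, the flatness coefficient is the simplest to compute: the fourth-order structure function of the increments divided by the squared second-order one, with F ≈ 3 for Gaussian statistics and F > 3 signalling intermittency. A minimal sketch, using a Gaussian random walk as the null case:

    import numpy as np

    def flatness(series, lag):
        """Flatness F = S4 / S2^2 of increments at a given lag; F > 3
        signals heavier-than-Gaussian tails (intermittency)."""
        d = series[lag:] - series[:-lag]
        return np.mean(d ** 4) / np.mean(d ** 2) ** 2

    rng = np.random.default_rng(5)
    gauss = np.cumsum(rng.standard_normal(100_000))
    print([round(flatness(gauss, L), 2) for L in (1, 4, 16)])   # ~3.0 each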
Electron Capture in Slow Collision of He^2++H : Revisited
NASA Astrophysics Data System (ADS)
Krstic, Ps
2003-05-01
Very early experimental data (Fite et al., Proc. R. Soc. A 268, 527 (1962)) for He^2++H, recent ORNL measurements for Ne^2+ + H, and our theoretical estimates suggest that the electron capture cross sections for these strongly exoergic collision systems drop more slowly toward low collision energies than expected from previous theories. We perform a theoretical study to establish and understand the true nature of this controversy. The calculations are based on the Hidden Crossings MOCC method, augmented with rotational and turning-point effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yueqi; Lava, Pascal; Reu, Phillip
This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
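A sketch of the stated dependence, using a commonly cited closed form for subset-based DIC in which the displacement standard deviation grows with image noise and shrinks with the summed squared intensity gradient over the subset; the subset size and gradient scale below are hypothetical, and the interpolation-dependent factor is omitted.

    import numpy as np

    def dic_displacement_std(subset_grad_x, noise_std):
        """Random error of subset DIC displacement along x, using the
        commonly cited form std(u) ~ sqrt(2) * sigma / sqrt(sum of Ix^2)
        (an illustration of the dependence, not the paper's full result)."""
        return np.sqrt(2.0) * noise_std / np.sqrt(np.sum(subset_grad_x ** 2))

    # Hypothetical 21x21 subset with gradients of ~30 grey levels per pixel:
    rng = np.random.default_rng(6)
    Ix = 30.0 * rng.standard_normal((21, 21))
    print(dic_displacement_std(Ix, noise_std=2.0))   # pixels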
NASA Astrophysics Data System (ADS)
Bialas, A.; Peschanski, R.; Royon, Ch.
1998-06-01
It is argued that the QCD dipole picture allows us to build a unified theoretical description, based on Balitskii-Fadin-Kuraev-Lipatov dynamics, of the total and diffractive nucleon structure functions. This description is in qualitative agreement with the present collection of data obtained by the H1 Collaboration. More precise theoretical estimates, in particular the determination of the normalizations and proton transverse momentum behavior of the diffractive components, are shown to be required in order to reach definite conclusions.
GLONASS orbit/clock combination in VNIIFTRI
NASA Astrophysics Data System (ADS)
Bezmenov, I.; Pasynok, S.
2015-08-01
An algorithm and a program for GLONASS satellite orbit/clock combination based on daily precise orbits submitted by several Analytic Centers were developed. Theoretical estimates for the RMS of the combined orbit positions were derived. It was shown that, provided the RMS values of the satellite orbits supplied by the Analytic Centers over a long time interval are commensurable, the RMS of the combined orbit positions is no greater than the RMS of the satellite positions estimated by any of the Analytic Centers.
Empirical Bayes methods for smoothing data and for simultaneous estimation of many parameters.
Yanagimoto, T; Kashiwagi, N
1990-01-01
A recent successful development is found in a series of innovative, new statistical methods for smoothing data that are based on the empirical Bayes method. This paper emphasizes their practical usefulness in medical sciences and their theoretically close relationship with the problem of simultaneous estimation of parameters, depending on strata. The paper also presents two examples of analyzing epidemiological data obtained in Japan using the smoothing methods to illustrate their favorable performance. PMID:2148512
Model-Based IN SITU Parameter Estimation of Ultrasonic Guided Waves in AN Isotropic Plate
NASA Astrophysics Data System (ADS)
Hall, James S.; Michaels, Jennifer E.
2010-02-01
Most ultrasonic systems employing guided waves for flaw detection require information such as dispersion curves, transducer locations, and expected propagation loss. Degraded system performance may result if assumed parameter values do not accurately reflect the actual environment. By characterizing the propagating environment in situ at the time of test, potentially erroneous a priori estimates are avoided and performance of ultrasonic guided wave systems can be improved. A four-part model-based algorithm is described in the context of previous work that estimates model parameters whereby an assumed propagation model is used to describe the received signals. This approach builds upon previous work by demonstrating the ability to estimate parameters for the case of single mode propagation. Performance is demonstrated on signals obtained from theoretical dispersion curves, finite element modeling, and experimental data.
USDA-ARS's Scientific Manuscript database
Random mating (i.e., panmixis) is a fundamental assumption in quantitative genetics. In outcrossing bee-pollinated perennial forage legume polycrosses, mating is assumed by default to follow theoretical random mating. This assumption informs breeders of expected inbreeding estimates based on polycro...
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts) are established through their respective objective functions and the higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights into designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent relative error in estimating the trace and a lower reduced chi2 value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
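For reference, the LLS member of the family compared above can be written compactly: log-signals are regressed on a design built from the b-values and gradient directions to recover ln S0 and the six unique tensor elements, a fit that typically seeds the WLS/NLS refinements. A self-contained sketch with a noise-free synthetic check:

    import numpy as np

    def lls_tensor_fit(signals, bvals, bvecs):
        """Linear least squares (LLS) diffusion tensor fit:
        ln S = ln S0 - b g^T D g, solved for [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]."""
        g = np.asarray(bvecs)
        b = np.asarray(bvals)
        X = np.column_stack([
            np.ones_like(b),
            -b * g[:, 0] ** 2, -b * g[:, 1] ** 2, -b * g[:, 2] ** 2,
            -2 * b * g[:, 0] * g[:, 1],
            -2 * b * g[:, 0] * g[:, 2],
            -2 * b * g[:, 1] * g[:, 2],
        ])
        beta, *_ = np.linalg.lstsq(X, np.log(signals), rcond=None)
        lnS0, dxx, dyy, dzz, dxy, dxz, dyz = beta
        D = np.array([[dxx, dxy, dxz], [dxy, dyy, dyz], [dxz, dyz, dzz]])
        return np.exp(lnS0), D

    # Synthetic check with a hypothetical prolate tensor:
    rng = np.random.default_rng(7)
    g = rng.standard_normal((30, 3))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    b = np.full(30, 1000.0)                        # s/mm^2
    D_true = np.diag([1.7e-3, 0.4e-3, 0.4e-3])     # mm^2/s
    S = np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))
    S0, D_est = lls_tensor_fit(S, b, g)
    print(np.trace(D_est))   # ~2.5e-3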
Asymptotics of nonparametric L-1 regression models with dependent data
Zhao, Zhibiao; Wei, Ying; Lin, Dennis K.J.
2013-01-01
We investigate asymptotic properties of least-absolute-deviation or median quantile estimates of the location and scale functions in nonparametric regression models with dependent data from multiple subjects. Under a general dependence structure that allows for longitudinal data and some spatially correlated data, we establish uniform Bahadur representations for the proposed median quantile estimates. The obtained Bahadur representations provide deep insights into the asymptotic behavior of the estimates. Our main theoretical development is based on studying the modulus of continuity of the kernel-weighted empirical process through a coupling argument. Progesterone data are used for illustration. PMID:24955016
Identifying Seizure Onset Zone From the Causal Connectivity Inferred Using Directed Information
NASA Astrophysics Data System (ADS)
Malladi, Rakesh; Kalamangalam, Giridhar; Tandon, Nitin; Aazhang, Behnaam
2016-10-01
In this paper, we develop a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from the brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information-theoretic quantity, is a general metric to infer causal connectivity between time series and is not restricted to a particular class of models, unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then propose a model-based and a data-driven SOZ identification algorithm to identify the SOZ from the causal connectivity inferred using the model-based and data-driven DI estimators, respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by neurologists, the current clinical gold standard. The causal connectivity analysis presented here is the first step towards developing novel non-surgical treatments for epilepsy.
Potential benefits of remote sensing: Theoretical framework and empirical estimate
NASA Technical Reports Server (NTRS)
Eisgruber, L. M.
1972-01-01
A theoretical framework is outlined for estimating the social returns from research on and application of remote sensing. The approximate dollar magnitude of a particular application of remote sensing, namely estimates of corn, soybean, and wheat production, is given. Finally, some comments are made on the limitations of this procedure and on the implications of the results.
Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system
NASA Astrophysics Data System (ADS)
Wang, Daobin; Yuan, Lihua; Lei, Jingli; Wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye
2017-12-01
In this paper, we focus on the analysis of preamble-based joint estimation of the channel and laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). In order to reduce the impact of noise on the estimation accuracy, we propose an estimation method based on inter-frame averaging. This method averages the cross-correlation function of real-valued pilots within multiple FBMC frames. The laser-frequency offset is estimated according to the phase of this average. After correcting the LFO, the final channel response is also acquired by averaging the channel estimation results within multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is thoroughly designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of our method is demonstrated numerically using different fiber and LFO values. The obtained results show that the proposed method can improve transmission performance significantly.
Kotani, Akira; Tsutsumi, Risa; Shoji, Asaki; Hayashi, Yuzuru; Kusu, Fumiyo; Yamamoto, Kazuhiro; Hakamata, Hideki
2016-07-08
This paper puts forward a time and material-saving method for evaluating the repeatability of area measurements in gradient HPLC with UV detection (HPLC-UV), based on the function of mutual information (FUMI) theory which can theoretically provide the measurement standard deviation (SD) and detection limits through the stochastic properties of baseline noise with no recourse to repetitive measurements of real samples. The chromatographic determination of terbinafine hydrochloride and enalapril maleate is taken as an example. The best choice of the number of noise data points, inevitable for the theoretical evaluation, is shown to be 512 data points (10.24s at 50 point/s sampling rate of an A/D converter). Coupled with the relative SD (RSD) of sample injection variability in the instrument used, the theoretical evaluation is proved to give identical values of area measurement RSDs to those estimated by the usual repetitive method (n=6) over a wide concentration range of the analytes within the 95% confidence intervals of the latter RSD. The FUMI theory is not a statistical one, but the "statistical" reliability of its SD estimates (n=1) is observed to be as high as that attained by thirty-one measurements of the same samples (n=31). Copyright © 2016 Elsevier B.V. All rights reserved.
Elseman, Ahmed Mourtada; Shalan, Ahmed Esmail; Sajid, Sajid; Rashad, Mohamed Mohamed; Hassan, Ali Mostafa; Li, Meicheng
2018-04-11
Toxicity and chemical instability issues of halide perovskites based on organic-inorganic lead-containing materials still remain the main drawbacks for perovskite solar cells (PSCs). Herein, we discuss the preparation of copper (Cu)-based hybrid materials, where we replace lead (Pb) with nontoxic Cu metal for lead-free PSCs, and investigate their potential toward solar cell applications based on experimental and theoretical studies. The formation of (CH3NH3)2CuX4 [(CH3NH3)2CuCl4, (CH3NH3)2CuCl2I2, and (CH3NH3)2CuCl2Br2] is discussed in detail. Furthermore, it was found that chlorine (Cl-) in the structure is critical for the stabilization of the formed compounds. Cu-based perovskite-like materials showed attractive absorbance features extended to the near-infrared range, with appropriate band gaps. Green photoluminescence of these materials was obtained because of Cu+ ions. The power conversion efficiency was measured experimentally and estimated theoretically for different architectures of solar cell devices.
Effective Tree Scattering and Opacity at L-Band
NASA Technical Reports Server (NTRS)
Kurum, Mehmet; O'Neill, Peggy E.; Lang, Roger H.; Joseph, Alicia T.; Cosh, Michael H.; Jackson, Thomas J.
2011-01-01
This paper investigates vegetation effects at L-band by using a first-order radiative transfer (RT) model and truck-based microwave measurements over natural conifer stands to assess the applicability of the tau-omega model over trees. The tau-omega model is a zero-order RT solution that accounts for vegetation effects with effective vegetation parameters (vegetation opacity and single-scattering albedo), which represent the canopy as a whole. This approach inherently ignores multiple-scattering effects and, therefore, has a limited validity depending on the level of scattering within the canopy. The fact that the scattering from large forest components such as branches and trunks is significant at L-band requires that zero-order vegetation parameters be evaluated (compared) along with their theoretical definitions to provide a better understanding of these parameters in retrieval algorithms as applied to trees. This paper compares the effective vegetation opacities, computed from multi-angular pine tree brightness temperature data, against the results of two independent approaches that provide theoretical and measured optical depths. These two techniques are based on forward scattering theory and radar corner reflector measurements, respectively. The results indicate that the effective vegetation opacity values are smaller than, but of similar magnitude to, both the radar and theoretical estimates. The effective opacity of the zero-order model is thus set equal to the theoretical opacity, and an explicit expression for the effective albedo is then obtained from the comparison of the zero- and first-order RT models. The resultant albedo is found to have a similar magnitude to the effective albedo value obtained from brightness temperature measurements. However, it is less than half of that estimated using the theoretical calculations (0.5 - 0.6 for tree canopies at L-band). This lower observed albedo balances the scattering darkening effect of the large theoretical albedo with a first-order multiple-scattering contribution. The retrieved effective albedo differs from the theoretical definitions and is no longer the albedo of single forest elements; it becomes a global parameter, which depends on all the processes taking place within the canopy, including multiple scattering.
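For readers unfamiliar with the zero-order model, its forward form is compact: soil emission attenuated once by the canopy, plus direct canopy emission and its soil-reflected bounce, parameterized by the effective opacity tau and single-scattering albedo omega discussed above. A minimal sketch with hypothetical inputs:

    import numpy as np

    def tau_omega_tb(tau, omega, soil_reflectivity, t_canopy, t_soil, theta_deg):
        """Zero-order (tau-omega) brightness temperature of a vegetated
        surface: attenuated soil emission plus direct canopy emission and
        its soil-reflected bounce."""
        gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))   # canopy transmissivity
        tb_soil = (1.0 - soil_reflectivity) * t_soil * gamma
        tb_veg = ((1.0 - omega) * (1.0 - gamma) * t_canopy
                  * (1.0 + soil_reflectivity * gamma))
        return tb_soil + tb_veg

    # Hypothetical L-band forest values (tau and omega are the effective
    # parameters discussed above):
    print(tau_omega_tb(tau=0.7, omega=0.08, soil_reflectivity=0.15,
                       t_canopy=290.0, t_soil=288.0, theta_deg=35.0))   # ~267 K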
Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest-Posttest Study.
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A
2008-09-01
The pretest-posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest-posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175).
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
Angular resolution of stacked resistive plate chambers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samuel, Deepak; Onikeri, Pratibha B.; Murgod, Lakshmi P., E-mail: deepaksamuel@cuk.ac.in, E-mail: pratibhaonikeri@gmail.com, E-mail: lakshmipmurgod@gmail.com
We present here detailed derivations of mathematical expressions for the accuracy in the arrival direction of particles estimated using a set of stacked resistive plate chambers (RPCs). The expressions are validated against experimental results using data collected from the prototype detectors (without magnet) of the upcoming India-based Neutrino Observatory (INO). We also present a theoretical estimate of angular resolution of such a setup. In principle, these expressions can be used for any other detector with an architecture similar to that of RPCs.
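The closed forms themselves are in the paper rather than the abstract, but the standard least-squares limit they build on is easy to state: for n equally spaced planes, the slope variance of a straight-line track fit is sigma_x^2 / (gap^2 * n(n^2 - 1)/12). A sketch under that assumption, with hypothetical INO-like numbers:

    import numpy as np

    def rpc_angular_resolution(sigma_x_mm, gap_mm, n_layers, theta_rad=0.0):
        """Angular resolution of a straight-line fit through n equally spaced
        layers with per-layer position resolution sigma_x. For equally spaced
        planes, sum (z_i - zbar)^2 = gap^2 * n (n^2 - 1) / 12, and converting
        the slope error to an angle error brings in a cos^2(theta) factor."""
        sigma_slope = sigma_x_mm / (gap_mm * np.sqrt(n_layers * (n_layers ** 2 - 1) / 12.0))
        return sigma_slope * np.cos(theta_rad) ** 2

    # Hypothetical numbers: ~9 mm strips (binary readout), 160 mm layer gaps.
    sigma_x = 9.0 / np.sqrt(12.0)
    print(np.degrees(rpc_angular_resolution(sigma_x, gap_mm=160.0, n_layers=10)))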
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuo, Rui; Wu, C. F. Jeff
Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend that line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
Exploring super-Gaussianity toward robust information-theoretical time delay estimation.
Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee
2013-03-01
Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competitive background noise. This paper investigates the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced by that of generalized Gaussian distribution that allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
Li, Zhenghan; Li, Xinyang
2018-04-30
Real-time transverse wind estimation contributes to predictive correction, which is used to compensate for the time-delay error in the control systems of adaptive optics (AO) systems. Many methods that apply Shack-Hartmann wave-front sensors to wind profile measurement have been proposed. One obvious problem is the lack of a fundamental benchmark against which to compare the various methods. In this work, we present the fundamental performance limits for transverse wind estimators based on Shack-Hartmann wave-front sensor measurements using the Cramér-Rao lower bound (CRLB). The bound provides insight into the nature of transverse wind estimation, thereby suggesting how to design and improve the estimator in different application scenarios. We analyze the theoretical bound and find that factors such as slope measurement noise, wind velocity and atmospheric coherence length r0 have an important influence on the performance. Then, we introduce the non-iterative gradient-based transverse wind estimator. The source of the deterministic bias of gradient-based transverse wind estimators is analyzed for the first time. Finally, we derive the biased CRLB for gradient-based transverse wind estimators from Shack-Hartmann wave-front sensor measurements; this bound can predict the performance of the estimator more accurately.
Resonator design and performance estimation for a space-based laser transmitter
NASA Astrophysics Data System (ADS)
Agrawal, Lalita; Bhardwaj, Atul; Pal, Suranjan; Kamalakar, J. A.
2006-12-01
Development of a laser transmitter for space applications is a highly challenging task. The laser must be rugged, reliable, lightweight, compact and energy efficient. Most of these features are inherently achieved by diode pumping of solid state lasers. Overall system reliability can be further improved by appropriate optical design of the laser resonator, besides selection of suitable electro-optical and opto-mechanical components. This paper presents the design details and the theoretically estimated performance of a crossed-Porro-prism-based, folded Z-shaped laser resonator. A symmetrically pumped Nd:YAG laser rod of 3 mm diameter and 60 mm length is placed in the gain arm, with a total input peak power of 1800 W from laser diode arrays. Electro-optical Q-switching is achieved through a combination of a polarizer, a fractional waveplate and a LiNbO3 Q-switch crystal (9 x 9 x 25 mm) placed in the feedback arm. Polarization-coupled output is obtained by optimizing the azimuth angle of the quarter-wave plate placed in the gain arm. Theoretical estimation of the laser output energy and pulse width has been carried out by varying input power levels and resonator length to analyse the performance tolerances. The designed system is capable of meeting the objective of generating laser pulses of 10 ns duration and 30 mJ energy @ 10 Hz.
On Short-Time Estimation of Vocal Tract Length from Formant Frequencies
Lammert, Adam C.; Narayanan, Shrikanth S.
2015-01-01
Vocal tract length is highly variable across speakers and determines many aspects of the acoustic speech signal, making it an essential parameter to consider for explaining behavioral variability. A method for accurate estimation of vocal tract length from formant frequencies would afford normalization of interspeaker variability and facilitate acoustic comparisons across speakers. A framework for considering estimation methods is developed from the basic principles of vocal tract acoustics, and an estimation method is proposed that follows naturally from this framework. The proposed method is evaluated using acoustic characteristics of simulated vocal tracts ranging from 14 to 19 cm in length, as well as real-time magnetic resonance imaging data with synchronous audio from five speakers whose vocal tracts range from 14.5 to 18.0 cm in length. Evaluations show improvements in accuracy over previously proposed methods, with 0.631 and 1.277 cm root mean square error on simulated and human speech data, respectively. Empirical results show that the effectiveness of the proposed method is based on emphasizing higher formant frequencies, which seem less affected by speech articulation. Theoretical predictions of formant sensitivity reinforce this empirical finding. Moreover, theoretical insights are explained regarding the reason for differences in formant sensitivity. PMID:26177102
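A minimal sketch in the spirit of the method described, assuming the uniform closed-open tube relation F_n = (2n - 1)c/(4L) and weights that increase with formant number to emphasize the higher formants; the specific weighting is an illustrative assumption, not the paper's exact estimator.

    import numpy as np

    def vtl_from_formants(formants_hz, c=34000.0):
        """Estimate vocal tract length (cm) from formants F1..Fn. Each
        formant yields L_n = (2n - 1) c / (4 F_n) under the uniform
        closed-open tube model; weights growing with formant number
        emphasize the higher formants, which are less perturbed by
        articulation (the weighting here is an assumption)."""
        F = np.asarray(formants_hz, dtype=float)
        n = np.arange(1, F.size + 1)
        L_n = (2 * n - 1) * c / (4.0 * F)
        w = n.astype(float)
        return np.sum(w * L_n) / np.sum(w)

    # Formants of an ideal 17 cm tube are ~500, 1500, 2500, 3500 Hz:
    print(vtl_from_formants([500.0, 1500.0, 2500.0, 3500.0]))   # ~17.0 cm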
Estimation of chaotic coupled map lattices using symbolic vector dynamics
NASA Astrophysics Data System (ADS)
Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya
2010-01-01
In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic vector dynamics based method was proposed for initial condition estimation in an additive white Gaussian noise environment. The precision of this estimation method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors with the backward vector and the estimated values by using different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. We thereby provide novel analytical techniques for understanding turbulence in coupled map lattices.
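For readers unfamiliar with the system being estimated, a minimal sketch of a diffusively coupled map lattice with a logistic local map follows; the parameters and lattice size are illustrative assumptions, and the Letter's symbolic correction machinery is not reproduced here.

```python
import numpy as np

# Diffusively coupled map lattice with periodic boundaries; this generates
# the kind of state whose initial condition the symbolic method recovers.

def cml_step(x, eps=0.1, a=3.9):
    f = a * x * (1.0 - x)                       # local logistic map
    return (1 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

rng = np.random.default_rng(0)
x = rng.random(16)                  # initial condition to be estimated
for _ in range(100):
    x = cml_step(x)
noisy = x + rng.normal(0.0, 1e-3, x.size)  # AWGN observation of the state
```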
Roca, Judith; Reguant, Mercedes; Canet, Olga
2016-11-01
Teaching strategies are essential in order to facilitate meaningful learning and the development of high-level thinking skills in students. The aim was to compare three teaching methodologies (problem-based learning, case-based teaching and traditional methods) in terms of the learning outcomes achieved by nursing students. This quasi-experimental research was carried out in the Nursing Degree programme in a group of 74 students who explored the subject of The Oncology Patient through the aforementioned strategies. A performance test was applied based on Bloom's Revised Taxonomy. A significant correlation was found between the intragroup theoretical and theoretical-practical dimensions. Likewise, intergroup differences were related to each teaching methodology. Hence, significant differences were estimated between the traditional methodology (x̄ = 9.13), case-based teaching (x̄ = 12.96) and problem-based learning (x̄ = 14.84). Problem-based learning was shown to be the most successful learning method, followed by case-based teaching and the traditional methodology. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bureau, Alexandre; Duchesne, Thierry
2015-12-01
Splitting extended families into their component nuclear families to apply a genetic association method designed for nuclear families is a widespread practice in familial genetic studies. Dependence among genotypes and phenotypes of nuclear families from the same extended family arises because of genetic linkage of the tested marker with a risk variant or because of familial specificity of genetic effects due to gene-environment interaction. This raises concerns about the validity of inference conducted under the assumption of independence of the nuclear families. We indeed prove theoretically that, in a conditional logistic regression analysis applicable to disease cases and their genotyped parents, the naive model-based estimator of the variance of the coefficient estimates underestimates the true variance. However, simulations with realistic effect sizes of risk variants and variation of this effect from family to family reveal that the underestimation is negligible. The simulations also show the greater efficiency of the model-based variance estimator compared to a robust empirical estimator. Our recommendation is therefore to use the model-based estimator of variance for inference on effects of genetic variants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogorelov, A. A.; Suslov, I. M.
2008-06-15
New estimates of the critical exponents have been obtained from the field-theoretical renormalization group using a new method for summing divergent series. The results almost coincide with the central values obtained by Le Guillou and Zinn-Justin (the so-called standard values), but have lower uncertainty. It has been shown that the usual field-theoretical estimates implicitly assume smoothness of the coefficient functions. The latter assumption is open to discussion in view of the existence of an oscillating contribution to the coefficient functions. An appropriate interpretation of this contribution is necessary both for estimating the systematic errors of the standard values and for a further increase in accuracy.
Conceptual Challenges in Coordinating Theoretical and Data-Centered Estimates of Probability
ERIC Educational Resources Information Center
Konold, Cliff; Madden, Sandra; Pollatsek, Alexander; Pfannkuch, Maxine; Wild, Chris; Ziedins, Ilze; Finzer, William; Horton, Nicholas J.; Kazak, Sibel
2011-01-01
A core component of informal statistical inference is the recognition that judgments based on sample data are inherently uncertain. This implies that instruction aimed at developing informal inference needs to foster basic probabilistic reasoning. In this article, we analyze and critique the now-common practice of introducing students to both…
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
ERIC Educational Resources Information Center
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
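A hedged sketch of the kind of asymptotic power calculation involved for a Wald test of a single coefficient (the paper's formulas, which handle DIF-specific designs and likelihood ratio tests, are more detailed): power follows from the normal approximation to the Wald statistic, with the per-subject Fisher information i1 supplied by the assumed covariate and group distribution; the values below are illustrative assumptions.

```python
from scipy.stats import norm

# Asymptotic power of a two-sided Wald test for one logistic coefficient:
# SE(beta_hat) ~= 1/sqrt(n * i1), power = P(|Z| > z_crit) under the shift.

def wald_power(beta, n, i1, alpha=0.05):
    se = (n * i1) ** -0.5
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(beta / se - z) + norm.cdf(-beta / se - z)

def sample_size(beta, i1, power=0.80, alpha=0.05):
    n = 10
    while wald_power(beta, n, i1, alpha) < power:
        n += 1
    return n

print(sample_size(beta=0.4, i1=0.15))  # subjects needed for 80% power
```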
Methodical Bases for the Regional Information Potential Estimation
ERIC Educational Resources Information Center
Ashmarina, Svetlana I.; Khasaev, Gabibulla R.; Mantulenko, Valentina V.; Kasarin, Stanislav V.; Dorozhkin, Evgenij M.
2016-01-01
The relevance of the investigated problem is caused by the need to assess the implementation of informatization level of the region and the insufficient development of the theoretical, content-technological, scientific and methodological aspects of the assessment of the region's information potential. The aim of the research work is to develop a…
Fuzzy Neural Network-Based Interacting Multiple Model for Multi-Node Target Tracking Algorithm
Sun, Baoliang; Jiang, Chunlan; Li, Ming
2016-01-01
An interacting multiple model for multi-node target tracking algorithm was proposed based on a fuzzy neural network (FNN) to solve the multi-node target tracking problem of wireless sensor networks (WSNs). The measurement error variance was adaptively adjusted during the multiple model interacting output stage using the difference between the theoretical and estimated values of the measurement error covariance matrix. The FNN fusion system was established during multi-node fusion to integrate the target state estimates from different nodes and consequently obtain the network-level target state estimate. The feasibility of the algorithm was verified on a network of nine detection nodes. Experimental results indicated that the proposed algorithm could track a maneuvering target effectively under sensor failure and unknown system measurement errors. The proposed algorithm exhibited great practicability for multi-node target tracking in WSNs. PMID:27809271
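A minimal sketch of the innovation-based adjustment idea described above, assuming linear measurements and a fixed window of innovations; matrix values are illustrative, and a production implementation would also project the adjusted R onto the positive semidefinite cone.

```python
import numpy as np

# Compare the theoretical innovation covariance H P H^T + R with its sample
# estimate over a window, and shift R by the difference.

def adapt_R(R, innovations, H, P):
    C_hat = np.cov(np.asarray(innovations).T, bias=True)  # sample covariance
    C_theory = H @ P @ H.T + R
    R_new = R + (C_hat - C_theory)      # shift R toward the observed spread
    return 0.5 * (R_new + R_new.T)      # keep symmetric (PSD not enforced)

H = np.eye(2); P = 0.2 * np.eye(2); R = np.eye(2)
window = np.random.default_rng(1).normal(size=(50, 2)) * 1.5
print(adapt_R(R, window, H, P))
```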
A time and frequency synchronization method for CO-OFDM based on CMA equalizers
NASA Astrophysics Data System (ADS)
Ren, Kaixuan; Li, Xiang; Huang, Tianye; Cheng, Zhuo; Chen, Bingwei; Wu, Xu; Fu, Songnian; Ping, Perry Shum
2018-06-01
In this paper, an efficient time and frequency synchronization method based on a new training symbol structure is proposed for polarization division multiplexing (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The coarse timing synchronization is achieved by exploiting the correlation property of the first training symbol, and the fine timing synchronization is accomplished by using the time-domain symmetric conjugate of the second training symbol. Furthermore, based on these training symbols, a constant modulus algorithm (CMA) is proposed for carrier frequency offset (CFO) estimation. Theoretical analysis and simulation results indicate that the algorithm is robust to poor optical signal-to-noise ratio (OSNR) and chromatic dispersion (CD). The frequency offset estimation range can reach [-(Nsc/2)·ΔfN, +(Nsc/2)·ΔfN] GHz with a mean normalized estimation error below 12 × 10⁻³, even at an OSNR as low as 10 dB.
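For background, a generic sketch of training-symbol-based coarse timing and CFO estimation in the Schmidl-and-Cox style follows; the paper's scheme differs in its symbol structure and in its CMA-based CFO estimator, so this illustrates the principle only.

```python
import numpy as np

# A training symbol with two identical halves gives coarse timing from an
# autocorrelation metric and a CFO estimate from the correlation angle.

def coarse_timing_and_cfo(r, N, fs):
    """r: received complex samples, N: symbol length (two N/2 halves), fs: sample rate."""
    L = N // 2
    metric = np.array([np.abs(np.vdot(r[d:d+L], r[d+L:d+2*L]))
                       for d in range(len(r) - N)])
    d_hat = int(np.argmax(metric))                      # coarse timing index
    corr = np.vdot(r[d_hat:d_hat+L], r[d_hat+L:d_hat+N])
    cfo_hat = np.angle(corr) * fs / (2 * np.pi * L)     # CFO in Hz
    return d_hat, cfo_hat
```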
A learning framework for age rank estimation based on face images with scattering transform.
Chang, Kuang-Yu; Chen, Chu-Song
2015-03-01
This paper presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age estimation based on face images. The proposed approach exploits relative-order information among the age labels for rank prediction. In our approach, the age rank is obtained by aggregating a series of binary classification results, where cost sensitivities among the labels are introduced to improve the aggregating performance. In addition, we give a theoretical analysis on designing the cost of each individual binary classifier so that the misranking cost can be bounded by the total misclassification costs. An efficient descriptor, the scattering transform, which scatters the Gabor coefficients and pools them with Gaussian smoothing in multiple layers, is evaluated for facial feature extraction. We show that this descriptor is a generalization of conventional bioinspired features and is more effective for face-based age inference. Experimental results demonstrate that our method outperforms the state-of-the-art age estimation approaches.
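The rank-by-aggregation step described above can be sketched in a few lines; the cost-sensitive training of the per-rank binary classifiers, which is the paper's contribution, is not shown, and the decision values here are stand-ins.

```python
import numpy as np

# K binary classifiers answer "is the age greater than rank k?"; the rank
# is obtained by counting positive answers.

def predict_rank(decision_values):
    """decision_values[k] > 0 means 'older than rank k'."""
    return 1 + int(np.sum(np.asarray(decision_values) > 0.0))

print(predict_rank([2.1, 1.3, 0.4, -0.2, -1.7]))  # -> rank 4
```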
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
This paper outlines methods for modeling, identification and estimation for static shape determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum likelihood, which finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white-noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data are processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating the performance of the shape determination methods.
NASA Astrophysics Data System (ADS)
Tsutsumi, Morito; Seya, Hajime
2009-12-01
This study discusses the theoretical foundation of the application of spatial hedonic approaches—the hedonic approach employing spatial econometrics and/or spatial statistics—to benefits evaluation. The study highlights the limitations of the spatial econometrics approach since it uses a spatial weight matrix that is not employed by the spatial statistics approach. Further, the study presents empirical analyses applying the Spatial Autoregressive Error Model (SAEM), which is based on the spatial econometrics approach, and the Spatial Process Model (SPM), which is based on the spatial statistics approach. SPMs are conducted based on both isotropy and anisotropy and applied to different mesh sizes. The empirical analysis reveals that the estimated benefits are quite different, especially between isotropic and anisotropic SPM and between isotropic SPM and SAEM; the estimated benefits are similar for SAEM and anisotropic SPM. The study demonstrates that the mesh size does not affect the estimated amount of benefits. Finally, the study provides a confidence interval for the estimated benefits and raises an issue with regard to benefit evaluation.
Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case
Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique
2017-01-01
This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor. PMID:28878144
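A minimal sketch of the covariance-triggered measurement-request logic described above, with a linear predictor standing in for the Unscented transformation; all matrices and the threshold are illustrative assumptions.

```python
import numpy as np

# Predict open-loop and request a sensor measurement only when the predicted
# error covariance exceeds an upper bound (here, on its trace).

def run(F, Q, H, R, P0, steps, trace_bound):
    P, requests = P0, []
    for k in range(steps):
        P = F @ P @ F.T + Q                       # prediction step
        if np.trace(P) > trace_bound:             # event: ask the camera
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            P = (np.eye(P.shape[0]) - K @ H) @ P  # measurement update
            requests.append(k)
    return requests

F = np.array([[1.0, 0.1], [0.0, 1.0]]); Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]]); R = np.array([[0.05]])
print(run(F, Q, H, R, 0.1 * np.eye(2), 100, trace_bound=0.5))
```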
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. In this study, to improve the quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on a singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and a stability analysis of the estimator is given. Theoretical analysis and simulation results confirm that the derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated using the analysis of beer spectra. The derivative spectra of the beer and marzipan data sets are used to build calibration models using partial least squares (PLS) modeling. The results show that PLS based on the new estimator can achieve better performance compared with the Savitzky-Golay algorithm and can serve as an alternative choice for quantitative analytical applications.
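For reference, the Savitzky-Golay baseline against which the SPSE is compared can be run in a few lines with SciPy; the window length, polynomial order and synthetic band below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

# First-derivative estimation of a noisy synthetic NIR band.
wavenumbers = np.linspace(4000.0, 10000.0, 1500)           # cm^-1, example grid
spectrum = np.exp(-((wavenumbers - 6000.0) / 400.0) ** 2)  # synthetic band
spectrum += np.random.default_rng(0).normal(0.0, 0.01, spectrum.size)

delta = wavenumbers[1] - wavenumbers[0]
d1 = savgol_filter(spectrum, window_length=21, polyorder=3, deriv=1, delta=delta)
```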
Amthor, Jeffrey S
2010-12-01
The relationship between solar radiation capture and potential plant growth is of theoretical and practical importance. The key processes constraining the transduction of solar radiation into phyto-energy (i.e. free energy in phytomass) were reviewed to estimate potential solar-energy-use efficiency. Specifically, the output:input stoichiometries of photosynthesis and photorespiration in C3 and C4 systems, mobilization and translocation of photosynthate, and biosynthesis of major plant biochemical constituents were evaluated. The maintenance requirement, an area of important uncertainty, was also considered. For a hypothetical C3 grain crop with a full canopy at 30°C and 350 ppm atmospheric [CO2], theoretically potential efficiencies (based on extant plant metabolic reactions and pathways) were estimated at c. 0.041 J J⁻¹ incident total solar radiation, and c. 0.092 J J⁻¹ absorbed photosynthetically active radiation (PAR). At 20°C, the calculated potential efficiencies increased to 0.053 and 0.118 J J⁻¹ (incident total radiation and absorbed PAR, respectively). Estimates for a hypothetical C4 cereal were c. 0.051 and c. 0.114 J J⁻¹, respectively. These values, which cannot be considered as precise, are less than some previous estimates, and the reasons for the differences are considered. Field-based data indicate that exceptional crops may attain a significant fraction of potential efficiency. © The Author (2010). Journal compilation © New Phytologist Trust (2010).
Parameter Estimation for Geoscience Applications Using a Measure-Theoretic Approach
NASA Astrophysics Data System (ADS)
Dawson, C.; Butler, T.; Mattis, S. A.; Graham, L.; Westerink, J. J.; Vesselinov, V. V.; Estep, D.
2016-12-01
Effective modeling of complex physical systems arising in the geosciences depends on knowing parameters which are often difficult or impossible to measure in situ. In this talk we focus on two such problems: estimating parameters for groundwater flow and contaminant transport, and estimating parameters within a coastal ocean model. The approach we describe, proposed by collaborators D. Estep, T. Butler and others, is a novel stochastic inversion technique based on measure theory. In this approach, given a probability space on certain observable quantities of interest, one searches for the sets of highest probability in parameter space which give rise to these observables. When viewed as mappings between sets, the stochastic inversion problem is well-posed in certain settings, but there are computational challenges related to the set construction. We focus the talk on estimating scalar parameters and fields in a contaminant transport setting, and on estimating bottom friction in a complicated near-shore coastal application.
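A hedged computational sketch in the spirit of this measure-theoretic (consistent) inversion: prior samples are pushed through the quantity-of-interest map and reweighted by the ratio of the observed density to the pushforward density, so that the weighted parameter samples reproduce the observed distribution on the observable. The QoI map and densities below are toy assumptions, not the groundwater or coastal models discussed in the talk.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(0)
lam = rng.uniform(0.0, 5.0, 20000)        # prior samples of a scalar parameter
q = np.sqrt(lam) + 0.1 * lam**2           # hypothetical model QoI map
pushforward = gaussian_kde(q)             # density of the QoI under the prior
observed = norm(loc=1.8, scale=0.1)       # assumed density on the observable

w = observed.pdf(q) / pushforward(q)      # reweighting ratio
w /= w.sum()
posterior_mean = np.sum(w * lam)          # weighted parameter estimate
print(posterior_mean)
```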
NASA Astrophysics Data System (ADS)
Chen, Shichao; Zhu, Yizheng
2017-02-01
Sensitivity is a critical index measuring the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis of sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, a major category of on-axis interferometric techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results can provide important insights into fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
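As background (a standard result, not the paper's specific derivation): for M independent frames with Poisson-distributed counts of mean λ_i(φ), the Fisher information about the phase φ and the resulting CRLB are

```latex
I(\varphi) = \sum_{i=1}^{M} \frac{1}{\lambda_i(\varphi)}
\left( \frac{\partial \lambda_i(\varphi)}{\partial \varphi} \right)^{2},
\qquad
\operatorname{Var}(\hat{\varphi}) \ \ge\ \frac{1}{I(\varphi)} .
```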
NASA Astrophysics Data System (ADS)
Koley, Susmita; Ghosh, Indranil
Quick and periodic inflow-outflow of adsorbate in an adsorbent column creates a differential temperature between its two ends, allowing for the generation of continuous sorption cooling in a single adsorbent tube. The concept has been proven experimentally and theoretically for near-room-temperature applications using the activated carbon-nitrogen pair. The feasibility of generating continuous solid sorption cooling in a single adsorbent tube in the cryogenic domain has been studied theoretically with a different adsorbent-adsorbate pair, namely activated carbon-hydrogen. Precooling of the gaseous hydrogen (before it enters the adsorbent column) and removal of the heat of adsorption have been achieved using liquid nitrogen. Theoretical estimation shows nearly a 20 K temperature difference between the two ends under no-load conditions. Finally, parametric variations have been performed.
Cost-estimating relationships for space programs
NASA Technical Reports Server (NTRS)
Mandell, Humboldt C., Jr.
1992-01-01
Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed, examining the sources of error in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.
2010-09-01
estimation of total exposure at any toxicological endpoint in the body. This effort is a significant contribution as it highlights future research needs ... rigorous modeling of the nanoparticle transport by including physico-chemical properties of engineered particles. Similarly, toxicological dose-response ... exposure risks as compared to larger sized particles of the same material. Although the toxicology of a base material may be thoroughly defined, the
Kouloulias, Vassilis; Karanasiou, Irene; Giamalaki, Melina; Matsopoulos, George; Kouvaris, John; Kelekis, Nikolaos; Uzunoglu, Nikolaos
2015-02-01
A hyperthermia system using a folded loop antenna applicator at 27 MHz for soft tissue treatment was investigated both theoretically and experimentally to evaluate its clinical value. The electromagnetic analysis of a 27-MHz folded loop antenna for use in human tissue was based on a customised software tool and led to the design and development of the proposed hyperthermia system. The system was experimentally validated using specific absorption rate (SAR) distribution estimations through temperature distribution measurements of a muscle tissue phantom after electromagnetic exposure. Various scenarios for optimal antenna positioning were also examined. Comparison of the theoretical and experimental analysis results shows satisfactory agreement. The 50% SAR level reaches a depth of 8 cm in the tissue phantom. Thus, based on the maximum observed SAR values, which were of the order of 100 W/kg, the specified antenna is suitable for deep tumour heating. Theoretical and experimental SAR distribution results derived from this study are in agreement. The proposed folded loop antenna seems appropriate for use in hyperthermia treatment, achieving proper planning and local treatment of deeply seated affected areas and lesions.
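For context, phantom-based SAR estimation of the kind described conventionally uses the initial linear temperature rise after power-on (a standard relation, not specific to this paper):

```latex
\mathrm{SAR} \;=\; c_p \left.\frac{\partial T}{\partial t}\right|_{t \to 0^{+}},
```

where c_p is the specific heat capacity of the phantom (J kg⁻¹ K⁻¹) and ∂T/∂t is the measured initial heating rate (K s⁻¹).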
Wientjes, Yvonne C J; Bijma, Piter; Vandenplas, Jérémie; Calus, Mario P L
2017-10-01
Different methods are available to calculate multi-population genomic relationship matrices. Since those matrices differ in base population, it is anticipated that the method used to calculate genomic relationships affects the estimates of genetic variances, covariances, and correlations. The aim of this article is to define the multi-population genomic relationship matrix to estimate current genetic variances within and genetic correlations between populations. The genomic relationship matrix containing two populations consists of four blocks, one block for population 1, one block for population 2, and two blocks for relationships between the populations. It is known from the literature that by using current allele frequencies to calculate genomic relationships within a population, current genetic variances are estimated. In this article, we theoretically derived the properties of the genomic relationship matrix to estimate genetic correlations between populations and validated it using simulations. When the scaling factor of across-population genomic relationships is equal to the product of the square roots of the scaling factors for within-population genomic relationships, the genetic correlation is estimated unbiasedly even though estimated genetic variances do not necessarily refer to the current population. When this property is not met, the correlation based on estimated variances should be multiplied by a correction factor based on the scaling factors. In this study, we present a genomic relationship matrix which directly estimates current genetic variances as well as genetic correlations between populations. Copyright © 2017 by the Genetics Society of America.
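A hedged sketch of the block structure described above, using VanRaden-style scaling factors as an assumption; the key property is that the across-population block is scaled by the square root of the product of the within-population scaling factors.

```python
import numpy as np

# Two-population genomic relationship matrix with explicit scaling factors.

def grm_two_pops(M1, M2, p1, p2):
    """M: allele counts (0/1/2), rows = animals; p: allele frequencies per population."""
    Z1, Z2 = M1 - 2 * p1, M2 - 2 * p2          # center within each population
    s1 = 2 * np.sum(p1 * (1 - p1))             # within-population scalings
    s2 = 2 * np.sum(p2 * (1 - p2))
    G11 = Z1 @ Z1.T / s1
    G22 = Z2 @ Z2.T / s2
    G12 = Z1 @ Z2.T / np.sqrt(s1 * s2)         # across-population block
    return np.block([[G11, G12], [G12.T, G22]])
```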
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
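For background on the indirect route described above, the Patlak step itself is a pixelwise linear least-squares fit once the tracer reaches its steady-state regime, C_t(t) ≈ Ki ∫C_p dt + V C_p(t). A minimal sketch with synthetic curves:

```python
import numpy as np

# Patlak graphical analysis as a linear least-squares problem in (Ki, V).

def cum_integral(cp, t):
    return np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])

def patlak_fit(ct, cp, t):
    A = np.column_stack([cum_integral(cp, t), cp])   # design matrix [∫Cp dt, Cp]
    (ki, v), *_ = np.linalg.lstsq(A, ct, rcond=None)
    return ki, v

t = np.linspace(0.5, 60.0, 24)                # frame mid-times, minutes
cp = np.exp(-0.1 * t) + 0.2                   # toy plasma input function
ct = 0.05 * cum_integral(cp, t) + 0.3 * cp    # ground truth Ki=0.05, V=0.3
print(patlak_fit(ct, cp, t))                  # -> approximately (0.05, 0.3)
```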
Empirical evidence for site coefficients in building code provisions
Borcherdt, R.D.
2002-01-01
Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g. For larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data, together with recent site characterizations based on shear-wave velocity measurements, provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by amounts up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.
Atomistic determination of flexoelectric properties of crystalline dielectrics
NASA Astrophysics Data System (ADS)
Maranganti, R.; Sharma, P.
2009-08-01
Upon application of a uniform strain, internal sublattice shifts within the unit cell of a noncentrosymmetric dielectric crystal result in the appearance of a net dipole moment: a phenomenon well known as piezoelectricity. A macroscopic strain gradient on the other hand can induce polarization in dielectrics of any crystal structure, even those which possess a centrosymmetric lattice. This phenomenon, called flexoelectricity, has both bulk and surface contributions: the strength of the bulk contribution can be characterized by means of a material property tensor called the bulk flexoelectric tensor. Several recent studies suggest that strain-gradient induced polarization may be responsible for a variety of interesting and anomalous electromechanical phenomena in materials including electromechanical coupling effects in nonuniformly strained nanostructures, “dead layer” effects in nanocapacitor systems, and “giant” piezoelectricity in perovskite nanostructures among others. In this work, adopting a lattice dynamics based microscopic approach we provide estimates of the flexoelectric tensor for certain cubic crystalline ionic salts, perovskite dielectrics, III-V and II-VI semiconductors. We compare our estimates with experimental/theoretical values wherever available and also revisit the validity of an existing empirical scaling relationship for the magnitude of flexoelectric coefficients in terms of material parameters. It is interesting to note that two independent groups report values of flexoelectric properties for perovskite dielectrics that are orders of magnitude apart: Cross and co-workers from Penn State have carried out experimental studies on a variety of materials including barium titanate while Catalan and co-workers from Cambridge used theoretical ab initio techniques as well as experimental techniques to study paraelectric strontium titanate as well as ferroelectric barium titanate and lead titanate. We find that, in the case of perovskite dielectrics, our estimates agree to an order of magnitude with the experimental and theoretical estimates for strontium titanate. For barium titanate however, while our estimates agree to an order of magnitude with existing ab initio calculations, there exists a large discrepancy with experimental estimates. The possible reasons for the observed deviations are discussed.
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
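A hedged sketch of the contrast involved: the formal covariance of weighted batch least squares versus a residual-based empirical covariance. The paper's empirical construction differs in detail; the residual-scaled variant below is a common, simple stand-in.

```python
import numpy as np

# Weighted batch least squares with a theoretical and an empirical covariance.

def batch_lsq(H, W, y):
    P_theory = np.linalg.inv(H.T @ W @ H)        # formal covariance
    x_hat = P_theory @ H.T @ W @ y
    r = y - H @ x_hat                            # residuals
    dof = y.size - x_hat.size
    P_emp = P_theory * (r.T @ W @ r) / dof       # residual-scaled covariance
    return x_hat, P_theory, P_emp
```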
Theoretical studies on a new furazan compound bis[4-nitramino-furazanyl-3-azoxy]azofurazan (ADNAAF).
Zheng, Chunmei; Chu, Yuting; Xu, Liwen; Wang, Fengyun; Lei, Wu; Xia, Mingzhu; Gong, Xuedong
2016-06-01
Bis[4-nitraminofurazanyl-3-azoxy]azofurazan (ADNAAF), synthesized in our previous work [1], contains four furazan units connected by azo- and azoxy-group linkages. To support further research, some theoretical characteristics were studied by the density functional theory (DFT) method. The optimized structures and the energy gaps between the HOMO and LUMO were studied at the B3LYP/6-311++G** level. The isodesmic reaction method was used for estimating the enthalpy of formation. The detonation performances were estimated with the Kamlet-Jacobs equations based on the predicted density and enthalpy of formation in the solid state. ADAAF was also calculated by the same method for comparison. It was found that the nitramino group of ADNAAF elongates the adjacent C-N bonds more than the amino group of ADAAF does. The gas-phase and solid-phase enthalpies of formation of ADNAAF are larger than those of ADAAF. The detonation performances of ADNAAF are better than those of ADAAF and RDX, and similar to HMX. The trigger bond of ADNAAF is the N-N bond in the nitramino groups, and the nitramino group is more active than the amino group (-NH2).
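For reference, the Kamlet-Jacobs estimates referred to above take the standard form (D the detonation velocity in km/s, P the detonation pressure in GPa, ρ0 the loaded density in g/cm³, N the moles of gaseous detonation products per gram of explosive, M̄ their mean molar mass in g/mol, and Q the heat of detonation in cal/g):

```latex
\Phi = N \bar{M}^{1/2} Q^{1/2}, \qquad
D = 1.01\,\Phi^{1/2}\left(1 + 1.30\,\rho_0\right), \qquad
P = 1.558\,\rho_0^{2}\,\Phi .
```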
ERIC Educational Resources Information Center
Hannon, Brenda; Frias, Sarah
2012-01-01
The present study reports the development of a theoretically motivated measure that provides estimates of a preschooler's ability to recall auditory text, to make text-based inferences, to access knowledge from long-term memory, and to integrate this accessed knowledge with new information from auditory text. This new preschooler component…
Bullying and HIV Risk among High School Teenagers: The Mediating Role of Teen Dating Violence
ERIC Educational Resources Information Center
Okumu, Moses; Mengo, Cecilia; Ombayo, Bernadette; Small, Eusebius
2017-01-01
Background: Teen dating violence (TDV), bullying, and HIV risk behaviors are public health concerns that impact adolescents in the United States. National estimates reveal high rates of these risk behaviors among high school students. Based on theoretical and empirical evidence, we hypothesized that experiencing teen dating violence (sexual and…
Some Simple Solutions to the Problem of Predicting Boundary-Layer Self-Induced Pressures
NASA Technical Reports Server (NTRS)
Bertram, Mitchel H.; Blackstock, Thomas A.
1961-01-01
Simplified theoretical approaches, based on hypersonic similarity boundary-layer theory, are shown to allow reasonably accurate estimates of the surface pressures on plates on which viscous effects are important. The consideration of viscous effects includes cases where curved surfaces, stream pressure gradients, and leading-edge bluntness are important factors.
Minimum area requirements for an at-risk butterfly based on movement and demography.
Brown, Leone M; Crone, Elizabeth E
2016-02-01
Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
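The diffusion logic above echoes the classical critical-patch-size (KISS) result, stated here as background rather than the paper's exact model: for intrinsic growth rate r and diffusion coefficient D, a one-dimensional patch supports a population only if its length exceeds

```latex
L_{\mathrm{crit}} = \pi \sqrt{D/r},
```

with analogous threshold radii in two dimensions; the paper's CMP refines this idea with measured movement and demographic rates.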
Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.
Azzari, Lucio; Foi, Alessandro
2014-08-01
We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.
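As background, a common parameterization of such signal-dependent noise writes the observation z at a pixel of true intensity y as

```latex
z = y + \sigma(y)\,\xi, \qquad \sigma^{2}(y) = a\,y + b,
```

where ξ is zero-mean, unit-variance noise, a captures the Poissonian (signal-dependent) component and b the Gaussian (signal-independent) one; estimating (a, b) from heterogeneous patches is the task addressed above.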
Efficient calibration for imperfect computer models
Tuo, Rui; Wu, C. F. Jeff
2015-12-01
Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, calibration methods based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
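A hedged toy sketch of the calibration idea: pick the simulator parameter minimizing the squared distance between model output and field data over the input domain. For brevity this collapses the nonparametric smoothing step of L2 calibration and so resembles the ordinary-least-squares variant also analyzed in the paper; the model, truth and data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
physical = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05, x.size)  # field data

def computer_model(x, theta):
    return np.sin(2 * np.pi * x) * theta        # imperfect toy simulator

res = minimize(lambda th: np.mean((physical - computer_model(x, th[0]))**2),
               x0=[0.5])
print(res.x)  # calibrated parameter, close to 1.0
```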
Fuel Burn Estimation Using Real Track Data
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.
2011-01-01
A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
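A minimal sketch of the force-balance step described above, assuming a point-mass model: thrust is recovered as T = D + m(dV/dt) + m g sin γ, and fuel burn integrates a fuel-flow closure in thrust and true airspeed. The fuel-flow function below is a generic placeholder, not the BADA tables used in the paper.

```python
import numpy as np

G = 9.81  # m/s^2

def thrust_from_track(mass, v_true, gamma, drag, dt):
    """Point-mass force balance along the flight path."""
    accel = np.gradient(v_true, dt)
    return drag + mass * accel + mass * G * np.sin(gamma)

def fuel_burned(thrust, v_true, dt, eta=0.6e-5):
    ff = eta * thrust * (1.0 + v_true / 300.0)   # placeholder fuel-flow model
    return np.sum(ff * dt)                       # integrated fuel burn
```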
Sliding mode observers for automotive alternator
NASA Astrophysics Data System (ADS)
Chen, De-Shiou
Estimator development for synchronous rectification of the automotive alternator is a desirable approach for estimating the alternator's back electromotive forces (EMFs) without a direct mechanical sensor of the rotor position. Recent theoretical studies show that the back EMF may be estimated from the system's phase-current model by sensing electrical variables (AC phase currents and DC bus voltage) of the synchronous rectifier. Observer designs for back EMF estimation have previously been developed for constant engine speed. In this work, we are interested in nonlinear observer design for back EMF estimation in the realistic case of variable engine speed. An initial back EMF estimate can be obtained from a first-order sliding mode observer (SMO) based on the phase-current model. A fourth-order nonlinear asymptotic observer (NAO), complemented by the dynamics of the back EMF with time-varying frequency and amplitude, is then incorporated into the observer design for chattering reduction. Since the cost of the required phase current sensors may be prohibitive, the most applicable approach in real implementations, measuring the DC current of the synchronous rectifier, is carried out in the dissertation. It is shown that the DC link current consists of sequential "windows" with partial information about the phase currents; hence, the cascaded NAO is responsible not only for chattering reduction but also for completing the estimation process. Stability analyses of the proposed estimators are given for most linear and time-varying cases. The stability of the NAO without speed information is substantiated by both numerical and experimental results. Prospective estimation algorithms for the case of battery current measurements are also investigated. Theoretical study indicates that convergence of the proposed LAO may be ensured by high-gain inputs. Since the order of the LAO/NAO for the battery current case is one higher than that for link current measurements, it is hard to find moderate values of the input gains for real-time sampled-data systems. Technical difficulties in implementing such high-order discrete-time nonlinear estimators are discussed, and directions for further investigation are provided.
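A hedged sketch of the first-order sliding mode observer idea for one phase, assuming the simplified dynamics L di/dt = v - R i - e with unknown back EMF e: a switching injection dominates e, and its low-pass-filtered (equivalent) value provides the EMF estimate. Gains and filter constants are illustrative assumptions.

```python
import numpy as np

def smo_backemf(v, i_meas, R, L, k, dt, tau=1e-3):
    """Returns the filtered EMF estimate; plant model: L di/dt = v - R*i - e."""
    i_hat, z, e_out = 0.0, 0.0, []
    for vk, ik in zip(v, i_meas):
        inj = k * np.sign(ik - i_hat)             # switching injection, k > |e|
        i_hat += dt * (vk - R * i_hat + inj) / L  # observer current dynamics
        z += dt * (inj - z) / tau                 # low-pass filter the injection
        e_out.append(-z)                          # equivalent injection ~ -e
    return np.array(e_out)
```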
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1993-01-01
Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consisted of studies to enable the transfer of technology to industry, for example risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimating the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
Zhang, Junwen; Yu, Jianjun; Chi, Nan; Chien, Hung-Chang
2014-08-25
We theoretically and experimentally investigate a time-domain digital pre-equalization (DPEQ) scheme for bandwidth-limited optical coherent communication systems, which is based on feedback of channel characteristics from receiver-side blind and adaptive equalizers, such as the least-mean-squares (LMS) algorithm and constant- or multi-modulus algorithms (CMA, MMA). Based on the proposed DPEQ scheme, we theoretically and experimentally study its performance under various channel conditions and resolutions for channel estimation, such as filtering bandwidth, tap length, and OSNR. Using a high-speed 64-GSa/s DAC together with the proposed DPEQ technique, we successfully synthesized band-limited 40-Gbaud signals in modulation formats of polarization-division multiplexed (PDM) quadrature phase shift keying (QPSK), 8-quadrature amplitude modulation (QAM) and 16-QAM, and significant improvements in both back-to-back and transmission BER performance are demonstrated.
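For background, the receiver-side CMA update whose converged behavior supplies the DPEQ feedback can be sketched as a stochastic-gradient tap update on the constant-modulus cost; the tap count, step size and target modulus below are assumptions.

```python
import numpy as np

# CMA: minimize E[(|y|^2 - R2)^2] with y = w^H x via stochastic gradient.

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    w = np.zeros(n_taps, dtype=complex); w[n_taps // 2] = 1.0  # center spike init
    for k in range(n_taps, len(x)):
        xk = x[k - n_taps:k][::-1]            # regression vector
        y = np.vdot(w, xk)                    # equalizer output, w^H x
        e = y * (np.abs(y) ** 2 - R2)         # CMA error term
        w -= mu * np.conj(e) * xk             # stochastic gradient step
    return w
```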
NASA Astrophysics Data System (ADS)
van Dijk, Albert I. J. M.; Peña-Arancibia, Jorge L.; Wood, Eric F.; Sheffield, Justin; Beck, Hylke E.
2013-05-01
Ideally, a seasonal streamflow forecasting system would ingest skilful climate forecasts and propagate these through calibrated hydrological models initialized with observed catchment conditions. At global scale, practical problems exist in each of these aspects. For the first time, we analyzed theoretical and actual skill in bimonthly streamflow forecasts from a global ensemble streamflow prediction (ESP) system. Forecasts were generated six times per year for 1979-2008 by an initialized hydrological model and an ensemble of 1° resolution daily climate estimates for the preceding 30 years. A post-ESP conditional sampling method was applied to 2.6% of forecasts, based on predictive relationships between precipitation and 1 of 21 climate indices prior to the forecast date. Theoretical skill was assessed against a reference run with historic forcing. Actual skill was assessed against streamflow records for 6192 small (<10,000 km2) catchments worldwide. The results show that initial catchment conditions provide the main source of skill. Post-ESP sampling enhanced skill in equatorial South America and Southeast Asia, particularly in terms of tercile probability skill, due to the persistence and influence of the El Niño Southern Oscillation. Actual skill was on average 54% of theoretical skill but considerably more for selected regions and times of year. The realized fraction of the theoretical skill probably depended primarily on the quality of precipitation estimates. Forecast skill could be predicted as the product of theoretical skill and historic model performance. Increases in seasonal forecast skill are likely to require improvement in the observation of precipitation and initial hydrological conditions.
Estimation of wing nonlinear aerodynamic characteristics at supersonic speeds
NASA Technical Reports Server (NTRS)
Carlson, H. W.; Mack, R. J.
1980-01-01
A computational system for estimation of nonlinear aerodynamic characteristics of wings at supersonic speeds was developed and was incorporated in a computer program. This corrected linearized theory method accounts for nonlinearities in the variation of basic pressure loadings with local surface slopes, predicts the degree of attainment of theoretical leading edge thrust, and provides an estimate of detached leading edge vortex loadings that result when the theoretical thrust forces are not fully realized.
The Biot coefficient for a low permeability heterogeneous limestone
NASA Astrophysics Data System (ADS)
Selvadurai, A. P. S.
2018-04-01
This paper presents the experimental and theoretical developments used to estimate the Biot coefficient for the heterogeneous Cobourg Limestone, which is characterized by its very low permeability. The coefficient forms an important component of the Biot poroelastic model that is used to examine coupled hydro-mechanical and thermo-hydro-mechanical processes in the fluid-saturated Cobourg Limestone. The constraints imposed by both the heterogeneous fabric and its extremely low intact permeability (K ∈ (10⁻²³, 10⁻²⁰) m²) require the development of alternative approaches to estimate the Biot coefficient. Large-specimen bench-scale triaxial tests (150 mm diameter and 300 mm long) that account for the scale of the heterogeneous fabric are complemented by results for the volume-fraction-based mineralogical composition derived from XRD measurements. The compressibility of the solid phase is estimated from theoretical developments in the mechanics of multi-phasic elastic materials, the only feasible approach for this rock. The presence of a number of mineral species necessitates the use of the theories of Voigt, Reuss and Hill, along with the bounds proposed by Hashin and Shtrikman, for the compressibility of the multi-phasic geologic material composing the skeletal fabric. The analytical estimates of the Biot coefficient for the Cobourg Limestone are compared with results for similar low permeability rocks reported in the literature.
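A minimal sketch of the mineralogy-based step described above: Voigt and Reuss bounds (and their Hill average) for the solid-grain modulus from XRD volume fractions, followed by the Biot coefficient α = 1 - K_drained/K_solid. The mineral moduli and fractions below are illustrative assumptions, and the Hashin-Shtrikman bounds used in the paper are tighter.

```python
import numpy as np

def biot_coefficient(K_drained, fractions, K_minerals):
    f, K = np.asarray(fractions), np.asarray(K_minerals)
    K_voigt = np.sum(f * K)                   # upper bound on grain modulus
    K_reuss = 1.0 / np.sum(f / K)             # lower bound
    K_hill = 0.5 * (K_voigt + K_reuss)        # Hill average
    return 1.0 - K_drained / K_hill

# e.g., a calcite/dolomite/quartz-like mixture (GPa), drained modulus 35 GPa
print(biot_coefficient(35.0, [0.7, 0.2, 0.1], [76.0, 95.0, 37.0]))  # ~0.52
```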
Economic communication model set
NASA Astrophysics Data System (ADS)
Zvereva, Olga M.; Berg, Dmitry B.
2017-06-01
This paper details findings from research on economic communications using agent-based models. An agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model is based on the same general concept but has its own peculiarities in algorithm and input data, since each was engineered to solve a specific problem. Data sets of several different origins were used in the experiments: theoretical sets were estimated from Leontief's static equilibrium equation, and the real set was constructed from statistical data. During the simulation experiments, the communication process was observed dynamically and system macroparameters were estimated. This research confirmed that combining agent-based and mathematical models can produce a synergetic effect.
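For reference, the static Leontief equilibrium used for the theoretical data sets solves x = A x + d for the gross-output vector, i.e. x = (I - A)⁻¹ d; the coefficient matrix and demand vector below are toy assumptions.

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])              # inter-industry technical coefficients
d = np.array([10.0, 20.0])              # final demand per sector

x = np.linalg.solve(np.eye(2) - A, d)   # equilibrium gross outputs
print(x)
```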
Testing Software Development Project Productivity Model
NASA Astrophysics Data System (ADS)
Lipkin, Ilya
Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available that can guide an organization in its quest for software development, with existing estimation models often underestimating software development efforts by as much as 500 to 600 percent. To address this issue, existing models usually are calibrated using local data with a small sample size, with the resulting estimates not offering improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and theoretical analysis grounded in Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines key contributing constructs that are unique to Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for customers and suppliers. Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains, such as IT, command and control, and simulation. This research validates findings from previous work concerning software project productivity and leverages those results in this study. The hypothesized project productivity model provides statistical support and validation of expert opinions used by practitioners in the field of software project estimation.
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from empirical studies using animals as experimental subjects or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratios (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method for extended calibration curve studies using other wavelength pairs.
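As context for how such a calibration curve is used downstream (a textbook relation, not the curve estimated in this paper): the ratio of ratios R of pulsatile (AC) to baseline (DC) absorbance at the two wavelengths is mapped to saturation, classically via a linear approximation whose constants below are common textbook values, not this study's results.

```python
import numpy as np

def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_from_ratio(R, a=110.0, b=25.0):     # common empirical constants
    return np.clip(a - b * R, 0.0, 100.0)

print(spo2_from_ratio(ratio_of_ratios(0.02, 1.0, 0.04, 1.0)))  # ~97.5%
```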
NASA Astrophysics Data System (ADS)
Saikia, Banashree
2017-03-01
An overview of predominant theoretical models used for predicting the thermal conductivities of dielectric materials is given, and the criteria used for the different theoretical models are explained. The overview highlights a unified theory based on temperature-dependent thermal-conductivity theories, in which drift of the equilibrium phonon distribution function due to normal three-phonon scattering processes transfers phonon momentum (a) within the same phonon modes (KK-S model) and (b) across phonon modes (KK-H model). Estimates of the lattice thermal conductivities of LiF and Mg2Sn for the KK-H model are presented graphically.
Risk of symptomatic dengue for foreign visitors to the 2014 FIFA World Cup in Brazil
Massad, Eduardo; Wilder-Smith, Annelies; Ximenes, Raphael; Amaku, Marcos; Lopez, Luis Fernandez; Coutinho, Francisco Antonio Bezerra; Coelho, Giovanini Evelim; da Silva, Jarbas Barbosa; Struchiner, Claudio José; Burattini, Marcelo Nascimento
2014-01-01
Brazil will host the FIFA World Cup™, the biggest single-event competition in the world, from June 12-July 13 2014 in 12 cities. This event will draw an estimated 600,000 international visitors. Brazil is endemic for dengue; hence, attendees of the 2014 event are theoretically at risk for dengue. We calculated the risk of dengue acquisition for non-immune international travellers to Brazil, depending on the football match schedules, considering the locations and dates of matches in June and July 2014. We estimated the average per-capita risk and the expected number of dengue cases for each host city and each game schedule chosen, based on dengue cases reported to the Brazilian Ministry of Health for the period 2010-2013. On average, the expected number of cases among the 600,000 foreign tourists during the World Cup is 33, varying from 3 to 59. Such risk estimates will not only benefit individual travellers in making adequate pre-travel preparations, but also provide valuable information for public health professionals and policy makers worldwide. Furthermore, estimates of dengue cases in international travellers during the World Cup can help to anticipate the theoretical risk of exportation of dengue into currently non-infected areas. PMID:24863976
Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong
2012-01-01
Deep displacement observation is a basic means of landslide dynamic study and early-warning monitoring, and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement, and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslides and related geological engineering settings, both horizontal and vertical displacement vary appreciably and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor deep horizontal and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) is proposed to quantitatively describe II-type sensors' mutual inductance properties with respect to the predicted horizontal and vertical displacements. After detailed examinations and comparative studies between the measured mutual inductance voltage, NIELA-based mutual inductance and EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for characterizing II-type sensors. The NIELA model is widely applicable to II-type sensor monitoring of all kinds of landslides and related geohazards, with satisfactory estimation accuracy and calculation efficiency. PMID:22368467
Frederiksen, Kirsten; Deltour, Isabelle; Schüz, Joachim
2012-12-10
Estimating exposure-outcome associations using laterality information on exposure and on outcome is an issue when estimating associations between mobile phone use and brain tumour risk. The exposure is localized; therefore, a potential risk is expected to exist primarily on the side of the head where the phone is usually held (ipsilateral exposure), and to a lesser extent on the opposite side of the head (contralateral exposure). Several measures of the associations with ipsilateral and contralateral exposure, dealing with different sampling designs, have been presented in the literature. This paper presents a general framework for the analysis of such studies using a likelihood-based approach in a competing risks model setting. The approach clarifies the implicit assumptions required for the validity of the presented estimators, particularly that in some approaches the risk with contralateral exposure is assumed to be zero. The performance of the estimators is illustrated in a simulation study, showing for instance that while some scenarios entail a loss of statistical power, others - in the case of a positive ipsilateral exposure-outcome association - would result in a negatively biased estimate of the contralateral exposure parameter, irrespective of any additional recall bias. In conclusion, our theoretical evaluations and results from the simulation study emphasize the importance of setting up a formal model, which furthermore allows for estimation in more complicated and perhaps more realistic exposure settings, such as taking into account exposure to both sides of the head. Copyright © 2012 John Wiley & Sons, Ltd.
USDA-ARS?s Scientific Manuscript database
A theoretical model for the prediction of biomass concentration under real flue gas emissions has been developed. The model considers the CO2 mass transfer rate, the critical SOx concentration and its role in pH-based inter-conversion of bicarbonate in model building. The calibration and subsequent v...
A new fictitious domain approach for Stokes equation
NASA Astrophysics Data System (ADS)
Yang, Min
2017-10-01
The purpose of this paper is to present a new fictitious domain approach based on Nitsche's method combined with a penalty method for the Stokes equation. This method allows for an easy and flexible handling of the geometrical aspects. Stability and an a priori error estimate are proved. Finally, a numerical experiment is provided to verify the theoretical findings.
Uncertainty Propagation and the Fano-Based Information-Theoretic Method: A Radar Example
2015-02-01
Hogg, “Phase transitions and the search problem”, Artificial Intelligence, vol. 81, 1996, pp. 1-15. [39] R... dispersion of the mean mutual information of the estimate is low enough to support the use of the linear approximation.
Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators
Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.
2003-01-01
Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches.
Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
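The grid-based estimator discussed above reduces to D̂ = N̂/Â, with the effective area Â obtained by padding the trapping grid with a boundary strip. A minimal sketch under assumed values, using the full MMDM as the strip width as the abstract describes; in practice N̂ would come from program CAPTURE:

```python
import math

# Minimal sketch of grid-based density estimation D = N / A with the
# effective sampling area inflated by a boundary strip. All values are
# hypothetical stand-ins for CAPTURE output and species-specific MMDM data.

N_hat = 45.0   # estimated population size (e.g., from program CAPTURE)
L = 90.0       # side length of the square trapping grid (m)
W = 30.0       # boundary strip width, here the full MMDM (m)

# Square grid plus a boundary strip with rounded corners
A_m2 = L**2 + 4.0 * L * W + math.pi * W**2
A_hat_ha = A_m2 / 10_000.0                    # m^2 -> hectares

D_hat = N_hat / A_hat_ha
print(f"Effective area: {A_hat_ha:.2f} ha, density: {D_hat:.1f} animals/ha")
```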
The global contribution of energy consumption by product exports from China.
Tang, Erzi; Peng, Chong
2017-06-01
This paper presents a model of the mechanism by which product exports contribute to global energy usage. The theoretical analysis is based on the view that contributions should be estimated within relatively small sectors whose production characteristics, such as the productivity distribution of each sector, can be taken into account. We then constructed a method to measure the global contribution of energy usage. A simple estimate is the ratio of goods-export volume to GDP multiplied by total energy consumption, but this method underestimates the global contribution because it ignores the structure of energy consumption and product exports in China. Using our measurement method and the theoretical analysis, we calculated the global contribution of energy consumption by industrial manufactured product exports at the level of smaller sectors within each industry or manufacturing sector. The results indicated that approximately 42% of the total energy usage of the whole economy of China in 2013 was attributable to foreign regions. Including primary product and service exports, the global contribution of energy consumption for China in 2013 through exports was larger than 42% of total energy usage.
Number of perceptually distinct surface colors in natural scenes.
Marín-Franch, Iván; Foster, David H
2010-09-30
The ability to perceptually identify distinct surfaces in natural scenes by virtue of their color depends not only on the relative frequency of surface colors but also on the probabilistic nature of observer judgments. Previous methods of estimating the number of discriminable surface colors, whether based on theoretical color gamuts or recorded from real scenes, have taken a deterministic approach. Thus, a three-dimensional representation of the gamut of colors is divided into elementary cells or points which are spaced at one discrimination-threshold unit intervals and which are then counted. In this study, information-theoretic methods were used to take into account both differing surface-color frequencies and observer response uncertainty. Spectral radiances were calculated from 50 hyperspectral images of natural scenes and were represented in a perceptually almost uniform color space. The average number of perceptually distinct surface colors was estimated as 7.3 × 10³, much smaller than that based on counting methods. This number is also much smaller than the number of distinct points in a scene that are, in principle, available for reliable identification under illuminant changes, suggesting that color constancy, or the lack of it, does not generally determine the limit on the use of color for surface identification.
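For contrast with the information-theoretic estimate, the deterministic counting approach the abstract describes can be sketched directly: quantize colors into cells one discrimination threshold on a side and count the occupied cells. The color samples here are random stand-ins, not hyperspectral scene data:

```python
import numpy as np

# Minimal sketch of the deterministic "counting" approach: quantize a
# cloud of scene colors into cells one discrimination threshold on a
# side in a 3-D uniform color space, then count the occupied cells.

rng = np.random.default_rng(0)
colors = rng.normal(size=(100_000, 3)) * 20.0  # hypothetical (L*, a*, b*)-like coords

threshold = 1.0                                # assumed one JND per cell edge
cells = np.floor(colors / threshold).astype(int)
n_distinct = len({tuple(c) for c in cells})
print(f"Occupied one-threshold cells: {n_distinct}")
```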
Joint Bearing and Range Estimation of Multiple Objects from Time-Frequency Analysis.
Liu, Jeng-Cheng; Cheng, Yuang-Tung; Hung, Hsien-Sen
2018-01-19
Direction-of-arrival (DOA) and range estimation is an important issue in sonar signal processing. In this paper, a novel approach using the Hilbert-Huang transform (HHT) is proposed for joint bearing and range estimation of multiple targets based on a uniform linear array (ULA) of hydrophones. The structure of this ULA is based on micro-electro-mechanical systems (MEMS) technology, and thus it has the attractive features of small size, high sensitivity and low cost, and is suitable for Autonomous Underwater Vehicle (AUV) operations. The proposed target localization method has the following advantages: only a single snapshot of data is needed, and real-time processing is feasible. The proposed algorithm transforms a very complicated nonlinear estimation problem into a simple, nearly linear one via time-frequency distribution (TFD) theory and is verified with the HHT. Theoretical discussions of the resolution issue are also provided to facilitate the design of a MEMS sensor with high sensitivity. Simulation results are shown to verify the effectiveness of the proposed method.
Meta-analysis on the effectiveness of team-based learning on medical education in China.
Chen, Minjian; Ni, Chunhui; Hu, Yanhui; Wang, Meilin; Liu, Lu; Ji, Xiaoming; Chu, Haiyan; Wu, Wei; Lu, Chuncheng; Wang, Shouyu; Wang, Shoulin; Zhao, Liping; Li, Zhong; Zhu, Huijuan; Wang, Jianming; Xia, Yankai; Wang, Xinru
2018-04-10
Team-based learning (TBL) has been adopted as a new medical pedagogical approach in China. However, there are no studies or reviews summarizing the effectiveness of TBL on medical education. This study aims to obtain an overall estimate of the effectiveness of TBL on outcomes of theoretical teaching in medical education in China. We retrieved studies from inception through December 2015. The Chinese National Knowledge Infrastructure, Chinese Biomedical Literature Database, Chinese Wanfang Database, Chinese Scientific Journal Database, PubMed, EMBASE and Cochrane Database were searched. The quality of included studies was assessed by the Newcastle-Ottawa scale. The standardized mean difference (SMD) was applied for the estimation of pooled effects. Heterogeneity was detected by the I² statistic and further explored by meta-regression analysis. A total of 13 articles including 1545 participants entered the meta-analysis. The quality scores of these studies ranged from 6 to 10. Altogether, TBL significantly increased students' theoretical examination scores when compared with lecture-based learning (LBL) (SMD = 2.46, 95% CI: 1.53-3.40). Additionally, TBL significantly increased students' learning attitude (SMD = 3.23, 95% CI: 2.27-4.20) and learning skill (SMD = 2.70, 95% CI: 1.33-4.07). The meta-regression results showed that randomization, education classification and gender diversity were the factors that caused heterogeneity. TBL in theoretical teaching of medical education seems to be more effective than LBL in improving the knowledge, attitude and skill of students in China, providing evidence for the implementation of TBL in medical education in China. Medical schools should implement TBL with consideration of practical teaching conditions such as students' education level.
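The pooling machinery behind such a meta-analysis is compact. A minimal sketch of a DerSimonian-Laird random-effects pool of SMDs with the I² heterogeneity statistic; the per-study effects and variances are invented for illustration, not the 13 studies above:

```python
import numpy as np

# Minimal sketch: DerSimonian-Laird random-effects pooling of SMDs and
# the I^2 heterogeneity statistic. Per-study inputs are hypothetical.

smd = np.array([2.1, 3.0, 1.8, 2.6])      # per-study effect sizes
var = np.array([0.20, 0.25, 0.15, 0.30])  # per-study variances

w = 1.0 / var                              # fixed-effect weights
smd_fe = np.sum(w * smd) / np.sum(w)
Q = np.sum(w * (smd - smd_fe) ** 2)        # Cochran's Q
df = len(smd) - 1
I2 = max(0.0, (Q - df) / Q) * 100.0        # heterogeneity (%)

# Between-study variance (DerSimonian-Laird) and random-effects pool
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (var + tau2)
smd_re = np.sum(w_re * smd) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"Pooled SMD = {smd_re:.2f} "
      f"(95% CI {smd_re - 1.96*se_re:.2f} to {smd_re + 1.96*se_re:.2f}), "
      f"I^2 = {I2:.0f}%")
```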
Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform
Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong
2018-01-01
Polynomial phase signals (PPSs) have numerous applications in many fields, including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameter estimation based on the adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which are widespread in the PPS field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by the S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by principal component analysis (PCA), which is robust to noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm. PMID:29438317
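The first step the estimator builds on, instantaneous frequency estimation from a time-frequency distribution, can be sketched with a fixed-window STFT and ridge picking; the adaptive window selection and S-transform refinements of PPS-ASTFT are not reproduced here:

```python
import numpy as np
from scipy.signal import stft

# Minimal sketch: estimate the instantaneous frequency (IF) of a
# polynomial phase signal by peak-picking each slice of a fixed-window
# STFT. The test signal has quadratic phase, so IF(t) = 100 + 200*t Hz.

fs = 1_000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * (100 * t + 100 * t**2)) + 0.1 * np.random.randn(t.size)

f, tau, Z = stft(x, fs=fs, nperseg=128)
if_est = f[np.argmax(np.abs(Z), axis=0)]   # IF = ridge of the spectrogram
print(np.round(if_est[:8], 1))             # should track ~100 + 200*tau
```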
MacDonald, Donald D.; Dipinto, Lisa M.; Field, Jay; Ingersoll, Christopher G.; Long, Edward R.; Swartz, Richard C.
2000-01-01
Sediment-quality guidelines (SQGs) have been published for polychlorinated biphenyls (PCBs) using both empirical and theoretical approaches. Empirically based guidelines have been developed using the screening-level concentration, effects range, effects level, and apparent effects threshold approaches. Theoretically based guidelines have been developed using the equilibrium-partitioning approach. Empirically-based guidelines were classified into three general categories, in accordance with their original narrative intents, and used to develop three consensus-based sediment effect concentrations (SECs) for total PCBs (tPCBs), including a threshold effect concentration, a midrange effect concentration, and an extreme effect concentration. Consensus-based SECs were derived because they estimate the central tendency of the published SQGs and, thus, reconcile the guidance values that have been derived using various approaches. Initially, consensus-based SECs for tPCBs were developed separately for freshwater sediments and for marine and estuarine sediments. Because the respective SECs were statistically similar, the underlying SQGs were subsequently merged and used to formulate more generally applicable SECs. The three consensus-based SECs were then evaluated for reliability using matching sediment chemistry and toxicity data from field studies, dose-response data from spiked-sediment toxicity tests, and SQGs derived from the equilibrium-partitioning approach. The results of this evaluation demonstrated that the consensus-based SECs can accurately predict both the presence and absence of toxicity in field-collected sediments. Importantly, the incidence of toxicity increases incrementally with increasing concentrations of tPCBs. Moreover, the consensus-based SECs are comparable to the chronic toxicity thresholds that have been estimated from dose-response data and equilibrium-partitioning models. Therefore, consensus-based SECs provide a unifying synthesis of existing SQGs, reflect causal rather than correlative effects, and accurately predict sediment toxicity in PCB-contaminated sediments.
Object recognition and localization from 3D point clouds by maximum-likelihood estimation
NASA Astrophysics Data System (ADS)
Dantanarayana, Harshana G.; Huntley, Jonathan M.
2017-08-01
We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike `interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
NASA Astrophysics Data System (ADS)
Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis
2017-08-01
The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
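A minimal sketch of the estimator analyzed above: compute the time-averaged mean-square displacement (TAMSD) of a single trajectory and fit its log-log dependence on lag time. Ordinary Brownian motion is used as the test input, so the fitted exponent should come out near 1:

```python
import numpy as np

# Minimal sketch: TAMSD of one trajectory, then a linear fit of
# log(TAMSD) vs log(lag) to estimate the anomalous scaling exponent.

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=10_000))   # 1-D Brownian test trajectory

lags = np.arange(1, 101)
tamsd = np.array([np.mean((x[k:] - x[:-k]) ** 2) for k in lags])

alpha, log_D = np.polyfit(np.log(lags), np.log(tamsd), 1)
print(f"Estimated scaling exponent: {alpha:.3f}")  # expect ~1 for BM
```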
NASA Astrophysics Data System (ADS)
Taasti, Vicki T.; Michalak, Gregory J.; Hansen, David C.; Deisher, Amanda J.; Kruse, Jon J.; Krauss, Bernhard; Muren, Ludvig P.; Petersen, Jørgen B. B.; McCollough, Cynthia H.
2018-01-01
Dual energy CT (DECT) has been shown, in theoretical and phantom studies, to improve the stopping power ratio (SPR) determination used for proton treatment planning compared to the use of single energy CT (SECT). However, it has not been shown that this also extends to organic tissues. The purpose of this study was therefore to investigate the accuracy of SPR estimation for fresh pork and beef tissue samples used as surrogates of human tissues. The reference SPRs for fourteen tissue samples, which included fat, muscle and femur bone, were measured using proton pencil beams. The tissue samples were subsequently CT scanned using four different scanners with different dual energy acquisition modes, giving in total six DECT-based SPR estimations for each sample. The SPR was estimated using a proprietary algorithm (syngo.via DE Rho/Z Maps, Siemens Healthcare, Forchheim, Germany) for extracting the electron density and the effective atomic number. SECT images were also acquired, and SECT-based SPR estimations were performed using a clinical Hounsfield look-up table. The mean and standard deviation of the SPR over large volumes of interest were calculated. For the six different DECT acquisition methods, the root-mean-square errors (RMSEs) for the SPR estimates over all tissue samples were between 0.9% and 1.5%. For the SECT-based SPR estimation, the RMSE was 2.8%. For one DECT acquisition method, a positive bias was seen in the SPR estimates, with a mean error of 1.3%. The largest errors were found in the very dense cortical bone from a beef femur. This study confirms the advantages of DECT-based SPR estimation, although good results were also obtained using SECT for most tissues.
Design flood hydrograph estimation procedure for small and fully-ungauged basins
NASA Astrophysics Data System (ADS)
Grimaldi, S.; Petroselli, A.
2013-12-01
The Rational Formula is the most widely applied equation in practical hydrology due to its simplicity and its effective compromise between theory and data availability. Although the Rational Formula is affected by several drawbacks, it is reliable and surprisingly accurate considering the paucity of input information. However, after more than a century, recent computational and theoretical advances, together with progress in large-scale monitoring, compel us to suggest a more advanced yet still empirical procedure for estimating peak discharge in small and ungauged basins. In this contribution an alternative empirical procedure (named EBA4SUB - Event Based Approach for Small and Ungauged Basins), based on the common modelling steps of design hyetograph, rainfall excess, and rainfall-runoff transformation, is described. The proposed approach, carefully adapted to the fully-ungauged basin condition, provides a potentially better estimate of the peak discharge, a design hydrograph shape, and, most importantly, reduces the subjectivity of the hydrologist in its application.
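For reference, the Rational Formula that EBA4SUB aims to improve upon is a one-line computation. A minimal sketch in SI-friendly units, where the coefficient 0.278 converts mm/h and km² to m³/s; the input values are illustrative:

```python
# Minimal sketch of the Rational Formula: Q = 0.278 * C * i * A, with
# runoff coefficient C (dimensionless), rainfall intensity i in mm/h,
# drainage area A in km^2, and peak discharge Q in m^3/s.

C = 0.45        # runoff coefficient (assumed land-cover value)
i_mm_h = 60.0   # design rainfall intensity at the time of concentration
A_km2 = 12.0    # drainage area

Q_peak = 0.278 * C * i_mm_h * A_km2
print(f"Rational Formula peak discharge: {Q_peak:.1f} m^3/s")
```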
A quantitative investigation of the fracture pump-in/flowback test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plahn, S.V.; Nolte, K.G.; Miska, S.
1995-12-31
Fracture closure pressure is an important parameter for fracture treatment design and evaluation. The pump-in/flowback (PIFB) test is frequently used to estimate its magnitude. The test is attractive because bottomhole pressures during flowback develop a distinct and repeatable signature. This is in contrast to the pump-in/shut-in test, where strong indications of fracture closure are rarely seen. Various techniques exist for extracting closure pressure from the flowback pressure response. Unfortunately, these procedures give different estimates for closure pressure and their theoretical bases are not well established. We present results that place the PIFB test on a more solid foundation. A numerical model is used to simulate the PIFB test and glean the physical mechanisms contributing to the response. Based on our simulation results, we propose an interpretation procedure which gives better estimates for closure pressure than existing techniques.
Thompson, J K; Dolce, J J
1989-05-01
Thirty-two asymptomatic college females were assessed on multiple aspects of body image. Subjects' estimation of the size of three body sites (waist, hips, thighs) was affected by instructional protocol. Emotional ratings, based on how they "felt" about their body, elicited ratings that were larger than actual and ideal size measures. Size ratings based on rational instructions were no different from actual sizes, but were larger than ideal ratings. There were no differences between actual and ideal sizes. The results are discussed with regard to methodological issues involved in body image research. In addition, a working hypothesis that differentiates affective/emotional from cognitive/rational aspects of body size estimation is offered to complement current theories of body image. Implications of the findings for the understanding of body image and its relationship to eating disorders are discussed.
Model-Based Estimation of Knee Stiffness
Pfeifer, Serge; Vallery, Heike; Hardegger, Michael; Riener, Robert; Perreault, Eric J.
2013-01-01
During natural locomotion, the stiffness of the human knee is modulated continuously and subconsciously according to the demands of activity and terrain. Given modern actuator technology, powered transfemoral prostheses could theoretically provide a similar degree of sophistication and function. However, experimentally quantifying knee stiffness modulation during natural gait is challenging. Alternatively, joint stiffness could be estimated in a less disruptive manner using electromyography (EMG) combined with kinetic and kinematic measurements to estimate muscle force, together with models that relate muscle force to stiffness. Here we present the first step in that process, where we develop such an approach and evaluate it in isometric conditions, where experimental measurements are more feasible. Our EMG-guided modeling approach allows us to consider conditions with antagonistic muscle activation, a phenomenon commonly observed in physiological gait. Our validation shows that model-based estimates of knee joint stiffness coincide well with experimental data obtained using conventional perturbation techniques. We conclude that knee stiffness can be accurately estimated in isometric conditions without applying perturbations, which presents an important step towards our ultimate goal of quantifying knee stiffness during gait. PMID:22801482
NASA Astrophysics Data System (ADS)
Melchert, O.; Hartmann, A. K.
2015-02-01
In this work we consider information-theoretic observables to analyze short symbolic sequences, comprising time series that represent the orientation of a single spin in a two-dimensional (2D) Ising ferromagnet on a square lattice of size L² = 128² for different system temperatures T. The latter were chosen from an interval enclosing the critical point Tc of the model. At small temperatures the sequences are thus very regular; at high temperatures they are maximally random. In the vicinity of the critical point, nontrivial, long-range correlations appear. Here we implement estimators for the entropy rate, excess entropy (i.e., "complexity"), and multi-information. First, we implement a Lempel-Ziv string-parsing scheme, providing seemingly elaborate entropy rate and multi-information estimates and an approximate estimator for the excess entropy. Furthermore, we apply easy-to-use black-box data-compression utilities, providing approximate estimators only. For comparison, and to yield results for benchmarking purposes, we also implement the information-theoretic observables based on the well-established M-block Shannon entropy, which is more tedious to apply than the first two "algorithmic" entropy estimation procedures. To test how well one can exploit the potential of such data-compression techniques, we aim at detecting the critical point of the 2D Ising ferromagnet. Among the above observables, the multi-information, which is known to exhibit an isolated peak at the critical point, is very easy to replicate by means of both efficient algorithmic entropy estimation procedures. Finally, we assess how well the various algorithmic entropy estimates compare to the more conventional block entropy estimates and illustrate a simple modification that yields enhanced results.
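Two of the compared estimators are easy to reproduce. A minimal sketch of (a) the M-block Shannon entropy rate h = H_M - H_{M-1} and (b) a black-box compression estimate via zlib, applied to a random binary sequence so both should approach 1 bit/symbol:

```python
import math
import zlib
from collections import Counter

import numpy as np

# Minimal sketch of two entropy-rate estimators: the M-block Shannon
# entropy-rate difference and a zlib compression-ratio estimate.

rng = np.random.default_rng(2)
s = rng.integers(0, 2, size=100_000)          # random binary sequence

def block_entropy(seq, m):
    """Shannon entropy (bits) of the empirical m-block distribution."""
    counts = Counter(tuple(seq[i:i + m]) for i in range(len(seq) - m + 1))
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

M = 8
h_block = block_entropy(s, M) - block_entropy(s, M - 1)    # bits/symbol

raw = np.packbits(s).tobytes()
h_zip = 8.0 * len(zlib.compress(raw, 9)) / s.size          # bits/symbol (approx.)

print(f"Block-entropy rate: {h_block:.3f}, zlib estimate: {h_zip:.3f}")
```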
Voga, G P; Coelho, M G; de Lima, G M; Belchior, J C
2011-04-07
In this paper we report experimental and theoretical studies concerning the thermal behavior of some organotin-Ti(IV) oxides employed as precursors for TiO(2)/SnO(2) semiconducting based composites, with photocatalytic properties. The organotin-TiO(2) supported materials were obtained by chemical reactions of SnBu(3)Cl (Bu = butyl), TiCl(4) with NH(4)OH in ethanol, in order to impregnate organotin oxide in a TiO(2) matrix. A theoretical model was developed to support experimental procedures. The kinetics parameters: frequency factor (A), activation energy, and reaction order (n) can be estimated through artificial intelligence methods. Genetic algorithm, fuzzy logic, and Petri neural nets were used in order to determine the kinetic parameters as a function of temperature. With this in mind, three precursors were prepared in order to obtain composites with Sn/TiO(2) ratios of 0% (1), 15% (2), and 30% (3) in weight, respectively. The thermal behavior of products (1-3) was studied by thermogravimetric experiments in oxygen.
NASA Astrophysics Data System (ADS)
Wu, Yun-jie; Li, Guo-fei
2018-01-01
Based on the sliding mode extended state observer (SMESO) technique, an adaptive disturbance-compensation finite control set optimal control (FCS-OC) strategy is proposed for a permanent magnet synchronous motor (PMSM) system driven by a voltage source inverter (VSI). To improve the robustness of the finite control set optimal control strategy, an SMESO is proposed to estimate the output-effect disturbance. The estimated value is fed back to the finite control set optimal controller to implement disturbance compensation. Theoretical analysis indicates that the designed SMESO converges in finite time. The simulation results illustrate that the proposed adaptive disturbance-compensation FCS-OC exhibits better dynamic response behavior in the presence of disturbances.
Blume, A; Brückner-Bozetti, P; Steinert, T
2018-04-20
The aim of this pilot study was to estimate the share of working time that staff in psychiatric hospitals theoretically spend on obligatory activities, such as training and further education, organizational and documentation tasks as well as statutory lecturing duties without patient contact. A total of 47 physicians, 39 nurses, 34 psychologists and 35 social workers from eight psychiatric hospitals were interviewed. The results reveal that the theoretically remaining time for direct patient contact is low. The ratio of time spent with versus time spent without patient contact was even worse for senior physicians and leading nurses as well as part-time employees; however, all activities without direct contact to patients seemed to be indispensable in terms of quality of treatment and care. Hence, employees in German psychiatric hospitals regularly have to make decisions on which of their duties they prefer to neglect, to which they are actually obligated.
Wolf Attack Probability: A Theoretical Security Measure in Biometric Authentication Systems
NASA Astrophysics Data System (ADS)
Une, Masashi; Otsuka, Akira; Imai, Hideki
This paper proposes the wolf attack probability (WAP) as a new measure for evaluating the security of biometric authentication systems. The wolf attack is an attempt to impersonate a victim by feeding “wolves” into the system under attack. A “wolf” is an input value that can be falsely accepted as a match with multiple templates. WAP is defined as the maximum success probability of the wolf attack with one wolf sample. In this paper, we give a rigorous definition of the new security measure, which estimates the strength of an individual biometric authentication system against impersonation attacks. We show that if one re-estimates security using our WAP measure, a typical fingerprint algorithm turns out to be much weaker than theoretically estimated by Ratha et al. Moreover, we apply the wolf attack to a finger-vein-pattern-based algorithm. Surprisingly, we show that there exists an extremely strong wolf which falsely matches all templates for any threshold value.
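The WAP definition translates directly into a brute-force empirical procedure: score every candidate input against all enrolled templates and take the largest false-match fraction. A minimal sketch with a toy distance-threshold matcher and random feature vectors standing in for a real biometric algorithm:

```python
import numpy as np

# Minimal sketch of an empirical wolf attack probability (WAP): for
# each candidate "wolf", count the fraction of enrolled templates it
# falsely matches, then take the maximum over candidates. The matcher
# and feature vectors are toy stand-ins, not a real biometric system.

rng = np.random.default_rng(3)
templates = rng.normal(size=(1_000, 16))    # enrolled users (hypothetical)
candidates = rng.normal(size=(5_000, 16))   # wolf candidates to try
threshold = 5.0                             # match if distance < threshold

def match_rate(wolf):
    d = np.linalg.norm(templates - wolf, axis=1)
    return np.mean(d < threshold)

wap = max(match_rate(w) for w in candidates)
print(f"Empirical WAP over the candidate set: {wap:.4f}")
```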
Analysis of redox additive-based overcharge protection for rechargeable lithium batteries
NASA Technical Reports Server (NTRS)
Narayanan, S. R.; Surampudi, S.; Attia, A. I.; Bankston, C. P.
1991-01-01
The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, and standard reduction potential of the redox couple, and the interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. Digital simulation of the overcharge experiment leads to a numerical representation of the potential transients, and an estimate of the influence of the diffusion coefficient and interelectrode distance on the transient attainment of the steady state during overcharge. The model has been experimentally verified using 1,1'-dimethylferrocene as a redox additive. The analysis of the experimental results in terms of the theory allows the calculation of the diffusion coefficient and the formal potential of the redox couple. The model and the theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
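The steady-state result of such a finite linear diffusion analysis is that the maximum sustainable overcharge current density is the shuttle's limiting current, i_max = nFDC/d. A minimal sketch with illustrative parameter values, not those of the paper's cell:

```python
# Minimal sketch of the steady-state limiting current density for a
# redox shuttle under finite linear diffusion: i_max = n*F*D*C/d.
# All parameter values below are illustrative assumptions.

F = 96_485.0   # Faraday constant, C/mol
n = 1          # electrons transferred per shuttle molecule
D = 2.0e-6     # diffusion coefficient, cm^2/s (assumed)
C = 5.0e-5     # additive concentration, mol/cm^3 (i.e., 0.05 M)
d = 0.005      # interelectrode distance, cm (50 um)

i_max = n * F * D * C / d                     # A/cm^2
print(f"Maximum overcharge current density: {i_max * 1e3:.2f} mA/cm^2")
```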
Phan, Hoang Vu; Park, Hoon Cheol
2018-04-18
Studies on wing kinematics indicate that flapping insect wings operate at higher angles of attack (AoAs) than conventional rotary wings. Thus, effectively flying an insect-like flapping-wing micro air vehicle (FW-MAV) requires appropriate wing design for achieving low power consumption and high force generation. Even though theoretical studies can be performed to identify appropriate geometric AoAs for a wing for achieving efficient hovering flight, designing an actual wing by implementing these angles into a real flying robot is challenging. In this work, we investigated the wing morphology of an insect-like tailless FW-MAV, which was named KUBeetle, for obtaining high vertical force/power ratio or power loading. Several deformable wing configurations with various vein structures were designed, and their characteristics of vertical force generation and power requirement were theoretically and experimentally investigated. The results of the theoretical study based on the unsteady blade element theory (UBET) were validated with reference data to prove the accuracy of power estimation. A good agreement between estimated and measured results indicated that the proposed UBET model can be used to effectively estimate the power requirement and force generation of an FW-MAV. Among the investigated wing configurations operating at flapping frequencies of 23 Hz to 29 Hz, estimated results showed that the wing with a suitable vein placed outboard exhibited an increase of approximately 23.7% ± 0.5% in vertical force and approximately 10.2% ± 1.0% in force/power ratio. The estimation was supported by experimental results, which showed that the suggested wing enhanced vertical force by approximately 21.8% ± 3.6% and force/power ratio by 6.8% ± 1.6%. In addition, wing kinematics during flapping motion was analyzed to determine the reason for the observed improvement.
Grieger, Jessica A; Johnson, Brittany J; Wycherley, Thomas P; Golley, Rebecca K
2017-05-01
Background: Dietary simulation modeling can predict dietary strategies that may improve nutritional or health outcomes. Objectives: The study aims were to undertake a systematic review of simulation studies that model dietary strategies aiming to improve nutritional intake, body weight, and related chronic disease, and to assess the methodologic and reporting quality of these models. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guided the search strategy with studies located through electronic searches [Cochrane Library, Ovid (MEDLINE and Embase), EBSCOhost (CINAHL), and Scopus]. Study findings were described and dietary modeling methodology and reporting quality were critiqued by using a set of quality criteria adapted for dietary modeling from general modeling guidelines. Results: Forty-five studies were included and categorized as modeling moderation, substitution, reformulation, or promotion dietary strategies. Moderation and reformulation strategies targeted individual nutrients or foods to theoretically improve one particular nutrient or health outcome, estimating small to modest improvements. Substituting unhealthy foods with healthier choices was estimated to be effective across a range of nutrients, including an estimated reduction in intake of saturated fatty acids, sodium, and added sugar. Promotion of fruits and vegetables predicted marginal changes in intake. Overall, the quality of the studies was moderate to high, with certain features of the quality criteria consistently reported. Conclusions: Based on the results of reviewed simulation dietary modeling studies, targeting a variety of foods rather than individual foods or nutrients theoretically appears most effective in estimating improvements in nutritional intake, particularly reducing intake of nutrients commonly consumed in excess. A combination of strategies could theoretically be used to deliver the best improvement in outcomes. Study quality was moderate to high. However, given the lack of dietary simulation reporting guidelines, future work could refine the quality tool to harmonize consistency in the reporting of subsequent dietary modeling studies. © 2017 American Society for Nutrition.
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-standing unsolved issue resulting in inconsistent usage and endless debate. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes a prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated within this framework, by both simulations and analysis of real resting-state EEGs. Toward this end, we leverage MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Artificial EEGs generated with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. They also reveal that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that selection of the regularization parameter with generalized cross-validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting-state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluations of performance.
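The simplest of the contenders above, the average reference, is a one-line operation. A minimal sketch on random data; REST and the regularized variants additionally require a (possibly average) lead field and are not reproduced here:

```python
import numpy as np

# Minimal sketch of the average reference (AR): subtract the
# instantaneous mean across channels from every channel.

rng = np.random.default_rng(5)
eeg = rng.normal(size=(64, 1_000))             # channels x samples, arbitrary units

eeg_ar = eeg - eeg.mean(axis=0, keepdims=True)
print(np.allclose(eeg_ar.mean(axis=0), 0.0))   # AR data sum to zero across channels
```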
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1994-01-01
This paper describes a multichannel physical approach for retrieving rainfall and vertical structure information from satellite-based passive microwave observations. The algorithm makes use of statistical inversion techniques based upon theoretically calculated relations between rainfall rates and brightness temperatures. Potential errors introduced into the theoretical calculations by the unknown vertical distribution of hydrometeors are overcome by explicitly accounting for diverse hydrometeor profiles. This is accomplished by allowing for a number of different vertical distributions in the theoretical brightness temperature calculations and requiring consistency between the observed and calculated brightness temperatures. This paper will focus primarily on the theoretical aspects of the retrieval algorithm, which includes a procedure used to account for inhomogeneities of the rainfall within the satellite field of view as well as a detailed description of the algorithm as it is applied over both ocean and land surfaces. The residual error between observed and calculated brightness temperatures is found to be an important quantity in assessing the uniqueness of the solution. It is further found that the residual error is a meaningful quantity that can be used to derive expected accuracies from this retrieval technique. Examples comparing the retrieved results as well as the detailed analysis of the algorithm performance under various circumstances are the subject of a companion paper.
Multifractals embedded in short time series: An unbiased estimation of probability moment
NASA Astrophysics Data System (ADS)
Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie
2016-12-01
An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory, we propose a new method called factorial-moment-based estimation of probability moments. Theoretical predictions and computational results show that it can provide an unbiased estimation of probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series of the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools displays significant advantages of its performance over the other methods. The factorial-moment-based estimation can correctly evaluate scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimate of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order has various uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.
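For integer order q the factorial-moment idea can be sketched exactly: with counts n_i of N samples falling in box i, the normalized falling factorial is an unbiased estimator of Σ p_i^q. The continuous-order interpolation of the paper is not reproduced, and the box probabilities here are random:

```python
import numpy as np

# Minimal sketch of factorial-moment-based estimation for integer q:
# sum_i n_i(n_i-1)...(n_i-q+1) / [N(N-1)...(N-q+1)] is an unbiased
# estimator of the probability moment sum_i p_i^q under multinomial
# sampling.

def falling(x, q):
    """Falling factorial x(x-1)...(x-q+1), elementwise."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    for k in range(q):
        out *= x - k
    return out

rng = np.random.default_rng(6)
p = rng.dirichlet(np.ones(50))          # hidden box probabilities
N = 500
counts = rng.multinomial(N, p)          # observed counts per box

q = 3
moment_hat = float(falling(counts, q).sum() / falling(N, q))
print(f"Estimated sum p_i^{q}: {moment_hat:.4e}, true: {(p**q).sum():.4e}")
```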
Estimating population ecology models for the WWW market: evidence of competitive oligopolies.
de Cabo, Ruth Mateos; Gimeno, Ricardo
2013-01-01
This paper proposes adapting a particle filtering algorithm to model online Spanish real estate and job search market segments based on the Lotka-Volterra competition equations. For this purpose the authors use data on Internet information searches from Google Trends as a proxy for market share. Estimated market-share evolution is consistent with that observed in Google Trends. The results show evidence of low website incompatibility in the markets analyzed. Competitive oligopolies are most common in such low-competition markets, instead of the monopolies predicted by theoretical ecology models under strong competition conditions.
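The underlying two-species Lotka-Volterra competition dynamics can be sketched with a few Euler steps; the growth rates, carrying capacities, and competition coefficients below are illustrative, not values fitted to Google Trends. With competition coefficients whose product is below 1, the system settles into coexistence, the "competitive oligopoly" outcome the paper reports:

```python
import numpy as np

# Minimal sketch of two-species Lotka-Volterra competition,
# dx_i/dt = r_i * x_i * (1 - sum_j a_ij x_j / K_i), integrated with
# simple Euler steps. All parameter values are illustrative.

r = np.array([0.8, 0.6])       # intrinsic growth rates
K = np.array([100.0, 80.0])    # carrying capacities
a = np.array([[1.0, 0.4],      # a[i][j]: effect of species j on i
              [0.7, 1.0]])

x = np.array([5.0, 5.0])       # initial "market shares"
dt = 0.01
for _ in range(50_000):
    x = x + dt * r * x * (1.0 - (a @ x) / K)
print(f"Equilibrium shares: {x.round(2)}")   # coexistence: both stay positive
```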
Computation of transmitted and received B1 fields in magnetic resonance imaging.
Milles, Julien; Zhu, Yue Min; Chen, Nan-Kuei; Panych, Lawrence P; Gimenez, Gérard; Guttmann, Charles R G
2006-05-01
Computation of B1 fields is a key issue for determination and correction of intensity nonuniformity in magnetic resonance images. This paper presents a new method for computing transmitted and received B1 fields. Our method combines a modified MRI acquisition protocol and an estimation technique based on the Levenberg-Marquardt algorithm and spatial filtering. It enables accurate estimation of transmitted and received B1 fields for both homogeneous and heterogeneous objects. The method is validated using numerical simulations and experimental data from phantom and human scans. The experimental results are in agreement with theoretical expectations.
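As an illustration of transmit-B1 estimation, here is a minimal sketch of the standard double-angle method rather than the paper's Levenberg-Marquardt protocol: from two acquisitions at nominal flip angles α and 2α, the signal ratio gives S2/(2·S1) = cos(actual α):

```python
import numpy as np

# Minimal sketch of double-angle transmit-B1 mapping (a standard
# technique, not the protocol of the paper above): S1 ~ sin(a),
# S2 ~ sin(2a) = 2 sin(a) cos(a), so a = arccos(S2 / (2*S1)).

alpha_nominal = np.deg2rad(60.0)
b1_true = 1.15                       # hypothetical transmit-field scale
a = b1_true * alpha_nominal          # actual flip angle at this "pixel"

S1, S2 = np.sin(a), np.sin(2 * a)    # noiseless toy signals
a_est = np.arccos(S2 / (2 * S1))
print(f"Estimated B1 scale: {a_est / alpha_nominal:.3f}")   # expect ~1.150
```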
NASA Astrophysics Data System (ADS)
Andrianov, V. M.; Korolevich, M. V.
2015-09-01
Normal vibrational frequencies and absolute IR band intensities of the biologically active steroid phytohormones homobrassinolide and (22S,23S)-homobrassinolide were calculated in the framework of an original approach that combined classical analysis of normal modes using molecular mechanics with quantum-chemical estimation of the absolute intensities. IR absorption bands were interpreted based on a comparison of the experimental and theoretical absorption spectra. The impact of structural differences in the side chains of these molecules on the formation of their IR spectra in the 1500-950 cm⁻¹ region was estimated.
NASA Astrophysics Data System (ADS)
Guo, Shu-Juan; Fu, Xin-Chu
2010-07-01
In this paper, by applying Lasalle's invariance principle and some results about the trace of a matrix, we propose a method for estimating the topological structure of a discrete dynamical network based on the dynamical evolution of the network. The network concerned can be directed or undirected, weighted or unweighted, and the local dynamics of each node can be nonidentical. The connections among the nodes can be all unknown or partially known. Finally, two examples, including a Hénon map and a central network, are illustrated to verify the theoretical results.
Sapra, Mahak; Ugrani, Suraj; Mayya, Y S; Venkataraman, Chandra
2017-08-15
Air-jet atomization of solution into droplets followed by controlled drying is increasingly being used for producing nanoparticles for drug delivery applications. Nanoparticle size is an important parameter that influences the stability, bioavailability and efficacy of the drug. In the air-jet atomization technique, dry particle diameters are generally predicted by using solute diffusion models involving the key concept of the critical supersaturation solubility ratio (Sc), which dictates the point of crust formation within the droplet. As no reliable method exists to determine this quantity, the present study proposes an aerosol-based method to determine Sc for a given solute-solvent system and process conditions. The feasibility has been demonstrated by conducting experiments for stearic acid in ethanol and chloroform as well as for the anti-tubercular drug isoniazid in ethanol. Sc values were estimated by combining the experimentally observed particle and droplet diameters with simulations from a solute diffusion model. Important findings of the study were: (i) the measured droplet diameters systematically decreased with increasing precursor concentration; (ii) estimated Sc values were 9.3±0.7, 13.3±2.4 and 18±0.8 for stearic acid in chloroform, stearic acid in ethanol and isoniazid in ethanol, respectively; (iii) experimental results pointed at the correct interfacial tension pre-factor to be used in theoretical estimates of Sc; and (iv) results showed consistent evidence for the existence of an induction time delay between the attainment of the theoretical Sc and crust formation. The proposed approach has been validated by testing its predictive power for a challenge concentration against experimental data. The study not only advances the spray-drying technique by establishing an aerosol-based approach to determine Sc, but also throws considerable light on the interfacial processes responsible for solid-phase formation in a rapidly supersaturating system. Until satisfactory theoretical formulae for predicting Sc are developed, the present approach appears to offer the best option for engineering nanoparticle size through solute diffusion models. Copyright © 2017 Elsevier Inc. All rights reserved.
Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases
NASA Astrophysics Data System (ADS)
Pezzè, Luca; Ciampini, Mario A.; Spagnolo, Nicolò; Humphreys, Peter C.; Datta, Animesh; Walmsley, Ian A.; Barbieri, Marco; Sciarrino, Fabio; Smerzi, Augusto
2017-09-01
A quantum theory of multiphase estimation is crucial for quantum-enhanced sensing and imaging and may link quantum metrology to more complex quantum computation and communication protocols. In this Letter, we tackle one of the key difficulties of multiphase estimation: obtaining a measurement which saturates the fundamental sensitivity bounds. We derive necessary and sufficient conditions for projective measurements acting on pure states to saturate the ultimate theoretical bound on precision given by the quantum Fisher information matrix. We apply our theory to the specific example of interferometric phase estimation using photon number measurements, a convenient choice in the laboratory. Our results thus introduce concepts and methods relevant to the future theoretical and experimental development of multiparameter estimation.
Frezzato, Diego; Saielli, Giacomo
2016-03-10
We have investigated the structural and dynamic properties of Xe dissolved in the ionic liquid crystal (ILC) phase of 1-hexadecyl-3-methylimidazolium nitrate using classical molecular dynamics (MD) simulations. Xe is found to be preferentially dissolved within the hydrophobic environment of the alkyl chains rather than in the ionic layers of the smectic phase. The structural parameters and the estimated local diffusion coefficients concerning the short-time motion of Xe are used to parametrize a theoretical model based on the Smoluchowski equation for the macroscopic dynamics across the smectic layers, a feature which cannot be directly obtained from the relatively short MD simulations. This protocol represents an efficient combination of computational and theoretical tools to obtain information on slow processes concerning the permeability and diffusivity of the xenon in smectic ILCs.
Pittmann, T; Steinmetz, H
2016-08-01
Biopolymers, which are made of renewable raw materials and/or biodegradable residual materials, present a possible alternative to common plastics. A potential analysis, based on experimental results at laboratory scale and detailed data from German waste water treatment plants, showed that the theoretically possible production of biopolymers in Germany amounts to more than 20% of the 2015 worldwide biopolymer production. In addition, a profound estimation regarding all European Union member states showed that theoretically about 115% of the actual worldwide biopolymer production could be produced on European waste water treatment plants. With an upgraded biopolymer production and a theoretically reachable biopolymer proportion of around 60% of the cell dry weight, a total of 1,794,656 t PHA per year, or approximately 236% of today's biopolymer production, could be produced on waste water treatment plants in the European Union, using primary sludge as the raw material only. Copyright © 2016 Elsevier Ltd. All rights reserved.
The application of mean field theory to image motion estimation.
Zhang, J; Hanauer, G G
1995-01-01
Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterated conditional modes (ICM). Although SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors applied mean field theory to image segmentation and image restoration problems; it provides results nearly as good as SA but with much faster convergence. The present paper shows how mean field theory can be applied to MRF model-based motion estimation. The approach is demonstrated on both synthetic and real-world images, where it produces good motion estimates.
Ocean subsurface particulate backscatter estimation from CALIPSO spaceborne lidar measurements
NASA Astrophysics Data System (ADS)
Chen, Peng; Pan, Delu; Wang, Tianyu; Mao, Zhihua
2017-10-01
A method for ocean subsurface particulate backscatter estimation from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite was demonstrated. The effects of the CALIOP receiver's transient response on the attenuated backscatter profile were first removed. The two-way transmittance of the overlying atmosphere was then estimated as the ratio of the measured ocean surface attenuated backscatter to the theoretical value computed from wind-driven wave slope variance. Finally, particulate backscatter was estimated from the depolarization ratio, defined as the ratio of the column-integrated cross-polarized and co-polarized channels. Statistical results show that the particulate backscatter derived from CALIOP data with this method agrees reasonably well with chlorophyll-a concentration from MODIS data. This indicates the potential of spaceborne lidar to estimate global primary productivity and particulate carbon stock.
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data
Cai, T. Tony; Zhang, Anru
2016-01-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
Theory, method and application of Method R for the estimation of (co)variance components were reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated using the complete dataset on predicted random effects calculated using random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used in larger datasets. It is necessary to study its theoretical properties and broaden its application range further.
Milewski, Mikolaj; Stinchcomb, Audra L.
2012-01-01
An ability to estimate the maximum flux of a xenobiotic across skin is desirable both from the perspective of drug delivery and toxicology. While there is an abundance of mathematical models describing the estimation of drug permeability coefficients, there are relatively few that focus on the maximum flux. This article reports and evaluates a simple and easy-to-use predictive model for the estimation of maximum transdermal flux of xenobiotics based on three common molecular descriptors: logarithm of octanol-water partition coefficient, molecular weight and melting point. The use of all three can be justified on the theoretical basis of their influence on the solute aqueous solubility and the partitioning into the stratum corneum lipid domain. The model explains 81% of the variability in the permeation dataset comprised of 208 entries and can be used to obtain a quick estimate of maximum transdermal flux when experimental data is not readily available. PMID:22702370
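As a sketch of the kind of three-descriptor model evaluated here, the maximum flux can be regressed on the three descriptors. The coefficients produced below are fitted to made-up data rows for illustration only; they are not the values fitted to the 208-entry dataset.

    import numpy as np

    # log(Jmax) modeled as a linear function of logP, molecular weight (MW)
    # and melting point (MP); beta holds [intercept, logP, MW, MP] terms.
    X = np.array([[1.0, 2.5, 300.0, 120.0],
                  [1.0, 1.1, 180.0,  60.0],
                  [1.0, 3.8, 420.0, 150.0],
                  [1.0, 0.4, 150.0,  90.0]])
    y = np.array([-1.8, -0.9, -2.6, -1.2])      # observed log Jmax (made up)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    def log_jmax(logp, mw, mp_c):
        return beta @ np.array([1.0, logp, mw, mp_c])

    print(log_jmax(2.0, 250.0, 100.0))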
Research on bathymetry estimation by Worldview-2 based with the semi-analytical model
NASA Astrophysics Data System (ADS)
Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.
2015-04-01
The South Sea Islands of China are far from the mainland; reefs occupy more than 95% of the South China Sea, and most reefs are scattered over a disputed, sensitive area of interest. Thus, methods for accurately obtaining reef bathymetry urgently need to be developed. Commonly used methods, including sonar, airborne laser and remote sensing estimation, are limited by the long distances, large areas and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, through the relationship between spectral information and water depth. Aimed at the water conditions of the South China Sea, this paper develops a bathymetry estimation method that requires no measured water depths. First, the semi-analytical optimization model of the theoretical interpretation models is studied, using a genetic algorithm to optimize the model. Meanwhile, an OpenMP parallel computing algorithm is introduced to greatly increase the speed of the semi-analytical optimization model. One island in the South China Sea is selected as the study area, and measured water depths are used to evaluate the accuracy of the bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The semi-analytical optimization model based on the genetic algorithm solves the problem of bathymetry estimation without water depth measurements. Overall, this paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.
1977-01-01
Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practical achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS 2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, the theoretical and data analysis of which was documented in some 30 separate reports over the last 6 years. An overview of project results is presented.
Wessells, K. Ryan; Singh, Gitanjali M.; Brown, Kenneth H.
2012-01-01
Background The prevalence of inadequate zinc intake in a population can be estimated by comparing the zinc content of the food supply with the population’s theoretical requirement for zinc. However, assumptions regarding the nutrient composition of foods, zinc requirements, and zinc absorption may affect prevalence estimates. These analyses were conducted to: (1) evaluate the effect of varying methodological assumptions on country-specific estimates of the prevalence of dietary zinc inadequacy and (2) generate a model considered to provide the best estimates. Methodology and Principal Findings National food balance data were obtained from the Food and Agriculture Organization of the United Nations. Zinc and phytate contents of these foods were estimated from three nutrient composition databases. Zinc absorption was predicted using a mathematical model (Miller equation). Theoretical mean daily per capita physiological and dietary requirements for zinc were calculated using recommendations from the Food and Nutrition Board of the Institute of Medicine and the International Zinc Nutrition Consultative Group. The estimated global prevalence of inadequate zinc intake varied between 12–66%, depending on which methodological assumptions were applied. However, country-specific rank order of the estimated prevalence of inadequate intake was conserved across all models (r = 0.57–0.99, P<0.01). A “best-estimate” model, comprised of zinc and phytate data from a composite nutrient database and IZiNCG physiological requirements for absorbed zinc, estimated the global prevalence of inadequate zinc intake to be 17.3%. Conclusions and Significance Given the multiple sources of uncertainty in this method, caution must be taken in the interpretation of the estimated prevalence figures. However, the results of all models indicate that inadequate zinc intake may be fairly common globally. Inferences regarding the relative likelihood of zinc deficiency as a public health problem in different countries can be drawn based on the country-specific rank order of estimated prevalence of inadequate zinc intake. PMID:23209781
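The core prevalence calculation can be sketched with the usual cut-point logic: compare the absorption-adjusted per capita zinc supply against the theoretical mean requirement, assuming approximately normal intakes with a chosen inter-individual coefficient of variation. The 25% CV and the numbers below are assumptions for illustration, not values from the paper.

    from scipy.stats import norm

    def prevalence_inadequate(mean_intake, mean_requirement, cv=0.25):
        """Fraction of the population with intake below the requirement."""
        sd = cv * mean_intake
        return norm.cdf((mean_requirement - mean_intake) / sd)

    # e.g., absorbable zinc supply 10.5 mg/d vs requirement 8.2 mg/d:
    print(prevalence_inadequate(10.5, 8.2))   # ~0.19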
Anharmonic quantum contribution to vibrational dephasing.
Barik, Debashis; Ray, Deb Shankar
2004-07-22
Based on a quantum Langevin equation and its corresponding Hamiltonian within a c-number formalism, we calculate the vibrational dephasing rate of a cubic oscillator. It is shown that the leading-order quantum correction due to the anharmonicity of the potential makes a significant contribution to the rate and the frequency shift. We compare our theoretical estimates with those obtained from experiments for the small diatomics N2, O2, and CO.
ERIC Educational Resources Information Center
Lee, Wan-Fung; Bulcock, Jeffrey Wilson
The purposes of this study are: (1) to demonstrate the superiority of simple ridge regression over ordinary least squares regression through theoretical argument and empirical example; (2) to modify ridge regression through use of the variance normalization criterion; and (3) to demonstrate the superiority of simple ridge regression based on the…
ERIC Educational Resources Information Center
Frye, Victoria; Bonner, Sebastian; Williams, Kim; Henny, Kirk; Bond, Keosha; Lucy, Debbie; Cupid, Malik; Smith, Stephen; Koblin, Beryl A.
2012-01-01
In the United States, racial disparities in HIV/AIDS are stark. Although African Americans comprise an estimated 14% of the U.S. population, they made up 52% of new HIV cases among adults and adolescents diagnosed in 2009. Heterosexual transmission is now the second leading cause of HIV in the United States. African Americans made up a full…
Theory based scaling of edge turbulence and implications for the scrape-off layer width
NASA Astrophysics Data System (ADS)
Myra, J. R.; Russell, D. A.; Zweben, S. J.
2016-11-01
Turbulence and plasma parameter data from the National Spherical Torus Experiment (NSTX) [Ono et al., Nucl. Fusion 40, 557 (2000)] is examined and interpreted based on various theoretical estimates. In particular, quantities of interest for assessing the role of turbulent transport on the midplane scrape-off layer heat flux width are assessed. Because most turbulence quantities exhibit large scatter and little scaling within a given operation mode, this paper focuses on length and time scales and dimensionless parameters between operational modes including Ohmic, low (L), and high (H) modes using a large NSTX edge turbulence database [Zweben et al., Nucl. Fusion 55, 093035 (2015)]. These are compared with theoretical estimates for drift and interchange rates, profile modification saturation levels, a resistive ballooning condition, and dimensionless parameters characterizing L and H mode conditions. It is argued that the underlying instability physics governing edge turbulence in different operational modes is, in fact, similar, and is consistent with curvature-driven drift ballooning. Saturation physics, however, is dependent on the operational mode. Five dimensionless parameters for drift-interchange turbulence are obtained and employed to assess the importance of turbulence in setting the scrape-off layer heat flux width λq and its scaling. An explicit proportionality of the width λq to the safety factor and major radius (qR) is obtained under these conditions. Quantitative estimates and reduced model numerical simulations suggest that the turbulence mechanism is not negligible in determining λq in NSTX, at least for high plasma current discharges.
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization (MLEM) algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
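For orientation, the scalar MLEM update that NIBEM generalizes is sketched below; NIBEM replaces the single-valued forward projections A @ x with intervals from the non-additive projector. The toy system matrix and data are illustrative.

    import numpy as np

    def mlem(A, y, n_iter=100):
        """Classical MLEM: x <- x * A^T(y / Ax) / A^T 1."""
        x = np.ones(A.shape[1])               # uniform initial activity
        sens = A.sum(axis=0)                  # sensitivity image A^T 1
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x

    A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])   # toy projector
    y = A @ np.array([2.0, 3.0])                          # noiseless data
    print(mlem(A, y))                                     # -> approx [2, 3]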
Long-term radon concentrations estimated from 210Po embedded in glass
Lively, R.S.; Steck, D.J.
1993-01-01
Measured surface-alpha activity on glass exposed in radon chambers and houses has a linear correlation to the integrated radon exposure. Experimental results in chambers and houses have been obtained on glass exposed to radon concentrations between 100 Bq m-3 and 9 MBq m-3 for periods of a few days to several years. Theoretical calculations support the experimental results through a model that predicts the fractions of airborne activity that deposit and become embedded or adsorbed. The combination of measured activity and calculated embedded fraction for a given deposition environment can be applied to most indoor areas and produces a better estimate for lifetime radon exposure than estimates based on short-term indoor radon measurements.
NASA Astrophysics Data System (ADS)
Sichevskij, S. G.
2018-01-01
The feasibility of determining the physical conditions in a star's atmosphere and the parameters of interstellar extinction from broad-band photometric observations in the 300-3000 nm wavelength interval is studied using SDSS and 2MASS data. The photometric accuracy of these surveys is shown to be insufficient for achieving in practice the theoretical possibility of estimating the atmospheric parameters of stars based on ugriz and JHKs photometry exclusively, because such determinations result in correlations between the temperature and extinction estimates. The uncertainty of interstellar extinction estimates can be reduced if prior data about the temperature are available. The surveys considered can nevertheless be potentially valuable sources of information about both stellar atmospheric parameters and the interstellar medium.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Menichetti, Julia; Graffigna, Guendalina
2016-01-01
The increasing prevalence of chronic conditions among older adults constitutes a major public health problem. Thus, changes in lifestyles are required to prevent secondary conditions and sustain good care practices. While patient engagement has received great attention in recent years as a key strategy to address this issue, to date no interventions exist to sustain the engagement of older chronic patients in their health management. This study describes the design, development, and optimization of PHEinAction, a theoretically-driven intervention program to increase patient engagement in older chronic populations and consequently to foster healthy changes that can help reduce risks of health problems. The development process followed the UK Medical Research Council's (MRC) guidelines and involved selecting the theoretical base for the intervention, identifying the relevant evidence-based literature, and conducting exploratory research to qualitatively evaluate the program's feasibility, acceptability, and comprehension. The result was a user-endorsed intervention designed to improve older patients' engagement in health management, based on the theoretical framework of the Patient Health Engagement (PHE) model. The intervention program that emerged from this process consisted of two monthly face-to-face 1-h sessions delivered by a trained facilitator and one brief telephone consultation, and aimed to facilitate a range of changes for patient engagement (e.g., motivation to change, health information seeking and use, emotional adjustment, health behaviors planning). PHEinAction is the first example of a theoretically-based patient engagement intervention designed for older chronic patients. The program is based on psychological theory and evidence; it facilitates emotional, psychological, and behavioral processes to support patient engagement and lifestyle change and maintenance. It provides estimates of the extent to which it could help high-risk groups engage in effective health management and informs future trials.
Algorithm theoretical basis for GEDI level-4A footprint above ground biomass density.
NASA Astrophysics Data System (ADS)
Kellner, J. R.; Armston, J.; Blair, J. B.; Duncanson, L.; Hancock, S.; Hofton, M. A.; Luthcke, S. B.; Marselis, S.; Tang, H.; Dubayah, R.
2017-12-01
The Global Ecosystem Dynamics Investigation is a NASA Earth-Venture-2 mission that will place a multi-beam waveform lidar instrument on the International Space Station. GEDI data will provide globally representative measurements of vertical height profiles (waveforms) and estimates of above ground carbon stocks throughout the planet's temperate and tropical regions. Here we describe the current algorithm theoretical basis for the L4A footprint above ground biomass data product. The L4A data product is above ground biomass density (AGBD, Mg · ha-1) at the scale of individual GEDI footprints (25 m diameter). Footprint AGBD is derived from statistical models that relate waveform height metrics to field-estimated above ground biomass. The field estimates are from long-term permanent plot inventories in which all free-standing woody plants greater than a diameter size threshold have been identified and mapped. We simulated GEDI waveforms from discrete-return airborne lidar data using the GEDI waveform simulator. We associated height metrics from simulated waveforms with field-estimated AGBD at 61 sites in temperate and tropical regions of North and South America, Europe, Africa, Asia and Australia. We evaluated the ability of empirical and physically-based regression and machine learning models to predict AGBD at the footprint level. Our analysis benchmarks the performance of these models in terms of site and region-specific accuracy and transferability using a globally comprehensive calibration and validation dataset.
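The footprint-level calibration step amounts to regressing field-estimated AGBD on waveform relative-height (RH) metrics. A minimal sketch follows; the metric pair, the square-root transform and all numbers are illustrative assumptions, not the L4A model forms fitted to the 61-site dataset.

    import numpy as np

    rh = np.array([[12.0, 25.0],          # columns: RH50, RH98 (m)
                   [ 4.0,  9.0],
                   [20.0, 38.0],
                   [ 8.0, 15.0]])
    agbd = np.array([210.0, 45.0, 380.0, 110.0])   # field AGBD (Mg/ha)

    X = np.column_stack([np.ones(len(rh)), rh])
    beta, *_ = np.linalg.lstsq(X, np.sqrt(agbd), rcond=None)  # fit sqrt(AGBD)
    pred = (X @ beta) ** 2                                    # back-transform
    print(beta, pred)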
Weighted bi-prediction for light field image coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2017-09-01
Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently emerged as a practical and promising approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require developing adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion compensated bi-prediction have suggested that it is still possible to achieve further rate-distortion performance improvements by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance for HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that it is possible to extend the previous theoretical conclusions to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared to the previous self-similarity bi-prediction scheme.
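The adaptive weighting idea can be sketched as a small search over candidate weight pairs, keeping the pair that minimizes the prediction error for the current block; the weight set and blocks below are illustrative, not the coefficient sets studied in the paper.

    import numpy as np

    def best_weights(block, p0, p1,
                     weights=((0.5, 0.5), (0.75, 0.25), (0.25, 0.75),
                              (1.0, 0.0), (0.0, 1.0))):
        """Pick the bi-prediction weights minimizing SSE for this block."""
        best, best_sse = None, np.inf
        for w0, w1 in weights:
            sse = np.sum((block - (w0 * p0 + w1 * p1)) ** 2)
            if sse < best_sse:
                best, best_sse = (w0, w1), sse
        return best, best_sse

    rng = np.random.default_rng(0)
    blk = rng.random((8, 8))
    p0 = blk + 0.1 * rng.random((8, 8))    # good predictor
    p1 = rng.random((8, 8))                # poor predictor
    print(best_weights(blk, p0, p1))       # favours p0-heavy weights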
Lift estimation of Half-Rotating Wing in hovering flight
NASA Astrophysics Data System (ADS)
Wang, X. Y.; Dong, Y. P.; Qiu, Z. Z.; Zhang, Y. Q.; Shan, J. H.
2016-11-01
Half-Rotating Wing (HRW) is a new kind of flapping wing system with rotating flapping instead of oscillating flapping. An approach for estimating the lift generated in hovering flight is an important theoretical foundation for designing aircraft that use HRW. The working principle of HRW, based on the Half-Rotating Mechanism (HRM), is first introduced in this paper, and the process by which HRW generates lift is described. The calculating models of two lift mechanisms for HRW, namely the Lift of Flow Around Wing (LFAW) and the Lift of Flow Dragging Wing (LFDW), are respectively established. The lift estimating model of HRW is further deduced, by which the hovering lift of HRW at different angular velocities can be calculated. A case study using XFLOW software simulation indicates that the above estimating method is effective and feasible for roughly predicting the hovering lift of a new HRW system.
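For a rough feel of the LFAW term, a blade-element integration along a rotating wing gives lift growing with the cube of the span and the square of the angular velocity. This is a minimal sketch under simple assumptions (constant chord and lift coefficient), not the paper's calculating model.

    RHO = 1.225   # air density, kg/m^3

    def lfaw_per_blade(omega, span, chord, cl):
        """Integrate 0.5*rho*(omega*r)^2*chord*cl over r in [0, span]."""
        return 0.5 * RHO * omega**2 * chord * cl * span**3 / 3.0

    # e.g. omega = 20 rad/s, 1 m span, 0.12 m chord, CL = 1.0:
    print(lfaw_per_blade(20.0, 1.0, 0.12, 1.0))   # ~9.8 N per blade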
Optimal experiment design for magnetic resonance fingerprinting.
Bo Zhao; Haldar, Justin P; Setsompop, Kawin; Wald, Lawrence L
2016-08-01
Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance.
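The CRB machinery at the heart of this design problem is compact: for a signal model s(theta) in white Gaussian noise of variance sigma^2, the Fisher information matrix is J^H J / sigma^2 with J the Jacobian of s, and the CRB is its inverse. The toy relaxation "fingerprint" below is a stand-in for the Bloch-simulated signal model.

    import numpy as np

    def crb(J, sigma):
        """Cramer-Rao bound from the signal Jacobian J (white noise)."""
        fim = (J.conj().T @ J).real / sigma**2
        return np.linalg.inv(fim)

    t = np.linspace(0.01, 3.0, 200)    # acquisition time points (s)
    T1, T2 = 1.0, 0.1                  # tissue parameters (s)
    # toy signal s = exp(-t/T1) - exp(-t/T2); columns of J are ds/dT1, ds/dT2
    J = np.column_stack([(t / T1**2) * np.exp(-t / T1),
                         -(t / T2**2) * np.exp(-t / T2)])
    print(np.sqrt(np.diag(crb(J, sigma=0.01))))   # std lower bounds for T1, T2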
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKone, Thomas E.; Maddalena, Randy L.
2007-01-01
The role of terrestrial vegetation in transferring chemicals from soil and air into specific plant tissues (stems, leaves, roots, etc.) is still not well characterized. We provide here a critical review of plant-to-soil bioconcentration ratio (BCR) estimates based on models and experimental data. This review includes the conceptual and theoretical formulations of the bioconcentration ratio, constructing and calibrating empirical and mathematical algorithms to describe this ratio and the experimental data used to quantify BCRs and calibrate the model performance. We first evaluate the theoretical basis for the BCR concept and BCR models and consider how lack of knowledge and data limits reliability and consistency of BCR estimates. We next consider alternate modeling strategies for BCR. A key focus of this evaluation is the relative contributions to overall uncertainty from model uncertainty versus variability in the experimental data used to develop and test the models. As a case study, we consider a single chemical, hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX), and focus on variability of bioconcentration measurements obtained from 81 experiments with different plant species, different plant tissues, different experimental conditions, and different methods for reporting concentrations in the soil and plant tissues. We use these observations to evaluate both the magnitude of experimental variability in plant bioconcentration and compare this to model uncertainty. Among these 81 measurements, the variation of the plant/soil BCR has a geometric standard deviation (GSD) of 3.5 and a coefficient of variability (CV, the ratio of arithmetic standard deviation to mean) of 1.7. These variations are significant but low relative to model uncertainties, which have an estimated GSD of 10 with a corresponding CV of 14.
The role of global cloud climatologies in validating numerical models
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1991-01-01
Reliable estimates of the components of the surface radiation budget are important in studies of ocean-atmosphere interaction, land-atmosphere interaction, ocean circulation and in the validation of radiation schemes used in climate models. The methods currently under consideration must necessarily make certain assumptions regarding both the presence of clouds and their vertical extent. Because of the uncertainties in assumed cloudiness, all these methods involve perhaps unacceptable uncertainties. Here, a theoretical framework that avoids the explicit computation of cloud fraction and the location of cloud base in estimating the surface longwave radiation is presented. Estimates of the global surface downward fluxes and the oceanic surface net upward fluxes were made for four months (April, July, October and January) in 1985 to 1986. These estimates are based on a relationship between cloud radiative forcing at the top of the atmosphere and at the surface obtained from a general circulation model. The radiation code is the version used in the UCLA/GLA general circulation model (GCM). The longwave cloud radiative forcing at the top of the atmosphere as obtained from Earth Radiation Budget Experiment (ERBE) measurements is used to compute the forcing at the surface by means of the GCM-derived relationship. This, along with clear-sky fluxes from the computations, yields maps of the downward longwave fluxes and net upward longwave fluxes at the surface. The calculated results are discussed and analyzed. The results are consistent with current meteorological knowledge and explainable on the basis of previous theoretical and observational works; therefore, it can be concluded that this method is applicable as one of the ways to obtain the surface longwave radiation fields from currently available satellite data.
Discriminative Learning of Receptive Fields from Responses to Non-Gaussian Stimulus Ensembles
Meyer, Arne F.; Diepenbrock, Jan-Philipp; Happel, Max F. K.; Ohl, Frank W.; Anemüller, Jörn
2014-01-01
Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design. PMID:24699631
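The CbRF recipe can be sketched in a few lines: train a linear large-margin classifier to separate spike-eliciting from non-spike-eliciting stimuli and read the learned weight vector as the receptive field estimate. The Gaussian stimuli and planted filter below are illustrative (the paper's emphasis is that the approach remains consistent for non-Gaussian ensembles), and the regularization settings are assumptions.

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    true_rf = np.sin(np.linspace(0, np.pi, 40))        # planted RF filter
    stim = rng.normal(size=(5000, 40))                 # stimulus examples
    spikes = (stim @ true_rf + rng.normal(size=5000) > 1.0).astype(int)

    clf = LinearSVC(C=0.1, max_iter=5000).fit(stim, spikes)
    rf_est = clf.coef_.ravel()
    rf_est /= np.linalg.norm(rf_est)                   # only direction matters
    print(rf_est @ (true_rf / np.linalg.norm(true_rf)))  # ~1 if recovered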
The yield and decay coefficients of exoelectrogenic bacteria in bioelectrochemical systems.
Wilson, Erica L; Kim, Younggy
2016-05-01
In conventional wastewater treatment, waste sludge management and disposal contribute the major cost of wastewater treatment. Bioelectrochemical systems, a potential alternative for future wastewater treatment and resource recovery, are expected to produce small amounts of waste sludge because exoelectrogenic bacteria grow by anaerobic respiration and form highly populated biofilms on bioanode surfaces. While waste sludge production is governed by the yield and decay coefficients, no previous study has quantified these kinetic constants for exoelectrogenic bacteria. For yield coefficient estimation, we modified McCarty's free energy-based model by using the bioanode potential for the free energy of the electron acceptor reaction. The estimated true yield coefficient ranged from 0.1 to 0.3 g-VSS (volatile suspended solids) g-COD(-1) (chemical oxygen demand), which is similar to that of most anaerobic microorganisms. The yield coefficient was sensitively affected by the bioanode potential and pH, while the substrate and bicarbonate concentrations had relatively minor effects. In lab-scale experiments using microbial electrolysis cells, the observed yield coefficient (including the effect of cell decay) was found to be 0.020 ± 0.008 g-VSS g-COD(-1), which is an order of magnitude smaller than the theoretical estimation. Based on the difference between the theoretical and experimental results, the decay coefficient was approximated to be 0.013 ± 0.002 d(-1). These findings indicate that bioelectrochemical systems have potential for future wastewater treatment with reduced waste sludge as well as for resource recovery. The kinetic constants found here will also allow accurate estimation of wastewater treatment performance in bioelectrochemical systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
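One conventional way to back a decay coefficient out of the gap between true and observed yields assumes the steady-state relation Y_obs = Y / (1 + b * SRT); the solids retention time below is an assumed value, and the paper's estimation procedure may differ in detail.

    def decay_coefficient(y_true, y_obs, srt_days):
        """Solve Y_obs = Y / (1 + b*SRT) for the decay coefficient b."""
        return (y_true / y_obs - 1.0) / srt_days

    # With the order-of-magnitude yields reported above and SRT ~ 700 d:
    print(decay_coefficient(y_true=0.2, y_obs=0.020, srt_days=700.0))  # ~0.013 1/d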
NASA Astrophysics Data System (ADS)
Arnaud, Patrick; Cantet, Philippe; Odry, Jean
2017-11-01
Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with the use of a statistical law with two parameters (here generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
Theoretical and simulated performance for a novel frequency estimation technique
NASA Technical Reports Server (NTRS)
Crozier, Stewart N.
1993-01-01
A low complexity, open-loop, discrete-time, delay-multiply-average (DMA) technique for estimating the frequency offset for digitally modulated MPSK signals is investigated. A nonlinearity is used to remove the MPSK modulation and generate the carrier component to be extracted. Theoretical and simulated performance results are presented and compared to the Cramer-Rao lower bound (CRLB) for the variance of the frequency estimation error. For all signal-to-noise ratios (SNR's) above threshold, it is shown that the CRLB can essentially be achieved with linear complexity.
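A minimal sketch of a DMA-style estimator: an Mth-power nonlinearity strips the MPSK modulation, and the averaged delayed-conjugate product yields the phase advance per delay, hence the frequency offset. Parameters are illustrative, and details of the paper's estimator (e.g., averaging over several delays) are omitted.

    import numpy as np

    def dma_freq(x, m, delay, fs):
        """Delay-multiply-average frequency estimate for MPSK samples x."""
        z = x ** m                                   # remove MPSK modulation
        acc = np.sum(z[delay:] * np.conj(z[:-delay]))
        return np.angle(acc) * fs / (2.0 * np.pi * m * delay)

    fs, f_off = 1e4, 123.0                           # Hz
    n = np.arange(4096)
    rng = np.random.default_rng(2)
    sym = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, n.size))  # QPSK, 1 sps
    x = sym * np.exp(2j * np.pi * f_off * n / fs)
    x += 0.05 * (rng.normal(size=n.size) + 1j * rng.normal(size=n.size))
    print(dma_freq(x, m=4, delay=8, fs=fs))          # ~123 Hz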
Lim, Ji Young; Kim, Mi Ja; Park, Chang Gi
2011-08-01
Time-driven activity-based costing was applied to analyze the nursing activity cost and efficiency of a medical unit. Data were collected at a medical unit of a general hospital. Nursing activities were measured using a nursing activities inventory and classified into 6 domains using the Easley-Storfjell Instrument. Descriptive statistics were used to identify the general characteristics of the unit, nursing activities and activity times, and a stochastic frontier model was adopted to estimate true activity time. The average efficiency of the medical unit using theoretical resource capacity was 77%, whereas the efficiency using practical resource capacity was 96%. Accordingly, the proportion of non-value-added time was estimated at 23% and 4%, respectively. Total nursing activity costs were estimated at 109,860,977 won under traditional activity-based costing and 84,427,126 won under time-driven activity-based costing, a difference of 25,433,851 won. These results indicate that time-driven activity-based costing provides more useful and more realistic information about the efficiency of unit operation than traditional activity-based costing, so it is recommended as a performance evaluation framework for nursing departments based on cost management.
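The arithmetic behind time-driven ABC is a single capacity cost rate multiplied by the time each activity consumes; unused capacity is what separates the practical-capacity figure from the theoretical one. All numbers below are illustrative, not the unit's actual data.

    # One cost rate per unit: total resource cost / practical capacity.
    total_resource_cost = 84_000_000        # won per period
    practical_capacity_min = 120_000        # staffed minutes available

    rate = total_resource_cost / practical_capacity_min   # won per minute

    activity_minutes = {"direct care": 70_000,
                        "documentation": 25_000,
                        "transport": 8_000}
    activity_cost = {k: v * rate for k, v in activity_minutes.items()}
    unused_capacity_cost = (practical_capacity_min
                            - sum(activity_minutes.values())) * rate
    print(activity_cost, unused_capacity_cost)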
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers, and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure, and its estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied, and an evaluation on real data recorded by an acoustic vector sensor array is presented. The performance of the MICCG and SICCG algorithms is compared with state-of-the-art approaches.
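The inversion-free core of such beamformers can be sketched directly: the MVDR weights are w = R^{-1} d / (d^H R^{-1} d), and R^{-1} d can be obtained by conjugate gradient iterations on R z = d instead of an explicit inverse. This plain CG loop illustrates the principle only; the MICCG/SICCG algorithms add the constrained block and sample-by-sample adaptation described above.

    import numpy as np

    def cg_solve(R, d, n_iter=50, tol=1e-12):
        """Conjugate gradients for Hermitian positive definite R z = d."""
        z = np.zeros_like(d)
        r = d.copy(); p = r.copy()
        rs = np.vdot(r, r).real
        for _ in range(n_iter):
            Rp = R @ p
            alpha = rs / np.vdot(p, Rp).real
            z += alpha * p
            r -= alpha * Rp
            rs_new = np.vdot(r, r).real
            if rs_new < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return z

    rng = np.random.default_rng(3)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    R = A @ A.conj().T + 0.1 * np.eye(4)               # toy covariance
    d = np.exp(-1j * np.pi * 0.3 * np.arange(4))       # steering vector
    z = cg_solve(R, d)
    w = z / np.vdot(d, z).real                         # MVDR weights
    print(np.vdot(d, w))                               # ~1: distortionless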
NASA Astrophysics Data System (ADS)
Bieniek, Andrzej
2017-10-01
The paper describes the possibilities of energy generation using various rotor types, especially a multi-blade wind engine operating in areas with unfavourable wind conditions. It also presents wind energy conversion estimates based on the proposed multi-blade wind turbine with an outer diameter of 4 m. Based on the wind distribution histogram from a zone with unfavourable wind conditions (the city of Basel), and taking into account the design and estimated operating indexes of the considered wind engine rotor, the annual energy generation was estimated. The theoretical energy generation of various types of wind turbines operating in unfavourable wind condition zones was also estimated and compared. The analysis shows that introducing the multi-blade rotor instead of the most popular 3-blade or vertical-axis rotors yields about 5% more energy. At the same time, energy is also produced under very unfavourable wind conditions, at wind speeds below 4 m s-1. For the considered multi-blade wind engine design, raising the rotor mounting height from 10 to 30 m yields more than 300% greater electric energy generation.
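The annual-energy estimate described above reduces to weighting a power curve by the wind-speed histogram (hours per year in each speed bin). The bins and power-curve values below are illustrative placeholders, not the Basel histogram or the 4 m rotor's measured curve.

    # hours per year spent in each wind-speed bin (m/s), and turbine
    # power output (W) at that speed; both are made-up illustrations.
    hours = {2: 2200, 3: 1900, 4: 1500, 5: 1100, 6: 700, 7: 360}
    power = {2: 0,    3: 40,   4: 110,  5: 230,  6: 410, 7: 660}

    aep_kwh = sum(hours[v] * power[v] for v in hours) / 1000.0
    print(aep_kwh)   # annual energy production, kWh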
Dynamics of contact line depinning during droplet evaporation based on thermodynamics.
Yu, Dong In; Kwak, Ho Jae; Doh, Seung Woo; Ahn, Ho Seon; Park, Hyun Sun; Kiyofumi, Moriyama; Kim, Moo Hwan
2015-02-17
For several decades, evaporation phenomena have been intensively investigated for a broad range of applications. However, the dynamics of contact line depinning during droplet evaporation has only been inductively inferred on the basis of experimental data and remains unclear. This study focuses on the dynamics of contact line depinning during droplet evaporation based on thermodynamics. Considering the decrease in the Gibbs free energy of a system with different evaporation modes, a theoretical model was developed to estimate the receding contact angle during contact line depinning as a function of surface conditions. Comparison of experimentally measured and theoretically modeled receding contact angles indicated that the dynamics of contact line depinning during droplet evaporation was caused by the most favorable thermodynamic process encountered during constant contact radius (CCR mode) and constant contact angle (CCA mode) evaporation to rapidly reach an equilibrium state during droplet evaporation.
Stereotype Threat and College Academic Performance: A Latent Variables Approach
Owens, Jayanti; Massey, Douglas S.
2013-01-01
Stereotype threat theory has gained experimental and survey-based support in helping explain the academic underperformance of minority students at selective colleges and universities. Stereotype threat theory states that minority students underperform because of pressures created by negative stereotypes about their racial group. Past survey-based studies, however, are characterized by methodological inefficiencies and potential biases: key theoretical constructs have only been measured using summed indicators and predicted relationships modeled using ordinary least squares. Using the National Longitudinal Survey of Freshman, this study overcomes previous methodological shortcomings by developing a latent construct model of stereotype threat. Theoretical constructs and equations are estimated simultaneously from multiple indicators, yielding a more reliable, valid, and parsimonious test of key propositions. Findings additionally support the view that social stigma can indeed have strong negative effects on the academic performance of pejoratively stereotyped racial-minority group members, not only in laboratory settings, but also in the real world. PMID:23950616
NASA Astrophysics Data System (ADS)
Li, M.; Jiang, Y. S.
2014-11-01
The micro-Doppler effect is induced by the micro-motion dynamics of a radar target itself or of any structure on the target. In this paper, a simplified cone-shaped model of a ballistic missile warhead with micro-nutation is established, and the theoretical formula of the micro-nutation is derived. The theoretical results are confirmed to be identical to simulation results obtained using the short-time Fourier transform. We then propose a new method for nutation period extraction via signature maximum-energy fitting, based on empirical mode decomposition and the short-time Fourier transform. The maximum wobble angle is also extracted by a distance-approximation approach valid for a small range of wobble angles, combined with maximum likelihood estimation. Simulation studies show that both feature extraction methods remain valid even at low signal-to-noise ratio.
NASA Astrophysics Data System (ADS)
Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.
2009-05-01
A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range-estimating sensors used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared against the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit theoretically suboptimal, largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
Ramirez-Sandoval, Juan C; Castilla-Peón, Maria F; Gotés-Palazuelos, José; Vázquez-García, Juan C; Wagner, Michael P; Merelo-Arias, Carlos A; Vega-Vega, Olynka; Rincón-Pedrero, Rodolfo; Correa-Rotter, Ricardo
2016-06-01
Ramirez-Sandoval, Juan C., Maria F. Castilla-Peón, José Gotés-Palazuelos, Juan C. Vázquez-García, Michael P. Wagner, Carlos A. Merelo-Arias, Olynka Vega-Vega, Rodolfo Rincón-Pedrero, and Ricardo Correa-Rotter. Bicarbonate values for healthy residents living in cities above 1500 m of altitude: a theoretical model and systematic review. High Alt Med Biol. 17:85-92, 2016.-Plasma bicarbonate (HCO3(-)) concentration is the main value used to assess the metabolic component of the acid-base status. There is limited information regarding plasma HCO3(-) values adjusted for altitude for people living in cities at high altitude, defined as 1500 m (4921 ft) or more above sea level. Our aim was to estimate the plasma HCO3(-) concentration in residents of cities at these altitudes using a theoretical model, and to compare these values with the HCO3(-) values found in a systematic review and with the venous CO2 values obtained in a sample of 633 healthy individuals living at an altitude of 2240 m (7350 ft). We calculated the PCO2 using linear regression models and calculated plasma HCO3(-) according to the Henderson-Hasselbalch equation. Results show that HCO3(-) concentration falls as the altitude of a city increases. For each 1000 m of altitude above sea level, HCO3(-) decreases by 0.55 and 1.5 mEq/L in subjects acutely exposed to altitude and in subjects acclimatized to altitude, respectively. Estimated HCO3(-) values from the theoretical model were not different from the HCO3(-) values found in the publications of the systematic review or from the venous total CO2 measurements in our sample. Altitude has to be taken into consideration in the calculation of HCO3(-) concentrations in cities above 1500 m to avoid overdiagnosis of acid-base disorders in a given individual.
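The model's central step is the Henderson-Hasselbalch relation [HCO3-] = 0.03 * PCO2 * 10^(pH - 6.1) (HCO3- in mEq/L, PCO2 in mmHg). A minimal sketch follows; the linear PCO2-versus-altitude coefficients are assumed placeholders, not the regression fitted in the paper.

    def hco3(ph, pco2_mmhg):
        """Plasma bicarbonate via Henderson-Hasselbalch (mEq/L)."""
        return 0.03 * pco2_mmhg * 10 ** (ph - 6.1)

    def pco2_at_altitude(alt_m, pco2_sea=40.0, slope_per_km=-3.0):
        """Assumed linear fall of PCO2 with altitude (acclimatized)."""
        return pco2_sea + slope_per_km * alt_m / 1000.0

    # e.g. a resident acclimatized at 2240 m with arterial pH 7.40:
    print(hco3(7.40, pco2_at_altitude(2240.0)))   # ~20 mEq/L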
Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR
Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington
2014-01-01
This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each small element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents a complete 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
NASA Technical Reports Server (NTRS)
Steffen, K.; Schweiger, A.; Maslanik, J.; Key, J.; Weaver, R.; Barry, R.
1990-01-01
The application of multi-spectral satellite data to estimate polar surface energy fluxes is addressed. The accuracy with which, and the geographic areas over which, large-scale energy budgets can be estimated are investigated based upon a combination of available remote sensing and climatological data sets. The general approach was to: (1) formulate parameterization schemes for the appropriate sea ice energy budget terms based upon the remotely sensed and/or in-situ data sets; (2) conduct sensitivity analyses using as input both natural variability (observed data in regional case studies) and theoretical variability based upon energy flux model concepts; (3) assess the applicability of these parameterization schemes to both regional and basin-wide energy balance estimates using remote sensing data sets; and (4) assemble multi-spectral, multi-sensor data sets for at least two regions of the Arctic Basin and possibly one region of the Antarctic. The type of data needed for a basin-wide assessment is described, and the temporal coverage of these data sets is determined by data availability and need as defined by the parameterization schemes. The titles of the subjects are as follows: (1) Heat flux calculations from SSM/I and LANDSAT data in the Bering Sea; (2) Energy flux estimation using passive microwave data; (3) Fetch and stability sensitivity estimates of turbulent heat flux; and (4) Surface temperature algorithm.
Wang, Zhiqiang; Ji, Mingfei; Deng, Jianming; Milne, Richard I; Ran, Jinzhi; Zhang, Qiang; Fan, Zhexuan; Zhang, Xiaowei; Li, Jiangtao; Huang, Heng; Cheng, Dongliang; Niklas, Karl J
2015-06-01
Simultaneous and accurate measurements of whole-plant instantaneous carbon-use efficiency (ICUE) and annual total carbon-use efficiency (TCUE) are difficult to make, especially for trees. One usually estimates ICUE based on the net photosynthetic rate or the assumed proportional relationship between growth efficiency and ICUE. However, thus far, protocols for easily estimating annual TCUE remain problematic. Here, we present a theoretical framework (based on the metabolic scaling theory) to predict whole-plant annual TCUE by directly measuring instantaneous net photosynthetic and respiratory rates. This framework makes four predictions, which were evaluated empirically using seedlings of nine Picea taxa: (i) the flux rates of CO(2) and energy will scale isometrically as a function of plant size, (ii) whole-plant net and gross photosynthetic rates and the net primary productivity will scale isometrically with respect to total leaf mass, (iii) these scaling relationships will be independent of ambient temperature and humidity fluctuations (as measured within an experimental chamber) regardless of the instantaneous net photosynthetic rate or dark respiratory rate, or overall growth rate and (iv) TCUE will scale isometrically with respect to instantaneous efficiency of carbon use (i.e., the latter can be used to predict the former) across diverse species. These predictions were experimentally verified. We also found that the ranking of the nine taxa based on net photosynthetic rates differed from ranking based on either ICUE or TCUE. In addition, the absolute values of ICUE and TCUE significantly differed among the nine taxa, with both ICUE and temperature-corrected ICUE being highest for Picea abies and lowest for Picea schrenkiana. Nevertheless, the data are consistent with the predictions of our general theoretical framework, which can be used to assess annual carbon-use efficiency of different species at the level of an individual plant based on simple, direct measurements. Moreover, we believe that our approach provides a way to cope with the complexities of different ecosystems, provided that sufficient measurements are taken to calibrate our approach to that of the system being studied. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
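Prediction (ii), isometric scaling against total leaf mass, is in essence a log-log slope test. The sketch below illustrates it on synthetic data standing in for the Picea seedling measurements; a rigorous analysis would typically use reduced major axis regression rather than ordinary least squares.

```python
# Minimal sketch: test whether net photosynthesis scales isometrically
# (slope ~ 1) with leaf mass on log-log axes. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
leaf_mass = 10 ** rng.uniform(-1, 1, 50)                      # g, synthetic
net_photo = 2.0 * leaf_mass * 10 ** rng.normal(0, 0.05, 50)   # built isometric

slope, _ = np.polyfit(np.log10(leaf_mass), np.log10(net_photo), 1)
print(f"fitted scaling exponent: {slope:.2f} (isometry predicts 1.0)")
```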
NASA Astrophysics Data System (ADS)
Copur, Hanifi; Bilgin, Nuh; Balci, Cemal; Tumac, Deniz; Avunduk, Emre
2017-06-01
This study aims to determine the effects of single-, double-, and triple-spiral cutting patterns; the effects of tool cutting speeds on the experimental scale; and the effects of the method of yield estimation on cutting performance, by performing a set of full-scale linear cutting tests with a conical cutting tool. The average and maximum normal, cutting and side forces; specific energy; yield; and coarseness index are measured and compared for each cutting pattern at a 25-mm line spacing, at varying depths of cut per revolution, and at two cutting speeds on five different rock samples. The results indicate that the optimum specific energy decreases by approximately 25% when moving from the single- to the double-spiral cutting pattern for the hard rocks, whereas generally little effect was observed for the soft- and medium-strength rocks. The double-spiral cutting pattern appeared to be more effective than the single- or triple-spiral cutting pattern and had the advantage of lower side forces. The tool cutting speed had no apparent effect on cutting performance. The specific energy estimated from the yield based on the theoretical swept area was not significantly different from that estimated from the yield based on muck weighing, especially for the double- and triple-spiral cutting patterns and at the optimum ratio of line spacing to depth of cut per revolution. This study also demonstrated that cutterhead and mechanical miner designs, semi-theoretical deterministic computer simulations, and empirical performance prediction and optimization models should be based on realistic experimental simulations. Studies should be continued to obtain more reliable results by creating a larger database of laboratory tests and field performance records for mechanical miners using drag tools.
Sun-to-Wheels Exergy Efficiencies for Bio-Ethanol and Photovoltaics.
Williams, Eric; Sekar, Ashok; Matteson, Schuyler; Rittmann, Bruce E
2015-06-02
The two main paths to powering vehicles with sunlight are to use photosynthesis to grow biomass that is converted to a liquid fuel for an internal combustion engine, or to generate photovoltaic electricity that charges the battery of an electric vehicle. While the environmental attributes of these two paths have been much analyzed, prior studies consider only the current state of technology. Technologies for the biofuel and photovoltaic paths are evolving; it is critical to consider how progress might improve environmental performance. We address this challenge by assessing the current and maximum theoretical exergy efficiencies of bioethanol and photovoltaic sun-to-wheels process chains. The maximum theoretical efficiency is an upper bound stipulated by physical laws. The current net efficiency to produce motive power from silicon photovoltaic modules is estimated at 5.4%, much higher than the 0.03% efficiency for corn-based ethanol. Flat-plate photovoltaic panels also have a much higher theoretical maximum efficiency than a C4 crop plant, 48% versus 0.19%. Photovoltaic-based power will always be vastly more efficient than a terrestrial crop biofuel. Providing all mobility in the U.S. via crop biofuels would require 130% of arable land with current technology and 20% in the thermodynamic limit. Comparable values for photovoltaic-based power are 0.7% and 0.081%, respectively.
Does RAIM with Correct Exclusion Produce Unbiased Positions?
Teunissen, Peter J. G.; Imparato, Davide; Tiberius, Christian C. J. M.
2017-01-01
As the navigation solution of exclusion-based RAIM follows from a combination of least-squares estimation and a statistically based exclusion process, the computation of the integrity of the navigation solution has to take the propagated uncertainty of the combined estimation-testing procedure into account. In this contribution, we analyse, theoretically as well as empirically, the effect that this combination has on the first statistical moment, i.e., the mean, of the computed navigation solution. It will be shown that, although statistical testing is intended to remove biases from the data, biases will always remain under the alternative hypothesis, even when the correct alternative hypothesis is properly identified. The a posteriori exclusion of a biased satellite range from the position solution will therefore never remove the bias in the position solution completely. PMID:28672862
Acoustic classification of zooplankton
NASA Astrophysics Data System (ADS)
Martin Traykovski, Linda V.
1998-11-01
Work on the forward problem in zooplankton bioacoustics has resulted in the identification of three categories of acoustic scatterers: elastic-shelled (e.g. pteropods), fluid-like (e.g. euphausiids), and gas-bearing (e.g. siphonophores). The relationship between backscattered energy and animal biomass has been shown to vary by a factor of ~19,000 across these categories, so that to make accurate estimates of zooplankton biomass from acoustic backscatter measurements of the ocean, the acoustic characteristics of the species of interest must be well-understood. This thesis describes the development of both feature-based and model-based classification techniques to invert broadband acoustic echoes from individual zooplankton for scatterer type, as well as for particular parameters such as animal orientation. The feature-based Empirical Orthogonal Function Classifier (EOFC) discriminates scatterer types by identifying characteristic modes of variability in the echo spectra, exploiting only the inherent characteristic structure of the acoustic signatures. The model-based Model Parameterisation Classifier (MPC) classifies based on correlation of observed echo spectra with simplified parameterisations of theoretical scattering models for the three classes. The Covariance Mean Variance Classifiers (CMVC) are a set of advanced model-based techniques which exploit the full complexity of the theoretical models by searching the entire physical model parameter space without employing simplifying parameterisations. Three different CMVC algorithms were developed: the Integrated Score Classifier (ISC), the Pairwise Score Classifier (PSC) and the Bayesian Probability Classifier (BPC); these classifiers assign observations to a class based on similarities in covariance, mean, and variance, while accounting for model ambiguity and validity. These feature-based and model-based inversion techniques were successfully applied to several thousand echoes acquired from broadband (~350 kHz-750 kHz) insonifications of live zooplankton collected on Georges Bank and the Gulf of Maine to determine scatterer class. CMVC techniques were also applied to echoes from fluid-like zooplankton (Antarctic krill) to invert for angle of orientation using generic and animal-specific theoretical and empirical models. Application of these inversion techniques in situ will allow correct apportionment of backscattered energy to animal biomass, significantly improving estimates of zooplankton biomass based on acoustic surveys. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Grubb, Anders; Horio, Masaru; Hansson, Lars-Olof; Björk, Jonas; Nyman, Ulf; Flodin, Mats; Larsson, Anders; Bökenkamp, Arend; Yasuda, Yoshinari; Blufpand, Hester; Lindström, Veronica; Zegers, Ingrid; Althaus, Harald; Blirup-Jensen, Søren; Itoh, Yoshi; Sjöström, Per; Nordin, Gunnar; Christensson, Anders; Klima, Horst; Sunde, Kathrin; Hjort-Christensen, Per; Armbruster, David; Ferrero, Carlo
2014-07-01
Many different cystatin C-based equations exist for estimating glomerular filtration rate. Major reasons for this are the previous lack of an international cystatin C calibrator and the nonequivalence of results from different cystatin C assays. Use of the recently introduced certified reference material, ERM-DA471/IFCC, and further work to achieve high agreement and equivalence of 7 commercially available cystatin C assays allowed a substantial decrease of the CV of the assays, as defined by their performance in an external quality assessment for clinical laboratory investigations. By use of 2 of these assays and a population of 4690 subjects, with large subpopulations of children and Asian and Caucasian adults, with their GFR determined by either renal or plasma inulin clearance or plasma iohexol clearance, we attempted to produce a virtually assay-independent simple cystatin C-based equation for estimation of GFR. We developed a simple cystatin C-based equation for estimation of GFR comprising only 2 variables, cystatin C concentration and age. No terms for race and sex are required for optimal diagnostic performance. The equation, [Formula: see text] is also biologically oriented, with 1 term for the theoretical renal clearance of small molecules and 1 constant for extrarenal clearance of cystatin C. A virtually assay-independent simple cystatin C-based and biologically oriented equation for estimation of GFR, without terms for sex and race, was produced. © 2014 The American Association for Clinical Chemistry.
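The abstract's equation is elided in this record ("[Formula: see text]"), but its stated structure, one power-law term in cystatin C and age for renal clearance plus one constant for extrarenal clearance, can be sketched as below. The coefficients are placeholders for illustration; consult the published paper for the actual values before any real calculation.

```python
# Minimal sketch of the described equation structure:
#   eGFR = a * cysC^b * age^c + extrarenal_constant
# Coefficient values below are placeholders, not asserted published values.
def egfr_cystatin(cys_c_mg_L, age_years,
                  a=130.0, b=-1.069, c=-0.117, extrarenal=-7.0):
    """Estimated GFR (mL/min/1.73 m^2) from cystatin C (mg/L) and age (years)."""
    return a * cys_c_mg_L ** b * age_years ** c + extrarenal

print(f"eGFR: {egfr_cystatin(1.0, 50):.0f} mL/min/1.73 m^2")
```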
Inference for High-dimensional Differential Correlation Matrices.
Cai, T Tony; Zhang, Anru
2016-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
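A minimal sketch of direct differential-correlation estimation with entrywise thresholding follows. The universal threshold used here is a simplification; the paper's procedure is adaptive, with entry-dependent thresholds and supporting theory.

```python
# Minimal sketch: threshold the difference of two sample correlation
# matrices to estimate a sparse differential correlation matrix.
import numpy as np

def differential_correlation(X1, X2, c=2.0):
    n1, n2 = len(X1), len(X2)
    D = np.corrcoef(X1, rowvar=False) - np.corrcoef(X2, rowvar=False)
    lam = c * np.sqrt(np.log(D.shape[0]) * (1.0 / n1 + 1.0 / n2))
    return np.where(np.abs(D) > lam, D, 0.0)     # keep only strong differences

rng = np.random.default_rng(1)
X1 = rng.normal(size=(200, 30))                  # group 1: 200 samples, 30 genes
X2 = rng.normal(size=(200, 30))                  # group 2, same correlation here
print("nonzero differential entries:",
      np.count_nonzero(differential_correlation(X1, X2)))
```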
Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M
2017-06-01
Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for the estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates from this model, such as those presented here, are very scarce in both livestock and wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number weaned at the Polytechnic University of Valencia. Pedigrees and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 in the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large, increases with inbreeding, and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and with theoretical considerations. © 2017 Blackwell Verlag GmbH.
Optimally weighted least-squares steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.
2007-02-01
Quantitative steganalysis aims to estimate the amount of payload in a stego object, and such estimators seem to arise naturally in steganalysis of Least Significant Bit (LSB) replacement in digital images. However, as with all steganalysis, the estimators are subject to errors, and their magnitude seems heavily dependent on properties of the cover. In very recent work we have given the first derivation of estimation error, for a certain method of steganalysis (the Least-Squares variant of Sample Pairs Analysis) of LSB replacement steganography in digital images. In this paper we make use of our theoretical results to find an improved estimator and detector. We also extend the theoretical analysis to another (more accurate) steganalysis estimator (Triples Analysis) and hence derive an improved version of that estimator too. Experimental results show that the new steganalyzers have improved accuracy, particularly in the difficult case of never-compressed covers.
The Economic Impact of Domestic Military Installations on Regional Economies.
1979-12-01
The research examined the theoretical basis for impact determination, especially economic base ... for estimating the impacts of a military installation on a regional economy. Such impacts are required to be estimated to implement the National Environmental Protection Act. ... Published in the Second Preliminary Draft Environmental Impact Statement, Part I, Fort Ord [REF 21].
Devolatilization Analysis in a Twin Screw Extruder by using the Flow Analysis Network (FAN) Method
NASA Astrophysics Data System (ADS)
Tomiyama, Hideki; Takamoto, Seiji; Shintani, Hiroaki; Inoue, Shigeki
We derived theoretical formulas for three mechanisms of devolatilization in a twin-screw extruder: flash, surface refreshment and forced expansion. The method for flash devolatilization is based on the equation of equilibrium concentration, which shows that volatiles separate from the polymer when it is relieved from a high-pressure condition. For surface-refreshment devolatilization, we applied Latinen's model to allow estimation of polymer behavior in the unfilled screw-conveying condition. Forced-expansion devolatilization is based on the expansion theory, in which foams are generated under reduced pressure and volatiles diffuse through the exposed surface layer after mixing with the injected devolatilization agent. Based on these models, we developed twin-screw extrusion simulation software using the FAN method; it allows us to quantitatively estimate volatile concentration and polymer temperature with high accuracy in an actual multi-vent extrusion process for LDPE + n-hexane.
Exact comprehensive equations for the photon management properties of silicon nanowire
Li, Yingfeng; Li, Meicheng; Li, Ruike; Fu, Pengfei; Wang, Tai; Luo, Younan; Mbengue, Joseph Michel; Trevor, Mwenya
2016-01-01
Unique photon management (PM) properties of silicon nanowire (SiNW) make it an attractive building block for a host of nanowire photonic devices including photodetectors, chemical and gas sensors, waveguides, optical switches, solar cells, and lasers. However, the lack of efficient equations for the quantitative estimation of the SiNW's PM properties limits the rational design of such devices. Herein, we establish comprehensive equations to evaluate several important performance features of the PM properties of SiNW, based on theoretical simulations. First, the relationships between the resonant wavelengths (RW), where SiNW can harvest light most effectively, and the size of SiNW are formulated. Then, equations for the light-harvesting efficiency at RW, which determines the single-frequency performance limit of SiNW-based photonic devices, are established. Finally, equations for the light-harvesting efficiency of SiNW over the full spectrum, which are of great significance in photovoltaics, are established. Furthermore, using these equations, we derive four extra formulas to estimate the optimal size of SiNW for light-harvesting. These equations reproduce the majority of the reported experimental and theoretical results with only ~5% deviation. Our study fills a gap in quantitatively predicting the SiNW's PM properties, which will contribute significantly to its practical applications. PMID:27103087
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of the predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models, which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.
Suk, Heung-Il; Lee, Seong-Whan
2013-02-01
As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI, in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method that extends a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure the discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.
Zhai, Haibo; Frey, H Christopher; Rouphail, Nagui M; Gonçalves, Gonçalo A; Farias, Tiago L
2009-08-01
The objective of this research is to evaluate differences in fuel consumption and tailpipe emissions of flexible fuel vehicles (FFVs) operated on ethanol 85 (E85) versus gasoline. Theoretical ratios of fuel consumption and carbon dioxide (CO2) emissions for both fuels are estimated based on the same amount of energy released. Second-by-second fuel consumption and emissions from one FFV Ford Focus fueled with E85 and gasoline were measured under real-world traffic conditions in Lisbon, Portugal, using a portable emissions measurement system (PEMS). Cycle average dynamometer fuel consumption and emission test results for FFVs are available from the U.S. Department of Energy, and emissions certification test results for ethanol-fueled vehicles are available from the U.S. Environmental Protection Agency. On the basis of the PEMS data, vehicle-specific power (VSP)-based modal average fuel and emission rates for both fuels are estimated. For E85 versus gasoline, empirical ratios of fuel consumption and CO2 emissions agree with the theoretical expectations within a margin of error. Carbon monoxide (CO) emissions were found to be typically lower for E85. From the PEMS data, nitric oxide (NO) emissions associated with some higher VSP modes are higher for E85. From the dynamometer and certification data, average hydrocarbon (HC) and nitrogen oxides (NOx) emission differences vary depending on the vehicle. The differences of average E85 versus gasoline emission rates for all vehicle models are -22% for CO, 12% for HC, and -8% for NOx emissions, which imply that replacing gasoline with E85 reduces CO emissions, may moderately decrease NOx tailpipe emissions, and may increase HC tailpipe emissions. On a fuel life cycle basis for corn-based ethanol versus gasoline, CO emissions are estimated to decrease by 18%. Life-cycle total and fossil CO2 emissions are estimated to decrease by 25 and 50%, respectively; however, life-cycle HC and NOx emissions are estimated to increase by 18 and 82%, respectively.
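The "theoretical ratio" reasoning can be reproduced with typical fuel-property figures: at equal energy delivered, volumetric fuel use and tailpipe CO2 follow from volumetric heating values and per-litre carbon emissions. The property values below are common literature numbers, not the paper's inputs.

```python
# Minimal sketch: theoretical E85/gasoline ratios at equal energy released.
# Heating values and CO2 factors are typical literature figures (assumed).
LHV = {"gasoline": 32.0, "ethanol": 21.2}    # lower heating value, MJ/L
CO2 = {"gasoline": 2.31, "ethanol": 1.51}    # kg CO2 per litre burned

def blend(prop, ethanol_frac=0.85):
    return ethanol_frac * prop["ethanol"] + (1 - ethanol_frac) * prop["gasoline"]

fuel_ratio = LHV["gasoline"] / blend(LHV)    # litres of E85 per litre of gasoline
co2_ratio = (blend(CO2) / blend(LHV)) / (CO2["gasoline"] / LHV["gasoline"])
print(f"volumetric fuel-use ratio (E85/gasoline): {fuel_ratio:.2f}")
print(f"CO2 ratio at equal energy (E85/gasoline): {co2_ratio:.2f}")
```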
NASA Astrophysics Data System (ADS)
Lian, Tao; Shen, Zheqi; Ying, Jun; Tang, Youmin; Li, Junde; Ling, Zheng
2018-03-01
A new criterion was proposed recently to measure the influence of internal variations on secular trends in a time series. When the magnitude of the trend is greater than a theoretical threshold that scales the influence from internal variations, the sign of the estimated trend can be interpreted as the underlying long-term change. Otherwise, the sign may depend on the period chosen. An improved least squares method is developed here to further reduce the theoretical threshold and is applied to eight sea surface temperature (SST) data sets covering the period 1881-2013 to investigate whether there are robust trends in global SSTs. It is found that the warming trends in the western boundary regions, the South Atlantic, and the tropical and southern-most Indian Ocean are robust. However, robust trends are not found in the North Pacific, the North Atlantic, or the South Indian Ocean. The globally averaged SST and Indian Ocean Dipole indices are found to have robustly increased, whereas trends in the zonal SST gradient across the equatorial Pacific, Niño 3.4 SST, and the Atlantic Multidecadal Oscillation indices are within the uncertainty range associated with internal variations. These results indicate that great care is required when interpreting SST trends using the available records in certain regions and indices. It is worth noting that the theoretical threshold can be strongly influenced by low-frequency oscillations, and the above conclusions are based on the assumption that trends are linear. Caution should be exercised when applying the theoretical threshold criterion to real data.
Bayesian averaging over Decision Tree models for trauma severity scoring.
Schetinin, V; Jakaite, L; Krzanowski, W
2018-01-01
Health care practitioners analyse possible risks of misleading decisions and need to estimate and quantify uncertainty in predictions. We have examined the "gold" standard of screening a patient's condition for predicting survival probability, based on logistic regression modelling, which is used in trauma care for clinical purposes and quality audit. This methodology is based on theoretical assumptions about data and uncertainties. Models induced within such an approach have exposed a number of problems, producing unexplained fluctuation of predicted survival and low accuracy in estimating the uncertainty intervals within which predictions are made. The Bayesian method, which in theory is capable of providing accurate predictions and uncertainty estimates, has been adopted in our study using Decision Tree models. Our approach has been tested on a large set of patients registered in the US National Trauma Data Bank and has outperformed the standard method in terms of prediction accuracy, thereby providing practitioners with accurate estimates of the predictive posterior densities of interest that are required for making risk-aware decisions. Copyright © 2017 Elsevier B.V. All rights reserved.
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
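The core Green-Kubo computation, integrating a replica-averaged flux autocorrelation function, can be sketched as below on a synthetic flux with known correlation time. Physical prefactors (volume, k_B T^2, and so on) are omitted, and the Ornstein-Uhlenbeck stand-in is an assumption for illustration, not the paper's molecular dynamics data.

```python
# Minimal sketch: Green-Kubo transport coefficient as the integral of a
# replica-averaged autocorrelation function. Synthetic (OU) flux data.
import numpy as np

def autocorr(x):
    n = len(x)
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")[n - 1:]
    return full / np.arange(n, 0, -1)            # unbiased normalization

rng = np.random.default_rng(2)
dt, n, n_rep, tau_true = 0.01, 5000, 8, 0.5
acfs = []
for _ in range(n_rep):                           # ensemble of parallel replicas
    x = np.zeros(n)
    for i in range(1, n):                        # OU process, corr. time tau_true
        x[i] = x[i - 1] * (1 - dt / tau_true) + rng.normal(0, np.sqrt(dt))
    acfs.append(autocorr(x)[: n // 10])          # truncate before noise dominates
acf = np.mean(acfs, axis=0)

coeff = acf.sum() * dt                           # time integral of the ACF
print(f"GK integral: {coeff:.3f} "
      f"(exact for this process: {0.5 * tau_true**2:.3f})")
```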
Generalized Centroid Estimators in Bioinformatics
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
2011-01-01
In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suited to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
The Shannon entropy as a measure of diffusion in multidimensional dynamical systems
NASA Astrophysics Data System (ADS)
Giordano, C. M.; Cincotta, P. M.
2018-05-01
In the present work, we introduce two new estimators of chaotic diffusion based on the Shannon entropy. Using theoretical, heuristic and numerical arguments, we show that the entropy, S, provides a measure of the diffusion extent of a given small initial ensemble of orbits, while an indicator related to the time derivative of the entropy, S', estimates the diffusion rate. We show that in the limiting case of near ergodicity, after an appropriate normalization, S' coincides with the standard homogeneous diffusion coefficient. The first application of this formulation to a 4D symplectic map and to the Arnold Hamiltonian reveals very successful and encouraging results.
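A minimal numerical sketch of the entropy-based estimators: evolve a small initial ensemble, histogram it into cells, and track the Shannon entropy S; the early-time slope plays the role of S'. A plain random walk stands in for the symplectic map, purely for illustration.

```python
# Minimal sketch: Shannon entropy of cell occupancies as a diffusion
# measure, with its early-time growth rate as a diffusion-rate proxy.
import numpy as np

rng = np.random.default_rng(3)
n_orbits, n_steps, n_cells = 2000, 200, 50
pos = np.full(n_orbits, 0.5)                           # tiny initial ensemble
entropy = []
for _ in range(n_steps):
    pos = (pos + rng.normal(0, 0.01, n_orbits)) % 1.0  # diffusive stand-in map
    counts, _ = np.histogram(pos, bins=n_cells, range=(0, 1))
    p = counts[counts > 0] / n_orbits
    entropy.append(-np.sum(p * np.log(p)))

rate = np.polyfit(np.arange(50), np.array(entropy)[:50], 1)[0]
print(f"early-time entropy growth rate S': {rate:.4f} per iteration")
```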
Submillimeter, millimeter, and microwave spectral line catalogue
NASA Technical Reports Server (NTRS)
Poynter, R. L.; Pickett, H. M.
1980-01-01
A computer-accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 μm) is discussed. The catalogue was used as a planning guide and as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, the lower state energy, and the quantum number assignment. The catalogue was constructed using theoretical least-squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances.
CO2 storage capacity estimation: Methodology and gaps
Bachu, S.; Bonijoly, D.; Bradshaw, J.; Burruss, R.; Holloway, S.; Christensen, N.P.; Mathiassen, O.M.
2007-01-01
Implementation of CO2 capture and geological storage (CCGS) technology at the scale needed to achieve a significant and meaningful reduction in CO2 emissions requires knowledge of the available CO2 storage capacity. CO2 storage capacity assessments may be conducted at various scales, in decreasing order of size and increasing order of resolution: country, basin, regional, local and site-specific. Estimation of the CO2 storage capacity in depleted oil and gas reservoirs is straightforward and is based on recoverable reserves, reservoir properties and in situ CO2 characteristics. In the case of CO2-EOR, the CO2 storage capacity can be roughly evaluated on the basis of worldwide field experience or more accurately through numerical simulations. Determination of the theoretical CO2 storage capacity in coal beds is based on coal thickness and CO2 adsorption isotherms, and recovery and completion factors. Evaluation of the CO2 storage capacity in deep saline aquifers is very complex because four trapping mechanisms that act at different rates are involved and, at times, all mechanisms may be operating simultaneously. The level of detail and resolution required in the data make reliable and accurate estimation of CO2 storage capacity in deep saline aquifers practical only at the local and site-specific scales. This paper follows a previous one on issues and development of standards for CO2 storage capacity estimation, and provides a clear set of definitions and methodologies for the assessment of CO2 storage capacity in geological media. Notwithstanding the defined methodologies suggested for estimating CO2 storage capacity, major challenges lie ahead because of lack of data, particularly for coal beds and deep saline aquifers, lack of knowledge about the coefficients that reduce storage capacity from theoretical to effective and to practical, and lack of knowledge about the interplay between various trapping mechanisms at work in deep saline aquifers. © 2007 Elsevier Ltd. All rights reserved.
Type-curve estimation of statistical heterogeneity
NASA Astrophysics Data System (ADS)
Neuman, Shlomo P.; Guadagnini, Alberto; Riva, Monica
2004-04-01
The analysis of pumping tests has traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. We explore numerically the feasibility of using a simple graphical approach (without numerical inversion) to estimate the geometric mean, integral scale, and variance of local log transmissivity on the basis of quasi steady state head data when a randomly heterogeneous confined aquifer is pumped at a constant rate. By local log transmissivity we mean a function varying randomly over horizontal distances that are small in comparison with a characteristic spacing between pumping and observation wells during a test. Experimental evidence and hydrogeologic scaling theory suggest that such a function would tend to exhibit an integral scale well below the maximum well spacing. This is in contrast to equivalent transmissivities derived from pumping tests by treating the aquifer as being locally uniform (on the scale of each test), which tend to exhibit regional-scale spatial correlations. We show that whereas the mean and integral scale of local log transmissivity can be estimated reasonably well based on theoretical ensemble mean variations of head and drawdown with radial distance from a pumping well, estimating the log transmissivity variance is more difficult. We obtain reasonable estimates of the latter based on theoretical variation of the standard deviation of circumferentially averaged drawdown about its mean.
NASA Technical Reports Server (NTRS)
Bugbee, B.; Monje, O.
1992-01-01
Plant scientists have sought to maximize the yield of food crops since the beginning of agriculture. There are numerous reports of record food and biomass yields (per unit area) in all major crop plants, but many of the record yield reports are in error because they exceed the maximal theoretical rates of the component processes. In this article, we review the component processes that govern yield limits and describe how each process can be individually measured. This procedure has helped us validate theoretical estimates and determine what factors limit yields in optimal environments.
Partitioning medical image databases for content-based queries on a Grid.
Montagnat, J; Breton, V; E Magnin, I
2005-01-01
In this paper we study the impact of executing a medical image database query application on the grid. To lower the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
Probability of the moiré effect in barrier and lenticular autostereoscopic 3D displays.
Saveljev, Vladimir; Kim, Sung-Kyu
2015-10-05
The probability of the moiré effect in LCD displays is estimated as a function of angle based on experimental data; a theoretical function (node spacing) is proposed based on the distance between nodes. The two functions are close to each other. A connection between the probability of the moiré effect and Thomae's function is also found. The function proposed in this paper can be used to minimize the moiré effect in visual displays, especially in autostereoscopic 3D displays.
Determination of fractional flow reserve (FFR) based on scaling laws: a simulation study
NASA Astrophysics Data System (ADS)
Wong, Jerry T.; Molloi, Sabee
2008-07-01
Fractional flow reserve (FFR) provides an objective physiological evaluation of stenosis severity. A technique that can measure FFR using only angiographic images would be a valuable tool in the cardiac catheterization laboratory. To do this, the diseased blood flow can be measured with a first-pass distribution analysis and the theoretical normal blood flow can be estimated from the total coronary arterial volume based on scaling laws. A computer simulation of the coronary arterial network was used to gain a better understanding of how hemodynamic conditions and coronary artery disease can affect blood flow, arterial volume and FFR estimation. Changes in coronary arterial flow and volume due to coronary stenosis, aortic pressure and venous pressure were examined to evaluate the potential use of flow and volume for FFR determination. This study showed that FFR can be estimated using arterial volume and a scaling coefficient corrected for aortic pressure. However, variations in venous pressure were found to introduce some error into the FFR estimate. A relative form of FFR was introduced and was found to cancel out the influence of pressure on coronary flow, arterial volume and FFR estimation. The use of coronary flow and arterial volume for FFR determination appears promising.
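The scaling-law step can be sketched in a few lines: estimate the theoretical normal flow from total arterial volume with a power law, then take FFR as the ratio of measured to normal flow. The exponent and coefficient below are assumptions for illustration, not values from the study.

```python
# Minimal sketch: FFR as measured flow over a volume-derived normal flow.
# Scaling coefficient k and the exponent are assumed illustrative values.
def ffr_estimate(q_measured_mL_s, arterial_volume_mL, k=1.0, exponent=0.75):
    q_normal = k * arterial_volume_mL ** exponent    # scaling-law normal flow
    return q_measured_mL_s / q_normal

print(f"FFR ~ {ffr_estimate(q_measured_mL_s=1.8, arterial_volume_mL=3.0):.2f}")
```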
Risk analysis for autonomous underwater vehicle operations in extreme environments.
Brito, Mario Paulo; Griffiths, Gwyn; Challenor, Peter
2010-12-01
Autonomous underwater vehicles (AUVs) are used increasingly to explore hazardous marine environments. Risk assessment for such complex systems is based on subjective judgment and expert knowledge as much as on hard statistics. Here, we describe the use of a risk management process tailored to AUV operations, the implementation of which requires the elicitation of expert judgment. We conducted a formal judgment elicitation process where eight world experts in AUV design and operation were asked to assign a probability of AUV loss given the emergence of each fault or incident from the vehicle's life history of 63 faults and incidents. After discussing methods of aggregation and analysis, we show how the aggregated risk estimates obtained from the expert judgments were used to create a risk model. To estimate AUV survival with mission distance, we adopted a statistical survival function based on the nonparametric Kaplan-Meier estimator. We present theoretical formulations for the estimator, its variance, and confidence limits. We also present a numerical example where the approach is applied to estimate the probability that the Autosub3 AUV would survive a set of missions under Pine Island Glacier, Antarctica in January-March 2009. © 2010 Society for Risk Analysis.
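The nonparametric survival step is the standard Kaplan-Meier product over loss events, here indexed by mission distance rather than time. The mission distances and loss flags below are invented for illustration.

```python
# Minimal sketch: Kaplan-Meier survival over mission distance,
# S(d) = prod_{d_i <= d} (1 - deaths_i / at_risk_i). Data are invented.
import numpy as np

def kaplan_meier(distances_km, lost):
    order = np.argsort(distances_km)
    d = np.asarray(distances_km)[order]
    f = np.asarray(lost)[order]
    at_risk, surv, curve = len(d), 1.0, []
    for di, fi in zip(d, f):
        if fi:                          # vehicle loss at distance di
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1                    # loss or mission end: leaves the risk set
        curve.append((di, surv))
    return curve

missions = [120, 300, 450, 500, 610, 800]              # km travelled
lost     = [False, True, False, False, True, False]    # True = vehicle loss
for d, s in kaplan_meier(missions, lost):
    print(f"distance {d:4d} km: survival {s:.2f}")
```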
Accuracy of latent-variable estimation in Bayesian semi-supervised learning.
Yamazaki, Keisuke
2015-09-01
Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in unsupervised learning, and one concern is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of latent-variable estimation. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin
2011-12-01
Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction mapping condition. It is found that the values in phase space do not always converge to their initial values under sufficient backward iteration of the symbolic vectors, in terms of global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the existing CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performance of the initial vector estimation at different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelbe, David; Oak Ridge National Lab.; van Aardt, Jan
Terrestrial laser scanning has demonstrated increasing potential for rapid comprehensive measurement of forest structure, especially when multiple scans are spatially registered in order to reduce the limitations of occlusion. Although marker-based registration techniques (based on retro-reflective spherical targets) are commonly used in practice, a blind marker-free approach is preferable, insofar as it supports rapid operational data acquisition. To support these efforts, we extend the pairwise registration approach of our earlier work, and develop a graph-theoretical framework to perform blind marker-free global registration of multiple point cloud data sets. Pairwise pose estimates are weighted based on their estimated error, in order to overcome pose conflict while exploiting redundant information and improving precision. The proposed approach was tested for eight diverse New England forest sites, with 25 scans collected at each site. Quantitative assessment was provided via a novel embedded confidence metric, with a mean estimated root-mean-square error of 7.2 cm and 89% of scans connected to the reference node. Lastly, this paper assesses the validity of the embedded multiview registration confidence metric and evaluates the performance of the proposed registration algorithm.
Sabra, Karim G
2010-06-01
It has been demonstrated theoretically and experimentally that an estimate of the Green's function between two receivers can be obtained by cross-correlating acoustic (or elastic) ambient noise recorded at these two receivers. Coherent wavefronts emerge from the noise cross-correlation time function due to the accumulated contributions over time from noise sources whose propagation path pass through both receivers. Previous theoretical studies of the performance of this passive imaging technique have assumed that no relative motion between noise sources and receivers occurs. In this article, the influence of noise sources motion (e.g., aircraft or ship) on this passive imaging technique was investigated theoretically in free space, using a stationary phase approximation, for stationary receivers. The theoretical results were extended to more complex environments, in the high-frequency regime, using first-order expansions of the Green's function. Although sources motion typically degrades the performance of wideband coherent processing schemes, such as time-delay beamforming, it was found that the Green's function estimated from ambient noise cross-correlations are not expected to be significantly affected by the Doppler effect, even for supersonic sources. Numerical Monte-Carlo simulations were conducted to confirm these theoretical predictions for both cases of subsonic and supersonic moving sources.
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levitas, Valery I., E-mail: vlevitas@iastate.edu; McCollum, Jena; Pantoya, Michelle L.
2015-09-07
Dilatation of the aluminum (Al) core of micron-scale particles covered by an alumina (Al2O3) shell was measured using x-ray diffraction with synchrotron radiation, for untreated particles and for particles after annealing at 573 K and fast quenching at 0.46 K/s. This treatment led to a 32% increase in flame rate for the Al + CuO composite and is consistent with theoretical predictions based on the melt-dispersion mechanism of reaction for Al particles. Experimental results confirmed the theoretical estimates and proved that the improvement in Al reactivity is due to internal stresses. This opens new ways of controlling particle reactivity through creating and monitoring internal stresses.
Theoretical and Experimental Investigation of Particle Trapping via Acoustic Bubbles
NASA Astrophysics Data System (ADS)
Chen, Yun; Fang, Zecong; Merritt, Brett; Saadat-Moghaddam, Darius; Strack, Dillon; Xu, Jie; Lee, Sungyon
2014-11-01
One important application of lab-on-a-chip devices is the trapping and sorting of micro-objects, with acoustic bubbles emerging as an effective, non-contact method. Acoustically actuated bubbles are known to exert a secondary radiation force on micro-particles and trap them, when this radiation force exceeds the drag force that acts to keep the particles in motion. In this study, we theoretically evaluate the magnitudes of these two forces for varying actuation frequencies and voltages. In particular, the secondary radiation force is calculated directly from bubble oscillation shapes that have been experimentally measured for varying acoustic parameters. Finally, based on the force estimates, we predict the threshold voltage and frequency for trapping and compare them to the experimental results.
Estimating phosphorus availability for microbial growth in an emerging landscape
Schmidt, S.K.; Cleveland, C.C.; Nemergut, D.R.; Reed, S.C.; King, A.J.; Sowell, P.
2011-01-01
Estimating phosphorus (P) availability is difficult—particularly in infertile soils such as those exposed after glacial recession—because standard P extraction methods may not mimic biological acquisition pathways. We developed an approach, based on microbial CO2 production kinetics and conserved carbon:phosphorus (C:P) ratios, to estimate the amount of P available for microbial growth in soils and compared this method to traditional, operationally defined indicators of P availability. Along a primary succession gradient in the High Andes of Perú, P additions stimulated the growth-related (logistic) kinetics of glutamate mineralization in soils that had been deglaciated from 0 to 5 years, suggesting that microbial growth was limited by soil P availability. We then used a logistic model to estimate the amount of C incorporated into biomass in P-limited soils, allowing us to estimate total microbial P uptake based on a conservative C:P ratio of 28:1 (mass:mass). Using this approach, we estimated that there was < 1 μg/g of microbial-available P in recently de-glaciated soils in both years of this study. These estimates fell well below estimates of available soil P obtained using traditional extraction procedures. Our results give both theoretical and practical insights into the kinetics of C and P utilization in young soils, as well as show changes in microbial P availability during early stages of soil development.
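The closing arithmetic of this approach is simple enough to sketch directly; the numbers below are invented placeholders rather than the study's data, with only the 28:1 C:P ratio taken from the abstract.

```python
# Back-of-envelope version of the microbial P estimate: carbon incorporated
# into biomass (inferred from the logistic fit to CO2 production kinetics)
# divided by the conservative biomass C:P mass ratio.
c_biomass = 25.0        # ug C incorporated per g soil (illustrative value)
cp_ratio = 28.0         # biomass C:P (mass:mass), as assumed in the study
p_available = c_biomass / cp_ratio
print(p_available)      # ~0.9 ug P per g soil, i.e. below the 1 ug/g level
```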
Network meta-analysis, electrical networks and graph theory.
Rücker, Gerta
2012-12-01
Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling and including multi-armed trials are addressed. Copyright © 2012 John Wiley & Sons, Ltd.
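A minimal sketch of the electrical analogy, assuming a toy four-treatment network with invented effects and variances: inverse variances act as conductances, the consistent treatment effects come from the Moore-Penrose pseudoinverse of the graph Laplacian, and contrast variances are effective resistances.

```python
import numpy as np

# Hypothetical 4-treatment network: edges are randomized comparisons.
# Edge (i, j) carries an observed effect y_e (treatment j minus i) and
# a variance var_e; all numbers are illustrative.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
y = np.array([0.5, 0.8, 0.2, 1.0, 0.9])        # observed mean differences
var = np.array([0.04, 0.09, 0.05, 0.08, 0.06])  # their variances

n, m = 4, len(edges)
B = np.zeros((m, n))                 # edge-vertex incidence matrix
for e, (i, j) in enumerate(edges):
    B[e, i], B[e, j] = -1.0, 1.0
W = np.diag(1.0 / var)               # inverse variances = conductances

L = B.T @ W @ B                      # graph Laplacian
Lplus = np.linalg.pinv(L)            # Moore-Penrose pseudoinverse

# Consistent "potentials" (treatment effects up to an additive constant)
v = Lplus @ (B.T @ W @ y)
consistent_edge_effects = B @ v      # flow-consistent version of y

def effective_resistance(i, j):
    # Effective resistance = variance of the estimated i-vs-j contrast
    return Lplus[i, i] + Lplus[j, j] - 2 * Lplus[i, j]

print(consistent_edge_effects)
print(effective_resistance(0, 3))    # variance of the 0-vs-3 contrast
```

Applied to a two-treatment network, the same computation collapses to the usual fixed effect inverse-variance estimate, as the abstract notes.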
A Method to Estimate the Masses of Asymptotic Giant Branch Variable Stars
NASA Astrophysics Data System (ADS)
Takeuti, Mine; Nakagawa, Akiharu; Kurayama, Tomoharu; Honma, Mareki
2013-06-01
AGB variable stars are at the transient phase between low and high mass-loss rates; estimating the masses of these stars is necessary to study the evolutionary and mass-loss processes during the AGB stage. We applied the pulsation constant theoretically derived by Xiong and Deng (2007 MNRAS, 378, 1270) to 15 galactic AGB stars in order to estimate their masses. We found that the pulsation constant is effective for estimating the mass of a star pulsating in two different modes, such as S Crt and RX Boo, providing mass estimates comparable to theoretical results of AGB star evolution. We also extended the use of the pulsation constant to single-mode variables and analyzed the properties of AGB stars related to their masses.
Computational control of flexible aerospace systems
NASA Technical Reports Server (NTRS)
Sharpe, Lonnie, Jr.; Shen, Ji Yao
1994-01-01
The main objective of this project is to establish a distributed parameter modeling technique for structural analysis, parameter estimation, vibration suppression and control synthesis of large flexible aerospace structures. This report concentrates on the research outputs produced in the last two years. The main accomplishments can be summarized as follows. A new version of the PDEMOD code has been completed based on several incomplete versions. The code has been verified by comparing its results with examples for which exact theoretical solutions can be obtained. The theoretical background of the package and the verification examples have been reported in a technical paper submitted to the Joint Applied Mechanics & Material Conference, ASME. A brief USER'S MANUAL has been compiled, which includes three parts: (1) input data preparation; (2) explanation of the subroutines; and (3) specification of control variables. Meanwhile, a theoretical investigation of the NASA MSFC two-dimensional ground-based manipulator facility using the distributed parameter modeling technique has been conducted. A new mathematical treatment for dynamic analysis and control of large flexible manipulator systems has been conceived, which may provide an embryonic form of a more sophisticated mathematical model for future versions of the PDEMOD code.
The effect of time synchronization of wireless sensors on the modal analysis of structures
NASA Astrophysics Data System (ADS)
Krishnamurthy, V.; Fowler, K.; Sazonov, E.
2008-10-01
Driven by the need to reduce the installation cost and maintenance cost of structural health monitoring (SHM) systems, wireless sensor networks (WSNs) are becoming increasingly popular. Perfect time synchronization amongst the wireless sensors is a key factor enabling the use of low-cost, low-power WSNs for structural health monitoring applications based on output-only modal analysis of structures. In this paper we present a theoretical framework for analysis of the impact created by time delays in the measured system response on the reconstruction of mode shapes using the popular frequency domain decomposition (FDD) technique. This methodology directly estimates the change in mode shape values based on sensor synchronicity. We confirm the proposed theoretical model by experimental validation in modal identification experiments performed on an aluminum beam. The experimental validation was performed using a wireless intelligent sensor and actuator network (WISAN) which allows for close time synchronization between sensors (0.6-10 µs in the tested configuration) and guarantees lossless data delivery under normal conditions. The experimental results closely match theoretical predictions and show that even very small delays in output response impact the mode shapes.
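The mechanism is easy to reproduce: a delay tau at one sensor multiplies its spectral component by exp(-j omega tau), rotating the corresponding entry of the identified mode shape. A minimal sketch with an invented mode shape, modal frequency, and delay (not the paper's beam data):

```python
import numpy as np

# Effect of a sensor time delay on an FDD-style mode shape estimate:
# the delayed sensor's component is rotated in phase at the modal frequency.
phi = np.array([0.5, 1.0, 0.5])       # true mode shape at 3 sensors
f_mode = 20.0                          # Hz, modal frequency
tau = np.array([0.0, 0.0, 5e-3])       # s, 5 ms delay at the third sensor

phi_measured = phi * np.exp(-1j * 2 * np.pi * f_mode * tau)

# Modal assurance criterion (MAC) between true and delayed shapes
num = abs(phi @ phi_measured.conj()) ** 2
den = (phi @ phi) * (phi_measured @ phi_measured.conj()).real
print(num / den)   # < 1: the synchronization error distorts the shape
```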
Estimating Effects with Rare Outcomes and High Dimensional Covariates: Knowledge is Power
Ahern, Jennifer; Galea, Sandro; van der Laan, Mark
2016-01-01
Many of the secondary outcomes in observational studies and randomized trials are rare. Methods for estimating causal effects and associations with rare outcomes, however, are limited, and this represents a missed opportunity for investigation. In this article, we construct a new targeted minimum loss-based estimator (TMLE) for the effect or association of an exposure on a rare outcome. We focus on the causal risk difference and statistical models incorporating bounds on the conditional mean of the outcome, given the exposure and measured confounders. By construction, the proposed estimator constrains the predicted outcomes to respect this model knowledge. Theoretically, this bounding provides stability and power to estimate the exposure effect. In finite sample simulations, the proposed estimator performed as well as, if not better than, alternative estimators, including a propensity score matching estimator, an inverse probability of treatment weighted (IPTW) estimator, an augmented IPTW estimator and the standard TMLE algorithm. The new estimator yielded consistent estimates if either the conditional mean outcome or the propensity score was consistently estimated. As a substitution estimator, TMLE guaranteed the point estimates were within the parameter range. We applied the estimator to investigate the association between permissive neighborhood drunkenness norms and alcohol use disorder. Our results highlight the potential for double robust, semiparametric efficient estimation with rare events and high dimensional covariates. PMID:28529839
A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes
NASA Astrophysics Data System (ADS)
Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac
2012-11-01
In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. Rather than adopting the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We describe various alternatives for implementing the algorithm and benchmark the theoretical findings with concrete applications to random polytopes. The results obtained with our approach are found to be in very good agreement with the estimates produced by the Hit-and-Run algorithm, which is known to produce uniform sampling.
Information analysis of hyperspectral images from the hyperion satellite
NASA Astrophysics Data System (ADS)
Puzachenko, Yu. G.; Sandlersky, R. B.; Krenke, A. N.; Puzachenko, M. Yu.
2017-07-01
A new method of analyzing the outgoing radiation spectra obtained from the Hyperion EO-1 satellite is considered. In theoretical terms, this method is based on the nonequilibrium thermodynamics concept with corresponding estimates of the entropy and the Kullback information. The obtained information estimates make it possible to assess the effective work of the landscape cover, both in general and for its various types, and to identify the spectral ranges primarily responsible for the information increment and, accordingly, for the effective work. The information is measured in the frequency band intervals corresponding to the peaks of solar radiation absorption by different pigments, mesophyll, and water, in order to evaluate the system's operation in terms of synthesis and moisture accumulation. This method is expected to be effective in investigations of ecosystem functioning by hyperspectral remote sensing.
Highway traffic estimation of improved precision using the derivative-free nonlinear Kalman Filter
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Zervos, Nikolaos; Melkikh, Alexey
2015-12-01
The paper proves that the PDE dynamic model of highway traffic is differentially flat and, by applying spatial discretization, shows that the model can be transformed into an equivalent linear canonical state-space form. For the latter representation of the traffic dynamics, state estimation is performed with the Derivative-free nonlinear Kalman Filter. The proposed filter consists of the Kalman Filter recursion applied to the transformed state-space model of the highway traffic. Moreover, it makes use of an inverse transformation, based again on differential flatness theory, which makes it possible to obtain estimates of the state variables of the initial nonlinear PDE model. By avoiding approximate linearizations and the truncation of nonlinear terms from the PDE model of the traffic dynamics, the proposed filtering method outperforms, in terms of accuracy, other nonlinear estimators such as the Extended Kalman Filter. The article's theoretical findings are confirmed through simulation experiments.
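For reference, the recursion at the core of the approach is the standard Kalman filter applied to a linear canonical state-space model. The sketch below uses illustrative matrices, not the flatness-transformed traffic model of the paper:

```python
import numpy as np

# Minimal Kalman filter recursion on a generic linear state-space model;
# A, C, Q, R are illustrative assumptions standing in for the transformed
# traffic dynamics.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition
C = np.array([[1.0, 0.0]])               # measurement matrix
Q = 1e-3 * np.eye(2)                      # process noise covariance
R = np.array([[1e-2]])                    # measurement noise covariance

def kf_step(x, P, z):
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
for z in np.array([[0.1], [0.22], [0.35]]):   # toy measurements
    x, P = kf_step(x, P, z)
print(x)
```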
Interpreting estimates of heritability--a note on the twin decomposition.
Stenberg, Anders
2013-03-01
While most outcomes may in part be genetically mediated, quantifying genetic heritability is a different matter. Exploring data on twins and decomposing the variation is a classical method to determine whether variation in outcomes, e.g. IQ or schooling, originates from genetic endowments or environmental factors. Despite some criticism, the model is still widely used. The critique generally concerns how estimates of heritability may encompass environmental mediation. This aspect is sometimes left implicit by authors even though its relevance for the interpretation is potentially profound. This short note is an appeal for clarity from authors when interpreting the magnitude of heritability estimates. It is demonstrated how disregarding existing theoretical contributions can easily lead to unnecessary misinterpretations and/or controversies. The key arguments are also relevant for estimates based on data from adopted children or from modern molecular genetics research. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wellen, Christopher; Arhonditsis, George B.; Labencki, Tanya; Boyd, Duncan
2012-10-01
Regression-type, hybrid empirical/process-based models (e.g., SPARROW, PolFlow) have assumed a prominent role in efforts to estimate the sources and transport of nutrient pollution at river basin scales. However, almost no attempts have been made to explicitly accommodate interannual nutrient loading variability in their structure, despite empirical and theoretical evidence indicating that the associated source/sink processes are quite variable at annual timescales. In this study, we present two methodological approaches to accommodate interannual variability with the Spatially Referenced Regressions on Watershed attributes (SPARROW) nonlinear regression model. The first strategy uses the SPARROW model to estimate a static baseline load and climatic variables (e.g., precipitation) to drive the interannual variability. The second approach allows the source/sink processes within the SPARROW model to vary at annual timescales using dynamic parameter estimation techniques akin to those used in dynamic linear models. Model parameterization is founded upon Bayesian inference techniques that explicitly consider calibration data and model uncertainty. Our case study is the Hamilton Harbor watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. Our analysis suggests that dynamic parameter estimation is the more parsimonious of the two strategies tested and can offer insights into the temporal structural changes associated with watershed functioning. Consistent with empirical and theoretical work, model estimated annual in-stream attenuation rates varied inversely with annual discharge. Estimated phosphorus source areas were concentrated near the receiving water body during years of high in-stream attenuation and dispersed along the main stems of the streams during years of low attenuation, suggesting that nutrient source areas are subject to interannual variability.
Matsuzaki, Ryosuke; Tachikawa, Takeshi; Ishizuka, Junya
2018-03-01
Accurate simulations of carbon fiber-reinforced plastic (CFRP) molding are vital for the development of high-quality products. However, such simulations are challenging and previous attempts to improve the accuracy of simulations by incorporating the data acquired from mold monitoring have not been completely successful. Therefore, in the present study, we developed a method to accurately predict various CFRP thermoset molding characteristics based on data assimilation, a process that combines theoretical and experimental values. The degree of cure as well as temperature and thermal conductivity distributions during the molding process were estimated using both temperature data and numerical simulations. An initial numerical experiment demonstrated that the internal mold state could be determined solely from the surface temperature values. A subsequent numerical experiment to validate this method showed that estimations based on surface temperatures were highly accurate in the case of degree of cure and internal temperature, although predictions of thermal conductivity were more difficult.
NASA Astrophysics Data System (ADS)
Li, Qiang; Argatov, Ivan; Popov, Valentin L.
2018-04-01
A recent paper by Popov, Pohrt and Li (PPL) in Friction investigated adhesive contacts of flat indenters with unusual shapes using numerical, analytical and experimental methods. Based on that paper, we analyze some special cases for which analytical solutions are known. As in the PPL paper, we consider adhesive contact in the Johnson-Kendall-Roberts approximation. Depending on the energy balance, different upper and lower estimates are obtained in terms of certain integral characteristics of the contact area. The special cases of an elliptical punch as well as a system of two circular punches are considered. Theoretical estimates of the first critical force (the force at which the detachment process begins) are confirmed by numerical simulations using the adhesive boundary element method. It is shown that simpler approximations for the pull-off force, based both on the Holm radius of contact and on the contact area, substantially overestimate the maximum adhesive force.
Murdande, Sharad B; Pikal, Michael J; Shanker, Ravi M; Bogner, Robin H
2010-12-01
To quantitatively assess the solubility advantage of amorphous forms of nine insoluble drugs with a wide range of physico-chemical properties utilizing a previously reported thermodynamic approach. Thermal properties of amorphous and crystalline forms of drugs were measured using modulated differential calorimetry. Equilibrium moisture sorption uptake by amorphous drugs was measured by a gravimetric moisture sorption analyzer, and ionization constants were determined from the pH-solubility profiles. Solubilities of crystalline and amorphous forms of drugs were measured in de-ionized water at 25°C. Polarized microscopy was used to provide qualitative information about the crystallization of amorphous drug in solution during solubility measurement. For three of the nine compounds, the estimated solubility based on thermodynamic considerations was within two-fold of the experimental measurement. For one compound, the estimated solubility enhancement was lower than the experimental value, likely due to extensive ionization in solution and hence its sensitivity to error in the pKa measurement. For the remaining five compounds, the estimated solubility was about 4- to 53-fold higher than the experimental results. In all cases where the theoretical solubility estimates were significantly higher, it was observed that the amorphous drug crystallized rapidly during the experimental determination of solubility, thus preventing an accurate experimental assessment of solubility advantage. It has been demonstrated that the theoretical approach does provide an accurate estimate of the maximum solubility enhancement by an amorphous drug relative to its crystalline form for structurally diverse insoluble drugs when recrystallization during dissolution is minimal.
Stochastic Individual-Based Modeling of Bacterial Growth and Division Using Flow Cytometry.
García, Míriam R; Vázquez, José A; Teixeira, Isabel G; Alonso, Antonio A
2017-01-01
A realistic description of the variability in bacterial growth and division is critical to produce reliable predictions of safety risks along the food chain. Individual-based modeling of bacteria provides the theoretical framework to deal with this variability, but it requires information about the individual behavior of bacteria inside populations. In this work, we overcome this problem by estimating the individual behavior of bacteria from population statistics obtained with flow cytometry. For this objective, a stochastic individual-based modeling framework is defined based on standard assumptions during division and exponential growth. The unknown single-cell parameters required for running the individual-based modeling simulations, such as the cell size growth rate, are estimated from the flow cytometry data. Instead of using the individual-based model directly, we make use of a modified Fokker-Planck equation; this single equation simulates the population statistics as a function of the unknown single-cell parameters. We test the validity of the approach by modeling the growth and division of Pediococcus acidilactici within the exponential phase. The estimations reveal the statistics of cell growth and division using only flow cytometry data from a given time. From the relationship between mother and daughter volumes, we also predict that P. acidilactici divides along two successive parallel planes.
[Theoretical model study about the application risk of high risk medical equipment].
Shang, Changhao; Yang, Fenghui
2014-11-01
To establish a theoretical model for monitoring the application risk of high-risk medical equipment at the site of use. The site of use is regarded as a system composed of several subsystems, each consisting of several risk-estimation indicators. After each indicator is quantized, the quantized values are multiplied by their corresponding weights and summed, yielding the risk estimate for each subsystem. Following the same calculation, the subsystem risk estimates are multiplied by their corresponding weights and summed; the cumulative sum is the status indicator of the high-risk medical equipment at the site of use, which reflects its application risk. The resulting theoretical model can monitor the application risk of high-risk medical equipment at the site of use dynamically and specifically.
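A minimal sketch of the two-level weighted aggregation described above; the subsystems, weights, and quantized indicator scores are invented for illustration.

```python
# Two-level weighted aggregation: indicator scores -> subsystem risk ->
# site status indicator. All names and numbers are hypothetical.
subsystems = {
    "power_supply": {"weights": [0.6, 0.4],      "scores": [0.8, 0.5]},
    "maintenance":  {"weights": [0.5, 0.3, 0.2], "scores": [0.9, 0.7, 0.6]},
    "operator_use": {"weights": [0.7, 0.3],      "scores": [0.6, 0.9]},
}
subsystem_weights = {"power_supply": 0.4, "maintenance": 0.35, "operator_use": 0.25}

# Risk estimate of each subsystem: weighted sum of its quantized indicators.
subsystem_risk = {
    name: sum(w * s for w, s in zip(d["weights"], d["scores"]))
    for name, d in subsystems.items()
}

# Site status indicator: weighted sum of the subsystem risk estimates.
status_indicator = sum(subsystem_weights[name] * r
                       for name, r in subsystem_risk.items())
print(subsystem_risk, status_indicator)
```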
Estimating release of carbon from 1990 and 1991 forest fires in Alaska
NASA Technical Reports Server (NTRS)
Kaisischke, Eric S.; French, Nancy H. F.; Bourgeau-Chavez, Laura L.; Christensen, N. L., Jr.
1995-01-01
An improved method to estimate the amounts of carbon released during fires in the boreal forest zone of Alaska in 1990 and 1991 is described. This method divides the state into 64 distinct physiographic regions and estimates the areal extent of five different land covers: two forest types, peat land, tundra, and nonvegetated land. The areal extent of each cover type was estimated from a review of topographic maps of each region and observations of the distribution of forest types within the state. Using previous observations and theoretical models for the two forest types found in interior Alaska, models of biomass accumulation as a function of stand age were developed. Stand age distributions for each region were determined using a statistical distribution based on fire frequency, which was derived from available long-term historical records. Estimates of the degree of biomass combusted were based on recent field observations as well as research reported in the literature. The location and areal extent of fires in this region for 1990 and 1991 were based on both field observations and analysis of satellite (advanced very high resolution radiometer (AVHRR)) data sets. Estimates of average carbon release for the two study years ranged between 2.54 and 3.00 kg/sq m, which is 2.2 to 2.6 times greater than estimates used in other studies of carbon release through biomass burning in boreal forests. Total average annual carbon release for the two years ranged between 0.012 and 0.018 Pg C/yr, with the lower value resulting from the AVHRR estimates of fire location and area.
Sigmoid function based integral-derivative observer and application to autopilot design
NASA Astrophysics Data System (ADS)
Shao, Xingling; Wang, Honglun; Liu, Jun; Tang, Jun; Li, Jie; Zhang, Xiaoming; Shen, Chong
2017-02-01
To handle the problems of accurate signal reconstruction and controller implementation with integral and derivative components in the presence of noisy measurements, and motivated by the design principles of the sigmoid function based tracking differentiator and the nonlinear continuous integral-derivative observer, a novel sigmoid function based integral-derivative observer (SIDO) is developed. The key merit of the proposed SIDO is that it can simultaneously provide continuous integral and derivative estimates with almost no drift phenomena or chattering effect, as well as acceptable noise-tolerance performance, from the output measurement; its stability is established based on exponential stability and singular perturbation theory. In addition, the effectiveness of SIDO in suppressing drift phenomena and high frequency noise is first revealed using the describing function method and confirmed through simulation comparisons. Finally, the theoretical results on SIDO are demonstrated with application to autopilot design: (1) the integral and tracking estimates are extracted from the sensed pitch angular rate contaminated by nonwhite noise in the feedback loop, and (2) the PID (proportional-integral-derivative) attitude controller is realized by adopting the error estimates offered by SIDO instead of the ideal integral and derivative operators, achieving satisfactory tracking performance under control constraints.
Yamada, Hiroyuki; Inomata, Satoshi; Tanimoto, Hiroshi; Hata, Hiroo; Tonokura, Kenichi
2018-05-01
The effects of Reid vapor pressure (RVP) on refueling emissions and the effects of 10% ethanol (E10) fuel on refueling and evaporative emissions were observed using six cars and seven fuels. The results indicated that refueling emissions can be reproduced by a simple theoretical model in which the fuel vapor in the empty space of the tank is pushed out by the refueling process. In this model, the vapor pressures of the fuels can be estimated by the Clausius-Clapeyron equation as a function of temperature. We also evaluated E10 fuel in terms of refueling and evaporative emissions, excluding the effect of ethanol contamination in the canister. E10 fuel had no effect on refueling emissions in cases without onboard refueling vapor recovery. With E10 fuel, permeation emissions increased because of the high permeability of ethanol, and breakthrough emissions appeared earlier but grew more slowly than with normal fuel. Finally, canisters could store more fuel vapor with E10 fuel. Copyright © 2017 Elsevier B.V. All rights reserved.
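A sketch of the model's two ingredients, assuming generic constants: Clausius-Clapeyron gives the vapor pressure at the fuel temperature, and the ideal gas law converts the displaced headspace vapor into an emitted mass. The heat of vaporization, reference vapor pressure, and fuel molar mass below are illustrative assumptions, not the paper's fitted values.

```python
import math

R = 8.314           # J/(mol K), gas constant
dH_vap = 3.4e4      # J/mol, assumed effective heat of vaporization
P_ref = 62.0e3      # Pa, assumed vapor pressure at the reference temperature
T_ref = 311.0       # K (about 100 F, the RVP reference temperature)

def vapor_pressure(T):
    """Clausius-Clapeyron: ln(P/P_ref) = -(dH/R) (1/T - 1/T_ref)."""
    return P_ref * math.exp(-(dH_vap / R) * (1.0 / T - 1.0 / T_ref))

def displaced_vapor_mass(T, empty_volume_m3, molar_mass=0.065):
    """Vapor pushed out of the tank's empty space on refueling (ideal gas)."""
    n = vapor_pressure(T) * empty_volume_m3 / (R * T)   # moles of vapor
    return n * molar_mass                                # kg

print(vapor_pressure(293.15))              # Pa at 20 C
print(displaced_vapor_mass(293.15, 0.03))  # kg for a 30 L empty space
```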
NASA Technical Reports Server (NTRS)
Neitzke, Kurt W.; Guerreiro, Nelson M.
2014-01-01
A design study was completed to explore the theoretical physical capacity (TPC) of the John F. Kennedy International Airport (KJFK) runway system for a north-flow configuration assuming impedance-free (to throughput) air traffic control functionality. Individual runways were modeled using an agent-based, airspace simulation tool, the Airspace Concept Evaluation System (ACES), with all runways conducting both departures and arrivals on a first-come first-served (FCFS) scheduling basis. A realistic future flight schedule was expanded to 3.5 times the traffic level of a selected baseline day, September 26, 2006, to provide a steady overdemand state for KJFK runways. Rules constraining departure and arrival operations were defined to reflect physical limits beyond which safe operations could no longer be assumed. Safety buffers to account for all sources of operational variability were not included in the TPC estimate. Visual approaches were assumed for all arrivals to minimize inter-arrival spacing. Parallel runway operations were assumed to be independent based on lateral spacing distances. Resulting time intervals between successive airport operations were primarily constrained by same-runway and then by intersecting-runway spacing requirements. The resulting physical runway capacity approximates a theoretical limit that cannot be exceeded without modifying runway interaction assumptions. Comparison with current KJFK operational limits for a north-flow runway configuration indicates a substantial throughput gap of approximately 48%. This gap may be further analyzed to determine which part may be feasibly bridged through the deployment of advanced systems and procedures, and which part cannot, because it is either impossible or not cost-effective to control. Advanced systems for bridging the throughput gap may be conceptualized and simulated using this same experimental setup to estimate the level of gap closure achieved.
NASA Astrophysics Data System (ADS)
Livshts, Mikhail A.; Khomyakova, Elena; Evtushenko, Evgeniy G.; Lazarev, Vassili N.; Kulemin, Nikolay A.; Semina, Svetlana E.; Generozov, Edward V.; Govorun, Vadim M.
2015-11-01
Exosomes, small (40-100 nm) extracellular membranous vesicles, attract enormous research interest because they are carriers of disease markers and a prospective delivery system for therapeutic agents. Differential centrifugation, the prevalent method of exosome isolation, frequently produces dissimilar and improper results because of the faulty practice of using a common centrifugation protocol with different rotors. Moreover, as recommended by suppliers, adjusting the centrifugation duration according to rotor K-factors does not work for “fixed-angle” rotors. For both types of rotors - “swinging bucket” and “fixed-angle” - we express the theoretically expected proportion of pelleted vesicles of a given size and the “cut-off” size of completely sedimented vesicles as dependent on the centrifugation force and duration and the sedimentation path-lengths. The proper centrifugation conditions can be selected using relatively simple theoretical estimates of the “cut-off” sizes of vesicles. Experimental verification on exosomes isolated from HT29 cell culture supernatant confirmed the main theoretical statements. Measured by the nanoparticle tracking analysis (NTA) technique, the concentration and size distribution of the vesicles after centrifugation agree with those theoretically expected. To simplify this “cut-off”-size-based adjustment of centrifugation protocol for any rotor, we developed a web-calculator.
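The "cut-off" size logic for a swinging-bucket rotor reduces to Stokes-law sedimentation across the tube's path length. A sketch under assumed vesicle and rotor parameters follows; this is not the authors' web-calculator, and fixed-angle rotors need the modified treatment discussed in the paper.

```python
import math

# Stokes-law "cut-off" size for a swinging-bucket rotor: the smallest vesicle
# diameter that traverses the whole sedimentation path within the run time.
# All parameter values are illustrative assumptions.
eta = 1.0e-3       # Pa s, viscosity of the supernatant (~water at 20 C)
rho_p = 1130.0     # kg/m^3, assumed exosome density
rho_f = 1000.0     # kg/m^3, fluid density
r_min = 0.06       # m, rotor axis to meniscus
r_max = 0.10       # m, rotor axis to tube bottom

def cutoff_diameter(rpm, t_seconds):
    omega = 2.0 * math.pi * rpm / 60.0
    d2 = 18.0 * eta * math.log(r_max / r_min) / (
        (rho_p - rho_f) * omega ** 2 * t_seconds)
    return math.sqrt(d2)

# Example: a 2 h spin at 30,000 rpm in this geometry pellets everything
# above roughly 30 nm, i.e. the exosome size range.
print(cutoff_diameter(30000, 2 * 3600) * 1e9, "nm")
```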
Zhao, Yubin; Li, Xiaofan; Zhang, Sha; Meng, Tianhui; Zhang, Yiwen
2016-08-23
In practical localization system design, researchers need to consider several aspects to make positioning efficient and effective, e.g., the available auxiliary information, sensing devices, equipment deployment and the environment. These practical concerns translate into technical problems, e.g., sequential position state propagation, the target-anchor geometry effect, non-line-of-sight (NLOS) identification and the related prior information. It is necessary to construct an efficient framework that can exploit multiple sources of available information and guide the system design. In this paper, we propose a scalable method to analyze system performance based on the Cramér-Rao lower bound (CRLB), which can fuse all of the information adaptively. First, we use an abstract function to represent the whole wireless localization system model. The unknown vector of the CRLB then consists of two parts: the first part is the estimated vector, and the second part is the auxiliary vector, which helps improve the estimation accuracy. Accordingly, the Fisher information matrix is divided into two parts: the state matrix and the auxiliary matrix. Unlike a purely theoretical analysis, our CRLB can serve as a practical fundamental limit for a system that fuses multiple sources of information in a complicated environment, e.g., recursive Bayesian estimation based on the hidden Markov model, the map matching method and NLOS identification and mitigation methods. Thus, the theoretical results more closely approach the real case. In addition, our method is more adaptable than other CRLBs when considering more unknown important factors. We use the proposed method to analyze a wireless sensor network-based indoor localization system. The influence of hybrid LOS/NLOS channels, building layout information and the relative height differences between the target and anchors is analyzed. It is demonstrated that our method exploits all of the available information for indoor localization systems and serves as an indicator for practical system evaluation.
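The partitioned Fisher information idea can be sketched compactly: the bound on the estimated vector is the inverse of the Schur complement that profiles out the auxiliary block. The FIM entries below are arbitrary illustrative numbers, not derived from any particular localization model.

```python
import numpy as np

# Full FIM for 2 state parameters (e.g., x-y position) plus 1 auxiliary
# parameter; values are illustrative.
J = np.array([[8.0, 1.0, 0.5],
              [1.0, 6.0, 0.3],
              [0.5, 0.3, 2.0]])

J_xx = J[:2, :2]                    # state (estimated vector) block
J_xa = J[:2, 2:]                    # cross block
J_aa = J[2:, 2:]                    # auxiliary block

# Equivalent FIM for the state after accounting for the auxiliary vector
J_eff = J_xx - J_xa @ np.linalg.inv(J_aa) @ J_xa.T
crlb = np.linalg.inv(J_eff)         # bound on the state error covariance
print(np.trace(crlb))               # lower bound on total position MSE
```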
Poisson sampling - The adjusted and unadjusted estimator revisited
Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas
1998-01-01
The prevailing assumption, that for Poisson sampling the adjusted estimator "Y-hat a" is always substantially more efficient than the unadjusted estimator "Y-hat u" , is shown to be incorrect. Some well known theoretical results are applicable since "Y-hat a" is a ratio-of-means estimator and "Y-hat u" a simple unbiased estimator...
Thermodynamic-ensemble independence of solvation free energy.
Chong, Song-Ho; Ham, Sihyun
2015-02-10
Solvation free energy is the fundamental thermodynamic quantity in solution chemistry. Recently, it has been suggested that the partial molar volume correction is necessary to convert the solvation free energy determined in different thermodynamic ensembles. Here, we demonstrate ensemble-independence of the solvation free energy on general thermodynamic grounds. Theoretical estimates of the solvation free energy based on the canonical or grand-canonical ensemble are pertinent to experiments carried out under constant pressure without any conversion.
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
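A compact sketch of the two-point estimate method against a Monte Carlo reference; the response function and parameter values are invented stand-ins for the groundwater model, chosen only to show the mechanics and the 2^n evaluation cost.

```python
import itertools
import numpy as np

# Two-point estimate (Rosenblueth-style): evaluate the model at mu +/- sigma
# for every sign combination (2^n points, equal weights for symmetric,
# uncorrelated inputs), then recover the output mean and std.
def model(storage, conductivity):
    return 10.0 * conductivity / storage   # toy water-table response

means = np.array([0.2, 1.5e-4])            # storage coeff., hydraulic cond.
cvs = np.array([0.10, 0.15])               # coefficients of variation
sigmas = cvs * means

vals = [model(*(means + np.array(s) * sigmas))
        for s in itertools.product((-1.0, 1.0), repeat=2)]
tpe_mean = np.mean(vals)
tpe_std = np.sqrt(np.mean(np.square(vals)) - tpe_mean ** 2)

# Monte Carlo reference (independent normal inputs).
rng = np.random.default_rng(0)
samples = rng.normal(means, sigmas, size=(100000, 2))
mc = model(samples[:, 0], samples[:, 1])
print(tpe_mean, tpe_std)     # 4 model runs
print(mc.mean(), mc.std())   # 100,000 model runs
```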
NASA Technical Reports Server (NTRS)
Kustas, William P.; Choudhury, Bhaskar J.; Kunkel, Kenneth E.
1989-01-01
Surface-air temperature differences are commonly used in a bulk resistance equation for estimating sensible heat flux (H), which is inserted in the one-dimensional energy balance equation to solve for the latent heat flux (LE) as a residual. Serious discrepancies between estimated and measured LE have been observed for partial-canopy-cover conditions, which are mainly attributed to inappropriate estimates of H. To improve the estimates of H over sparse canopies, one- and two-layer resistance models that account for some of the factors causing poor agreement are developed. The utility of the two models is tested with remotely sensed and micrometeorological data for a furrowed cotton field with 20 percent cover and a dry soil surface. It is found that the one-layer model performs better than the two-layer model when a theoretical bluff-body correction for heat transfer is used instead of an empirical adjustment; otherwise, the two-layer model is better.
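For orientation, the one-layer calculation reduces to a bulk resistance formula for H, with LE recovered as the energy-balance residual; all values below are illustrative assumptions rather than data from the cotton field experiment.

```python
# One-layer bulk resistance estimate of sensible heat flux H, with latent
# heat flux LE as the residual of the one-dimensional energy balance.
rho_cp = 1200.0      # J/(m^3 K), volumetric heat capacity of air
T_surf = 38.0        # C, radiometric surface temperature
T_air = 30.0         # C, air temperature at reference height
r_a = 40.0           # s/m, aerodynamic resistance (incl. the bluff-body
                     # correction for heat transfer mentioned above)
Rn = 550.0           # W/m^2, net radiation
G = 90.0             # W/m^2, soil heat flux

H = rho_cp * (T_surf - T_air) / r_a   # sensible heat flux
LE = Rn - G - H                       # latent heat flux as the residual
print(H, LE)                          # -> 240.0, 220.0 W/m^2
```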
NASA Astrophysics Data System (ADS)
Sarradj, Ennes
2010-04-01
Phased microphone arrays are used in a variety of applications for the estimation of acoustic source location and spectra. The popular conventional delay-and-sum beamforming methods used with such arrays suffer from inaccurate estimation of absolute source levels and, in some cases, from low resolution. Deconvolution approaches such as DAMAS have better performance but require high computational effort. A fast beamforming method is proposed that can be used in conjunction with a phased microphone array in applications focused on the correct quantitative estimation of acoustic source spectra. This method is based on an eigenvalue decomposition of the cross spectral matrix of microphone signals and uses the eigenvalues from the signal subspace to estimate absolute source levels. The theoretical basis of the method is discussed together with an assessment of the quality of the estimation. Experimental tests using a loudspeaker setup and an airfoil trailing edge noise setup in an aeroacoustic wind tunnel show that the proposed method is robust and leads to reliable quantitative results.
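A minimal sketch of the eigenvalue idea on a synthetic one-source cross spectral matrix; the array, source power, and normalization convention are chosen for the example rather than taken from the paper.

```python
import numpy as np

# Build a CSM for one synthetic source plus uncorrelated noise, split off
# the signal subspace by eigendecomposition, and recover the source
# auto-power from the dominant eigenpair.
rng = np.random.default_rng(1)
M = 16                                     # number of microphones
a = np.exp(1j * rng.uniform(0, 2 * np.pi, M)) / np.sqrt(M)  # unit steering vec
p_true = 4.0                               # source auto-power (illustrative)
csm = p_true * M * np.outer(a, a.conj()) + 0.01 * np.eye(M)

w, v = np.linalg.eigh(csm)                 # ascending real eigenvalues
lam, u = w[-1], v[:, -1]                   # dominant (signal) eigenpair

# Beamformer output for the rank-one signal component, steered at the source
p_est = lam * abs(np.vdot(a, u)) ** 2 / M
print(p_est)   # close to p_true; the remaining eigenvalues carry the noise
```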
The allometry of coarse root biomass: log-transformed linear regression or nonlinear regression?
Lai, Jiangshan; Yang, Bo; Lin, Dunmei; Kerkhoff, Andrew J; Ma, Keping
2013-01-01
Precise estimation of root biomass is important for understanding carbon stocks and dynamics in forests. Traditionally, biomass estimates are based on allometric scaling relationships between stem diameter and coarse root biomass calculated using linear regression (LR) on log-transformed data. Recently, it has been suggested that nonlinear regression (NLR) is a preferable fitting method for scaling relationships. But while this claim has been contested on both theoretical and empirical grounds, and statistical methods have been developed to aid in choosing between the two methods in particular cases, few studies have examined the ramifications of erroneously applying NLR. Here, we use direct measurements of 159 trees belonging to three locally dominant species in east China to compare the LR and NLR models of diameter-root biomass allometry. We then contrast model predictions by estimating stand coarse root biomass based on census data from the nearby 24-ha Gutianshan forest plot and by testing the ability of the models to predict known root biomass values measured on multiple tropical species at the Pasoh Forest Reserve in Malaysia. Based on likelihood estimates for model error distributions, as well as the accuracy of extrapolative predictions, we find that LR on log-transformed data is superior to NLR for fitting diameter-root biomass scaling models. More importantly, inappropriately using NLR leads to grossly inaccurate stand biomass estimates, especially for stands dominated by smaller trees.
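The contrast between the two fitting approaches is easy to reproduce on synthetic data with multiplicative (lognormal) error, the error structure under which LR on log-transformed data is the appropriate choice; all parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the allometric model B = a * D^b two ways on synthetic data.
rng = np.random.default_rng(42)
D = rng.uniform(5, 60, 159)                               # stem diameters, cm
a_true, b_true = 0.05, 2.4
B = a_true * D ** b_true * rng.lognormal(0.0, 0.3, D.size)  # root biomass, kg

# LR on log-transformed data: log B = log a + b log D
slope, intercept = np.polyfit(np.log(D), np.log(B), 1)
a_lr, b_lr = np.exp(intercept), slope

# NLR directly on the arithmetic scale (implicitly assumes additive error)
(a_nlr, b_nlr), _ = curve_fit(lambda d, a, b: a * d ** b, D, B, p0=(0.1, 2.0))

print("LR :", a_lr, b_lr)
print("NLR:", a_nlr, b_nlr)
```

One intuition for the stand-level inaccuracy reported above: under multiplicative error, the NLR fit is dominated by the absolute residuals of the largest trees, so its parameters are pulled away from the relationship that holds across the whole size range.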
Beyond Newton's law of cooling - estimation of time since death
NASA Astrophysics Data System (ADS)
Leinbach, Carl
2011-09-01
The estimate of the time since death and, thus, the time of death is strictly that, an estimate. However, the time of death can be an important piece of information in some coroner's cases, especially those that involve criminal or insurance investigations. It has been known almost from the beginning of time that bodies cool after the internal mechanisms such as circulation of the blood stop. A first attempt to link this phenomenon to the determination of the time of death used a crude linear relationship. Towards the end of the nineteenth century, Newton's law of cooling using body temperature data obtained by the coroner was used to make a more accurate estimate. While based on scientific principles and resulting in a better estimate, Newton's law does not really describe the cooling of a non-homogeneous human body. This article will discuss a more accurate model of the cooling process based on the theoretical work of Marshall and Hoare and the laboratory-based statistical work of Claus Henssge. Using DERIVE®6.10 and the statistical work of Henssge, the double exponential cooling formula developed by Marshall and Hoare will be explored. The end result is a tool that can be used in the field by coroner's scene investigators to determine a 95% confidence interval for the time since death and, thus, the time of death.
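A sketch of the Marshall and Hoare double exponential, using Henssge's commonly cited parameterization for ambient temperatures below about 23 °C and inverting for time since death by bisection; the constants are quoted from the standard published form and should be checked against the original sources before any real use.

```python
import math

def henssge_Q(t_hours, mass_kg):
    """Normalized temperature drop Q(t) in the Marshall-Hoare model,
    with Henssge's body-mass parameterization (ambient below ~23 C)."""
    B = -1.2815 * mass_kg ** -0.625 + 0.0284   # per hour, negative
    return 1.25 * math.exp(B * t_hours) - 0.25 * math.exp(5.0 * B * t_hours)

def time_since_death(T_rectal, T_ambient, mass_kg):
    """Invert Q(t) = (T_rectal - T_ambient)/(37.2 - T_ambient) by bisection."""
    Q_obs = (T_rectal - T_ambient) / (37.2 - T_ambient)
    lo, hi = 0.0, 72.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if henssge_Q(mid, mass_kg) > Q_obs:    # Q decreases with time
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. rectal 30 C, ambient 18 C, 75 kg body -> roughly 11-12 hours
print(time_since_death(T_rectal=30.0, T_ambient=18.0, mass_kg=75.0))
```

A field tool such as the one described in the article would wrap this point estimate with Henssge's empirically derived 95% confidence interval rather than report it bare.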
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devaraj, Arun; Prabhakaran, Ramprashad; Joshi, Vineet V.
2016-04-12
The purpose of this document is to provide a theoretical framework for (1) estimating the uranium carbide (UC) volume fraction in a final alloy of uranium with 10 weight percent molybdenum (U-10Mo) as a function of the final alloy carbon concentration, and (2) estimating the effective 235U enrichment in the U-10Mo matrix after accounting for the loss of 235U to UC formation. The report also provides a theoretical baseline for the effective density of as-cast low-enriched U-10Mo alloy, and thus serves as the baseline for quality control of the final alloy carbon content.
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A.
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-lasting unsolved issue resulting in inconsistent usages and endless debates. Currently, both the average reference (AR) and the reference electrode standardization technique (REST) are two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes the prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop the regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise to signal variance ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Artificial EEGs generated with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. It is also revealed that realistic volume conductor models improve the performances of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance. PMID:29780302
Qiao, Baozhen; Schymura, Maria J; Kahn, Amy R
2016-10-01
Population-based cancer survival analyses have traditionally been based on the first primary cancer. Recent studies have brought this practice into question, arguing that varying registry reference dates affect the ability to identify earlier cancers, resulting in selection bias. We used a theoretical approach to evaluate the extent to which the length of registry operations affects the classification of first versus subsequent cancers and consequently survival estimates. Sequence number central was used to classify tumors from the New York State Cancer Registry, diagnosed 2001-2010, as either first primaries (value=0 or 1) or subsequent primaries (≥2). A set of three sequence numbers, each based on an assumed reference year (1976, 1986 or 1996), was assigned to each tumor. Percent of subsequent cancers was evaluated by reference year, cancer site and age. 5-year relative survival estimates were compared under four different selection scenarios. The percent of cancer cases classified as subsequent primaries was 15.3%, 14.3% and 11.2% for reference years 1976, 1986 and 1996, respectively; and varied by cancer site and age. When only the first primary was included, shorter registry operation time was associated with slightly lower 5-year survival estimates. When all primary cancers were included, survival estimates decreased, with the largest decreases seen for the earliest reference year. Registry operation length affected the identification of subsequent cancers, but the overall effect of this misclassification on survival estimates was small. Survival estimates based on all primary cancers were slightly lower, but might be more comparable across registries. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Buonanno, R.; Corsi, C. E.; Pulone, L.; Fusi Pecci, F.; Bellazzini, M.
1998-05-01
A new procedure is described to derive homogeneous relative ages from the Color-Magnitude Diagrams (CMDs) of Galactic globular clusters (GGCs). It is based on the use of a new observable, Delta V(0.05), namely the difference in magnitude between an arbitrary point on the upper main sequence (V_{+0.05}, the V magnitude of the MS ridge 0.05 mag redder than the main sequence (MS) turn-off (TO)) and the horizontal branch (HB). The observational error associated with Delta V(0.05) is substantially smaller than that of previous age indicators, while keeping the property of being strictly independent of distance and reddening and of being based on theoretical luminosities rather than on still uncertain theoretical temperatures. As an additional bonus, the theoretical models show that Delta V(0.05) has a low dependence on metallicity. Moreover, the estimates of relative age so obtained are sufficiently invariant (to within ~ +/- 1 Gyr) with varying adopted models and transformations. Since the color difference Delta (B-V)_{TO,RGB} (VandenBerg, Bolte and Stetson 1990, VBS; Sarajedini and Demarque 1990, SD) remains the most reliable technique to estimate relative cluster ages for clusters where the horizontal part of the HB is not adequately populated, we have used the differential ages obtained via the "vertical" Delta V(0.05) parameter for a selected sample of clusters (with high-quality CMDs, well populated HBs, and trustworthy calibrations) to perform an empirical calibration of the "horizontal" observable in terms of [Fe/H] and age. A direct comparison with the corresponding calibration derived from theoretical models reveals clear-cut discrepancies, which call into question the model scaling with metallicity in the observational planes. Starting from the global sample of considered clusters, we have thus evaluated, within a homogeneous procedure, relative ages for 33 GGCs having different metallicities, HB morphologies, and galactocentric distances. These new estimates have also been compared with previous determinations (Chaboyer, Demarque and Sarajedini 1996, and Richer et al. 1996). The distributions of the cluster ages with varying metallicity and galactocentric distance are briefly discussed: (a) there is no direct indication of any evident age-metallicity relationship; (b) there is some spread in age (still partially compatible with the errors), with the largest dispersion found for intermediate metal-poor clusters; (c) older clusters populate both the inner and the outer regions of the Milky Way, while the younger globulars are present only in the outer regions, but the sample is far too small to yield conclusive evidence.
Jastram, John D.; Moyer, Douglas; Hyer, Kenneth
2009-01-01
Fluvial transport of sediment into the Chesapeake Bay estuary is a persistent water-quality issue with major implications for the overall health of the bay ecosystem. Accurately and precisely estimating the suspended-sediment concentrations (SSC) and loads that are delivered to the bay, however, remains challenging. Although manual sampling of SSC produces an accurate series of point-in-time measurements, robust extrapolation to unmeasured periods (especially high-flow periods) has proven to be difficult. Sediment concentrations typically have been estimated using regression relations between individual SSC values and associated streamflow values; however, suspended-sediment transport during storm events is extremely variable, and it is often difficult to relate a unique SSC to a given streamflow. With this limitation for estimating SSC, innovative approaches for generating detailed records of suspended-sediment transport are needed. One effective method for improved suspended-sediment determination involves the continuous monitoring of turbidity as a surrogate for SSC. Turbidity measurements are theoretically well correlated to SSC because turbidity represents a measure of water clarity that is directly influenced by suspended sediments; thus, turbidity-based estimation models typically are effective tools for generating SSC data. The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency Chesapeake Bay Program and Virginia Department of Environmental Quality, initiated continuous turbidity monitoring on three major tributaries of the bay - the James, Rappahannock, and North Fork Shenandoah Rivers - to evaluate the use of turbidity as a sediment surrogate in rivers that deliver sediment to the bay. Results of this surrogate approach were compared to the traditionally applied streamflow-based approach for estimating SSC. Additionally, evaluation and comparison of these two approaches were conducted for nutrient estimations. Results demonstrate that the application of turbidity-based estimation models provides an improved method for generating a continuous record of SSC, relative to the classical approach that uses streamflow as a surrogate for SSC. Turbidity-based estimates of SSC were found to be more accurate and precise than SSC estimates from streamflow-based approaches. The turbidity-based SSC estimation models explained 92 to 98 percent of the variability in SSC, while streamflow-based models explained 74 to 88 percent of the variability in SSC. Furthermore, the mean absolute error of turbidity-based SSC estimates was 50 to 87 percent less than the corresponding values from the streamflow-based models. Statistically significant differences were detected between the distributions of residual errors and estimates from the two approaches, indicating that the turbidity-based approach yields estimates of SSC with greater precision than the streamflow-based approach. Similar improvements were identified for turbidity-based estimates of total phosphorus, which is strongly related to turbidity because total phosphorus occurs predominantly in particulate form. Total nitrogen estimation models based on turbidity and streamflow generated estimates of similar quality, with the turbidity-based models providing slight improvements in the quality of estimations. This result is attributed to the understanding that nitrogen transport is dominated by dissolved forms that relate less directly to streamflow and turbidity.
Improvements in concentration estimation resulted in improved estimates of load. Turbidity-based suspended-sediment loads estimated for the James River at Cartersville, VA, monitoring station exhibited tighter confidence interval bounds and a coefficient of variation of 12 percent, compared with a coefficient of variation of 38 percent for the streamflow-based load.
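A sketch of a turbidity-based rating model of the general kind described above, assuming a log-log regression with a Duan smearing correction on retransformation; the data are synthetic and the actual model forms used at the USGS stations are not reproduced here.

```python
import numpy as np

# Synthetic turbidity-SSC pairs with an assumed power-law relation and
# multiplicative sampling error.
rng = np.random.default_rng(7)
turb = rng.lognormal(3.0, 1.0, 200)                     # turbidity, FNU
ssc_true = 2.0 * turb ** 0.9                             # mg/L, assumed
ssc = ssc_true * rng.lognormal(0.0, 0.25, turb.size)     # sampled SSC

# Rating model: log(SSC) = b0 + b1 * log(turbidity)
b1, b0 = np.polyfit(np.log(turb), np.log(ssc), 1)
resid = np.log(ssc) - (b0 + b1 * np.log(turb))
smearing = np.exp(resid).mean()          # Duan (1983) retransformation factor

def estimate_ssc(turbidity):
    return smearing * np.exp(b0) * turbidity ** b1

# Applied to the continuous turbidity record, this yields a continuous SSC
# series; load follows as concentration x discharge integrated over time.
print(estimate_ssc(np.array([10.0, 100.0, 500.0])))
```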
Finite-Time Stabilization and Adaptive Control of Memristor-Based Delayed Neural Networks.
Wang, Leimin; Shen, Yi; Zhang, Guodong
Finite-time stability problem has been a hot topic in control and system engineering. This paper deals with the finite-time stabilization issue of memristor-based delayed neural networks (MDNNs) via two control approaches. First, in order to realize the stabilization of MDNNs in finite time, a delayed state feedback controller is proposed. Then, a novel adaptive strategy is applied to the delayed controller, and finite-time stabilization of MDNNs can also be achieved by using the adaptive control law. Some easily verified algebraic criteria are derived to ensure the stabilization of MDNNs in finite time, and the estimation of the settling time functional is given. Moreover, several finite-time stability results as our special cases for both memristor-based neural networks (MNNs) without delays and neural networks are given. Finally, three examples are provided for the illustration of the theoretical results.
Information theoretic quantification of diagnostic uncertainty.
Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T
2012-01-01
Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
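To make the two quantities at play concrete, here is a minimal sketch (not the authors' implementation) of Bayes' rule for a dichotomous test together with the binary entropy that information theory uses to quantify diagnostic uncertainty. The sensitivity, specificity, and pre-test probability are illustrative values.

```python
import math

def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Bayes' rule for a dichotomous test result."""
    if positive:
        num = sensitivity * pretest
        den = num + (1 - specificity) * (1 - pretest)
    else:
        num = (1 - sensitivity) * pretest
        den = num + specificity * (1 - pretest)
    return num / den

def binary_entropy(p):
    """Diagnostic uncertainty, in bits, for disease probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

pre = 0.30
post = post_test_probability(pre, sensitivity=0.90, specificity=0.80)
print(f"post-test P(disease) = {post:.3f}")
print(f"uncertainty before: {binary_entropy(pre):.3f} bits, "
      f"after: {binary_entropy(post):.3f} bits")
```

Note that a positive result here moves the probability toward 0.5 and so can *increase* the entropy: in information-theoretic terms, a test result can raise rather than lower diagnostic uncertainty.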
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-06-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this theoretical gap, open for over a decade, we provide a unified theory that shows explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
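The one-step local linear approximation is simple enough to sketch. The version below uses the SCAD penalty derivative as weights and a small hand-rolled coordinate-descent solver for the weighted lasso; it is a minimal illustration of the idea, not the authors' code, and b_init would typically come from an initial lasso or ridge fit.

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty at |t| (Fan and Li's p'_lambda)."""
    t = np.abs(t)
    return lam * ((t <= lam)
                  + (t > lam) * np.clip(a * lam - t, 0, None) / ((a - 1) * lam))

def weighted_lasso_cd(X, y, w, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + sum_j w_j |b_j|."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                                  # residual y - Xb
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]                   # remove coordinate j
            rho = X[:, j] @ r / n                 # partial correlation
            b[j] = np.sign(rho) * max(abs(rho) - w[j], 0.0) / col_sq[j]
            r -= X[:, j] * b[j]                   # restore with new b_j
    return b

def one_step_lla(X, y, lam, b_init):
    """One LLA step: weighted lasso with SCAD-derivative weights at b_init."""
    return weighted_lasso_cd(X, y, scad_deriv(b_init, lam))
```

Coefficients with |b_init| beyond a*lam receive zero weight and are left unpenalized, which is exactly how the one-step LLA can land on the oracle solution when the initial estimator is well behaved.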
Westgate, Philip M
2013-07-20
Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.
Spectral Entropies as Information-Theoretic Tools for Complex Network Comparison
NASA Astrophysics Data System (ADS)
De Domenico, Manlio; Biamonte, Jacob
2016-10-01
Any physical system can be viewed from the perspective that information is implicitly represented in its state. However, the quantification of this information when it comes to complex networks has remained largely elusive. In this work, we use techniques inspired by quantum statistical mechanics to define an entropy measure for complex networks and to develop a set of information-theoretic tools, based on network spectral properties, such as Rényi q entropy, generalized Kullback-Leibler and Jensen-Shannon divergences, the latter allowing us to define a natural distance measure between complex networks. First, we show that by minimizing the Kullback-Leibler divergence between an observed network and a parametric network model, inference of model parameter(s) by means of maximum-likelihood estimation can be achieved and model selection can be performed with appropriate information criteria. Second, we show that the information-theoretic metric quantifies the distance between pairs of networks and we can use it, for instance, to cluster the layers of a multilayer system. By applying this framework to networks corresponding to sites of the human microbiome, we perform hierarchical cluster analysis and recover with high accuracy existing community-based associations. Our results imply that spectral-based statistical inference in complex networks results in demonstrably superior performance as well as a conceptual backbone, filling a gap towards a network information theory.
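A minimal sketch of the spectral entropies involved: with the density matrix defined from the graph Laplacian as rho = exp(-beta*L)/Tr[exp(-beta*L)], the Von Neumann and Rényi entropies follow from the Laplacian spectrum alone. The toy 4-cycle and beta value are illustrative.

```python
import numpy as np

def density_spectrum(A, beta=1.0):
    """Eigenvalues of rho = exp(-beta L) / Tr exp(-beta L), L the Laplacian;
    the spectrum alone suffices for the entropies below."""
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L)
    w = np.exp(-beta * lam)
    return w / w.sum()

def von_neumann_entropy(A, beta=1.0):
    p = density_spectrum(A, beta)
    p = p[p > 1e-15]                       # drop numerically zero weights
    return float(-(p * np.log(p)).sum())

def renyi_entropy(A, q, beta=1.0):
    p = density_spectrum(A, beta)
    return float(np.log((p ** q).sum()) / (1.0 - q))

# toy example: a 4-cycle
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
print(von_neumann_entropy(A), renyi_entropy(A, q=2.0))
```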
Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun
2018-06-01
The era of big data is coming, and evidence-based medicine is attracting increasing attention as a way to improve decision making in medical practice by integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimate of treatment effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis, but some trials report the median, the minimum and maximum values, or sometimes the first and third quartiles instead. Thus, to pool results in a consistent format, researchers need to transform that information back to the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, despite its evident importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g. in the well-known method proposed by Hozo et al. We solve this issue by incorporating the sample size in a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve the existing ones significantly but also share the same virtue of simplicity. The real data application indicates that our proposed estimators can serve as "rules of thumb" and will be widely applied in evidence-based medicine.
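As a concrete example of a smooth sample-size weight, the sketch below implements the min/median/max scenario. The specific weight 4/(4 + n^0.75) follows my recollection of the paper's formula for that scenario and should be verified against the original before use; Hozo et al.'s simpler rule is shown for comparison.

```python
def mean_from_min_med_max(a, m, b, n):
    """Weighted mid-range/median estimate of the sample mean from
    (min a, median m, max b, sample size n). The weight below is my
    recollection of Luo et al. (2018), Scenario S1; verify before use."""
    w = 4.0 / (4.0 + n ** 0.75)
    return w * (a + b) / 2.0 + (1.0 - w) * m

def mean_hozo(a, m, b):
    """Hozo et al.'s classical rule: (a + 2m + b) / 4, no sample-size weight."""
    return (a + 2.0 * m + b) / 4.0

print(mean_from_min_med_max(a=2.0, m=10.0, b=30.0, n=80))
print(mean_hozo(a=2.0, m=10.0, b=30.0))
```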
Davis, Kevin C; Blitstein, Jonathan L; Evans, W Douglas; Kamyab, Kian
2010-07-21
Prior research supports the notion that parents have the ability to influence their children's decisions regarding sexual behavior. Yet parent-based approaches to curbing teen pregnancy and STDs have been relatively unexplored. The Parents Speak Up National Campaign (PSUNC) is a multimedia campaign that attempts to fill this void by targeting parents of teens to encourage parent-child communication about waiting to have sex. The campaign follows a theoretical framework that identifies cognitions that are targeted in campaign messages and theorized to influence parent-child communication. While a previous experimental study showed PSUNC messages to be effective in increasing parent-child communication, it did not address how these effects manifest through the PSUNC theoretical framework. The current study examines the PSUNC theoretical framework by 1) estimating the impact of PSUNC on specific cognitions identified in the theoretical framework and 2) examining whether those cognitions are indeed associated with parent-child communication. Our study consists of a randomized efficacy trial of PSUNC messages under controlled conditions. A sample of 1,969 parents was randomly assigned to treatment (PSUNC exposure) and control (no exposure) conditions. Parents were surveyed at baseline, 4 weeks, 6 months, 12 months, and 18 months post-baseline. Linear regression procedures were used in our analyses. Outcome variables included self-efficacy to communicate with the child, long-term outcome expectations that communication would be successful, and norms on the appropriate age for sexual initiation. We first estimated multivariable models to test whether these cognitive variables predict parent-child communication longitudinally. Longitudinal change in each cognitive variable was then estimated as a function of treatment condition, controlling for baseline individual characteristics. Norms related to the appropriate age for sexual initiation and outcome expectations that communication would be successful were predictive of parent-child communication among both mothers and fathers. Treatment condition mothers exhibited larger changes than control mothers in both of these cognitive variables. Fathers exhibited no exposure effects. Results suggest that within a controlled setting, the "wait until older" norm and long-term outcome expectations were appropriate cognitions to target, and the PSUNC media materials were successful in impacting them, particularly among mothers. This study highlights the importance of theoretical frameworks for parent-focused campaigns that identify appropriate behavioral precursors that are both predictive of a campaign's distal behavioral outcome and sensitive to campaign messages.
Shabbir, Javid
2018-01-01
In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519
On event-based optical flow detection
Brosch, Tobias; Tschechne, Stephan; Neumann, Heiko
2015-01-01
Event-based sensing, i.e., the asynchronous detection of luminance changes, promises low-energy, high-dynamic-range, and sparse sensing. This stands in contrast to whole-image frame-wise acquisition by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection, ranging from gradient-based methods through plane-fitting to filter-based methods, and identify strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane-like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter-based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel biologically inspired efficient motion detector is proposed, analyzed and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion-related activations. PMID:25941470
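The plane-fitting class of methods is easy to illustrate: fit t = a*x + b*y + c to a local cloud of events by least squares and read normal flow off the gradient of the fitted plane. This is a generic sketch of that class, not the authors' filter-based detector.

```python
import numpy as np

def flow_from_events(xs, ys, ts):
    """Fit the plane t = a*x + b*y + c to an event cloud; normal flow is
    v = grad(t) / |grad(t)|^2 (pixels per second)."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, ts, rcond=None)
    g2 = a * a + b * b
    return np.zeros(2) if g2 < 1e-12 else np.array([a, b]) / g2

# synthetic edge sweeping at 20 px/s in +x: events satisfy t = x / 20
rng = np.random.default_rng(0)
xs, ys = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
ts = xs / 20.0 + rng.normal(0.0, 1e-3, 200)
print(flow_from_events(xs, ys, ts))        # approximately [20, 0]
```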
NASA Technical Reports Server (NTRS)
Filer, Elizabeth D.; Morrison, Clyde A.; Turner, Gregory A.; Barnes, Norman P.
1991-01-01
Results are reported from an experimental study investigating triply ionized holmium in 10 garnets, using the point-charge model to predict theoretical energy levels and temperature-dependent branching ratios for the 5I7 to 5I8 manifolds for temperatures between 50 and 400 K. Plots were made for the largest lines at 300 K. YScAG was plotted twice, once for each set of X-ray data available. Energy levels are predicted based on theoretical crystal-field parameters, and good agreement with experiment is found. It is suggested that the present set of theoretical crystal-field parameters provides good estimates of the energy levels for the other hosts for which there are no experimental optical data. X-ray and index-of-refraction data are used to evaluate the performance of 10 lasers via a quantum mechanical model to predict the position of the energy levels and the temperature-dependent branching ratios of the 5I7 to 5I8 levels of holmium. The fractional population inversion required for threshold is also evaluated.
Hybrid rocket engine, theoretical model and experiment
NASA Astrophysics Data System (ADS)
Chelaru, Teodor-Viorel; Mingireanu, Florin
2011-06-01
The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work addresses the main problems of the hybrid motor: scalability, stability/controllability of the operating parameters, and increasing the solid-fuel regression rate. First, we focus on theoretical models for the hybrid rocket motor and compare the results with experimental data already available from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors of different scales and compare them with experimental measurements of those motors. Next, the paper focuses on the tribrid rocket motor concept, in which supplementary liquid fuel injection can improve thrust controllability. A complementary computation model is also presented to estimate the regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Lyapunov theory. The stability coefficients obtained are dependent on the burning parameters, and the stability and command matrices are identified. The paper presents the input data of the model thoroughly, which ensures the reproducibility of the numerical results by independent researchers.
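For the regression-rate side of the problem, a minimal sketch of the classical hybrid law rdot = a*Gox^n, marching the port radius in time, is given below. The coefficients a and n are generic HTPB-class placeholders, not values identified in the paper.

```python
import numpy as np

# Classical hybrid-rocket regression-rate law: rdot = a * Gox**n, where
# Gox = mdot_ox / (pi r^2) is the oxidizer mass flux through the port.
# a and n are generic placeholders, not values from the paper.
a, n = 6.0e-5, 0.62          # rdot in m/s when Gox is in kg/(m^2 s)

def burn(mdot_ox, r0, rho_fuel, L, t_end, dt=1e-2):
    """March the port radius in time; return (t, r, fuel mass flow) samples."""
    r, t, hist = r0, 0.0, []
    while t < t_end:
        Gox = mdot_ox / (np.pi * r ** 2)         # oxidizer flux, kg/(m^2 s)
        rdot = a * Gox ** n                      # fuel regression rate, m/s
        mdot_fuel = rho_fuel * 2 * np.pi * r * L * rdot
        hist.append((t, r, mdot_fuel))
        r, t = r + rdot * dt, t + dt
    return hist

t, r, mdot_f = burn(mdot_ox=0.5, r0=0.02, rho_fuel=920.0, L=0.5, t_end=10.0)[-1]
print(f"after {t:.1f} s: port radius {r*1000:.1f} mm, fuel flow {mdot_f*1000:.0f} g/s")
```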
Theoretical relationship between elastic wave velocity and electrical resistivity
NASA Astrophysics Data System (ADS)
Lee, Jong-Sub; Yoon, Hyung-Koo
2015-05-01
Elastic wave velocity and electrical resistivity have been commonly applied to estimate stratum structures and obtain subsurface soil design parameters. Both elastic wave velocity and electrical resistivity are related to the void ratio; the objective of this study is therefore to suggest a theoretical relationship between the two physical parameters. Gassmann theory and Archie's equation are applied to propose a new theoretical equation, which relates the compressional wave velocity to shear wave velocity and electrical resistivity. The piezo disk element (PDE) and bender element (BE) are used to measure the compressional and shear wave velocities, respectively. In addition, the electrical resistivity is obtained by using the electrical resistivity probe (ERP). The elastic wave velocity and electrical resistivity are recorded in several types of soils including sand, silty sand, silty clay, silt, and clay-sand mixture. The appropriate input parameters are determined based on the error norm in order to increase the reliability of the proposed relationship. The predicted compressional wave velocities from the shear wave velocity and electrical resistivity are similar to the measured compressional velocities. This study demonstrates that the new theoretical relationship may be effectively used to predict the unknown geophysical property from the measured values.
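A hedged sketch of how the two measurements can be chained: Archie's law inverted for porosity from resistivity, then Gassmann's equation for the saturated P-velocity. The constants are generic textbook values, and the paper's actual calibrated relationship (linking Vp to Vs and resistivity) differs.

```python
import numpy as np

def porosity_from_resistivity(R, Rw, a=1.0, m=2.0):
    """Archie's law R = a * Rw * phi**(-m), inverted for porosity."""
    return (a * Rw / R) ** (1.0 / m)

def vp_gassmann(phi, K_dry, G_dry, K_s=36e9, K_f=2.2e9,
                rho_s=2650.0, rho_f=1000.0):
    """Saturated P-velocity from Gassmann's equation (low-frequency limit)."""
    b = 1.0 - K_dry / K_s
    K_sat = K_dry + b ** 2 / (phi / K_f + (1.0 - phi) / K_s - K_dry / K_s ** 2)
    rho = (1.0 - phi) * rho_s + phi * rho_f
    return np.sqrt((K_sat + 4.0 * G_dry / 3.0) / rho)

phi = porosity_from_resistivity(R=20.0, Rw=0.5)      # ohm-m values, assumed
print(f"phi = {phi:.3f}, Vp = {vp_gassmann(phi, K_dry=3e9, G_dry=2e9):.0f} m/s")
```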
Nonlinear Optical Properties and Applications of Polydiacetylene
NASA Technical Reports Server (NTRS)
Abdeldayem, Hossin; Paley, Mark S.; Witherow, William K.; Frazier, Donald O.
2000-01-01
Recently, we have demonstrated a picosecond all-optical switch, which also functions as a partial all-optical NAND logic gate, using a novel polydiacetylene synthesized in our laboratory. The nonlinear optical properties of the polydiacetylene material are measured using the Z-scan technique. A theoretical model based on a three-level system is investigated and the rate equations of the system are solved. The theoretical calculations are shown to match the experimental results closely. The absorption cross-sections for both the first and higher excited states are estimated. The analyses also show that the material undergoes a photochemical change beyond a certain laser power, at which point its physical properties change radically. These changes are the cause of the partial NAND gate function and the switching mechanism.
Microtremors study applying the SPAC method in Colima state, Mexico.
NASA Astrophysics Data System (ADS)
Vázquez Rosas, R.; Aguirre González, J.; Mijares Arellano, H.
2007-05-01
One of the main parts of seismic risk studies is to determine the site effect. This can be estimated by means of microtremor measurements. From the H/V spectral ratio (Nakamura, 1989), the predominant period of the site can be estimated. The predominant period by itself, however, cannot represent the site effect over a wide range of frequencies and does not provide information on the stratigraphy. The SPAC method (Spatial Auto-Correlation Method, Aki 1957), on the other hand, is useful for estimating the stratigraphy of the site. It is based on the simultaneous recording of microtremors at several stations deployed in an instrumental array. Through computation of the spatial autocorrelation coefficient, the Rayleigh wave dispersion curve can be derived. Finally, the stratigraphy model (thickness, S and P wave velocity, and density of each layer) is estimated by fitting the theoretical dispersion curve to the observed one. The theoretical dispersion curve is initially computed using a proposed model, which is modified several times until the theoretical curve fits the observations. This method requires a minimum of three stations at which the microtremors are observed simultaneously. We applied the SPAC method to six sites in Colima state, Mexico: Santa Barbara, Cerro de Ortega, Tecoman, Manzanillo, and two in Colima city. In total, 16 arrays were carried out using equilateral triangles with apertures between 5 m and 60 m. For recording microtremors we used short-period (5 seconds) velocity-type vertical sensors connected to a K2 (Kinemetrics) acquisition system. We were able to estimate the velocities of the most superficial layers, reaching different depths at each site. For the Santa Barbara site the exploration depth was about 30 m, for Tecoman 12 m, for Manzanillo 35 m, for Cerro de Ortega 68 m, and the deepest exploration was obtained in Colima city, with a depth of around 73 m. The S wave velocities fluctuate between 230 m/s and 420 m/s for the most superficial layer, meaning that, in general, the most superficial layers are quite competent. The superficial layer with the smallest S wave velocity was observed in Tecoman, while that with the largest was observed in Cerro de Ortega. Our estimates are consistent with down-hole velocity records obtained in Santa Barbara by previous studies.
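The core of the SPAC inversion is the Bessel-function relation rho(f, r) = J0(2*pi*f*r/c(f)) for the azimuthally averaged coherency; restricted to the first monotonic branch of J0, it can be inverted frequency by frequency. A minimal sketch with a synthetic check (true c = 300 m/s, interstation distance r = 30 m):

```python
import numpy as np
from scipy.special import j0
from scipy.optimize import brentq

X_MAX = 3.8317                              # first extremum of J0: the
                                            # inversion is unique on [0, X_MAX]

def spac_phase_velocity(freqs, r, coh):
    """Invert rho(f) = J0(2*pi*f*r/c) for phase velocity c(f), per frequency."""
    c = []
    for f, rho in zip(freqs, coh):
        if rho >= 1.0 or rho < j0(X_MAX):   # outside the invertible branch
            c.append(np.nan)
            continue
        x = brentq(lambda x: j0(x) - rho, 1e-9, X_MAX)
        c.append(2.0 * np.pi * f * r / x)
    return np.array(c)

f = np.linspace(1.0, 5.0, 5)
rho = j0(2.0 * np.pi * f * 30.0 / 300.0)
print(spac_phase_velocity(f, 30.0, rho))    # all approximately 300
```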
A mass-energy preserving Galerkin FEM for the coupled nonlinear fractional Schrödinger equations
NASA Astrophysics Data System (ADS)
Zhang, Guoyu; Huang, Chengming; Li, Meng
2018-04-01
We consider the numerical simulation of the coupled nonlinear space fractional Schrödinger equations. Based on the Galerkin finite element method in space and the Crank-Nicolson (CN) difference method in time, a fully discrete scheme is constructed. Firstly, we focus on a rigorous analysis of conservation laws for the discrete system; the definitions of discrete mass and energy here correspond to the original ones in physics. Then, we prove that the fully discrete system is uniquely solvable. Moreover, we establish unconditional convergence, that is, error estimates without any mesh-ratio restriction: L2-norm error estimates for the nonlinear equations and L∞-norm error estimates for the linear equations. Finally, some numerical experiments are included showing results in agreement with the theoretical predictions.
Bockman, Alexander; Fackler, Cameron; Xiang, Ning
2015-04-01
Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the discrepancy between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to the noise inherent in the experiment, model, and numerics. A geometry-agnostic method is developed, and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.
The effect of S-wave arrival times on the accuracy of hypocenter estimation
Gomberg, J.S.; Shedlock, K.M.; Roecker, S.W.
1990-01-01
We have examined the theoretical basis behind some of the widely accepted "rules of thumb" for obtaining accurate hypocenter estimates that pertain to the use of S phases, and illustrate, in a variety of ways, why and when these "rules" are applicable. Most methods used to determine earthquake hypocenters are based on iterative, linearized, least-squares algorithms. We examine the influence of S-phase arrival time data on such algorithms by using the program HYPOINVERSE with synthetic datasets. We conclude that a correctly timed S phase recorded within about 1.4 focal depths' distance from the epicenter can be a powerful constraint on focal depth. Furthermore, we demonstrate that even a single incorrectly timed S phase can result in depth estimates and associated measures of uncertainty that are significantly incorrect. -from Authors
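The class of locators discussed (e.g., HYPOINVERSE) is easy to sketch: Geiger's method iterates linearized least squares on arrival-time residuals. The homogeneous velocity model and station geometry below are assumptions for illustration; adding a single S pick visibly tightens the depth estimate.

```python
import numpy as np

VP = 6.0                                    # P velocity, km/s (assumed)
VS = VP / 1.73                              # S velocity from Vp/Vs ~ 1.73

def locate(stations, t_obs, phases, x0, n_iter=10):
    """Geiger's method: iterative linearized least squares for (x, y, z, t0)."""
    m = np.array(x0, float)
    for _ in range(n_iter):
        G, res = [], []
        for sta, t, ph in zip(stations, t_obs, phases):
            v = VP if ph == "P" else VS
            d = np.linalg.norm(m[:3] - sta)
            res.append(t - (m[3] + d / v))                 # time residual
            G.append(np.r_[(m[:3] - sta) / (d * v), 1.0])  # dT/d(x,y,z,t0)
        dm, *_ = np.linalg.lstsq(np.array(G), np.array(res), rcond=None)
        m += dm
    return m

# six surface stations with P picks, plus one S pick at station 0
rng = np.random.default_rng(1)
stas = rng.uniform(-20.0, 20.0, (7, 3)); stas[:, 2] = 0.0; stas[6] = stas[0]
phases = ["P"] * 6 + ["S"]
true = np.array([3.0, -4.0, 8.0, 0.5])                     # x, y, z, t0
t_obs = [true[3] + np.linalg.norm(true[:3] - s) / (VP if p == "P" else VS)
         for s, p in zip(stas, phases)]
print(locate(stas, t_obs, phases, x0=[0.0, 0.0, 5.0, 0.0]))  # ~[3, -4, 8, 0.5]
```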
Robust and accurate vectorization of line drawings.
Hilaire, Xavier; Tombre, Karl
2006-06-01
This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
Automatic indexing of compound words based on mutual information for Korean text retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan Koo Kim; Yoo Kun Cho
In this paper, we present an automatic indexing technique for compound words suitable to an agglutinative language, specifically Korean. First, we present the construction conditions for composing compound words as indexing terms. We also present decomposition rules applicable to consecutive nouns, so as to extract the full content of the text. Finally, we propose a measure of the usefulness of a term, mutual information, to calculate the degree of word association of compound words, based on the information-theoretic notion. By applying this method, our system has raised the precision rate for compound words from 72% to 87%.
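The mutual-information score here is the standard pointwise word-association measure; a minimal sketch with toy corpus counts follows. The paper's exact normalization may differ.

```python
import math

def mutual_information(pair_count, x_count, y_count, n_tokens):
    """Pointwise mutual information I(x, y) = log2( P(x,y) / (P(x) P(y)) ),
    the word-association score used to rank candidate compound terms."""
    p_xy = pair_count / n_tokens
    p_x = x_count / n_tokens
    p_y = y_count / n_tokens
    return math.log2(p_xy / (p_x * p_y))

# a noun pair that co-occurs far more often than chance scores highly
print(mutual_information(pair_count=40, x_count=100, y_count=120,
                         n_tokens=100_000))   # about 8.4 bits
```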
NASA Astrophysics Data System (ADS)
Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.
2017-07-01
Computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness-function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over a wide range of path lengths and atmospheric turbulence conditions, whereas the brightness-function technique is advantageous in terms of computational speed.
NASA Astrophysics Data System (ADS)
Veneziano, D.; Langousis, A.; Lepore, C.
2009-12-01
The annual maximum of the average rainfall intensity in a period of duration d, Iyear(d), is typically assumed to have a generalized extreme value (GEV) distribution. The shape parameter k of that distribution is especially difficult to estimate from either at-site or regional data, making it important to constrain k using theoretical arguments. In the context of multifractal representations of rainfall, we observe that standard theoretical estimates of k from extreme value (EV) and extreme excess (EE) theories do not apply, while estimates from large deviation (LD) theory hold only for very small d. We then propose a new theoretical estimator based on fitting GEV models to the numerically calculated distribution of Iyear(d). A standard result from EV and EE theories is that k depends on the tail behavior of the average rainfall in d, I(d). This result holds if Iyear(d) is the maximum of a sufficiently large number n of variables, all distributed like I(d); therefore its applicability hinges on whether n = 1yr/d is large enough and the tail of I(d) is sufficiently well known. One typically assumes that at least for small d the former condition is met, but poor knowledge of the upper tail of I(d) remains an obstacle for all d. In fact, in the case of multifractal rainfall, the first condition is also not met because, irrespective of d, 1yr/d is too small (Veneziano et al., 2009, WRR, in press). Applying large deviation (LD) theory to this multifractal case, we find that, as d → 0, Iyear(d) approaches a GEV distribution whose shape parameter kLD depends on a region of the distribution of I(d) well below the upper tail, is always positive (in the EV2 range), is much larger than the value predicted by EV and EE theories, and can be readily found from the scaling properties of I(d). The scaling properties of rainfall can also be inferred from short records, but the limitation remains that the result holds as d → 0, not for finite d. Therefore, for different reasons, none of the above asymptotic theories applies to Iyear(d). In practice, one is interested in the distribution of Iyear(d) over a finite range of averaging durations d and return periods T. Using multifractal representations of rainfall, we have numerically calculated the distribution of Iyear(d) and found that, although not GEV, the distribution can be accurately approximated by a GEV model. The best-fitting parameter k depends on d, but is insensitive to the scaling properties of rainfall and the range of return periods T used for fitting. We have obtained a default expression for k(d) and compared it with estimates from historical rainfall records. The theoretical function tracks well the empirical dependence on d, although it generally overestimates the empirical k values, possibly due to deviations of rainfall from perfect scaling. This issue is under investigation.
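Fitting a GEV model to annual maxima, as the proposed estimator requires, is a one-liner with scipy; note that scipy's genextreme uses a shape parameter c equal to the negative of the k used here. The data below are synthetic.

```python
import numpy as np
from scipy.stats import genextreme

# synthetic annual-maximum intensities for one duration d (EV2-type tail)
rng = np.random.default_rng(42)
k_true = 0.15                               # GEV shape; k > 0 is heavy-tailed
annmax = genextreme.rvs(c=-k_true, loc=20.0, scale=6.0, size=60,
                        random_state=rng)

c_hat, loc_hat, scale_hat = genextreme.fit(annmax)
k_hat = -c_hat                              # convert scipy's c back to k
print(f"k_hat = {k_hat:.3f}, loc = {loc_hat:.2f}, scale = {scale_hat:.2f}")

# 100-year return level from the fitted model
print("I_100 =", genextreme.ppf(1 - 1 / 100, c_hat, loc_hat, scale_hat))
```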
Inference for High-dimensional Differential Correlation Matrices *
Cai, T. Tony; Zhang, Anru
2015-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. A minimax rate of convergence is established, and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed. PMID:26500380
DOA estimation of noncircular signals for coprime linear array via locally reduced-dimensional Capon
NASA Astrophysics Data System (ADS)
Zhai, Hui; Zhang, Xiaofei; Zheng, Wang
2018-05-01
We investigate the issue of direction of arrival (DOA) estimation of noncircular signals for a coprime linear array (CLA). The noncircular property enhances the degrees of freedom and improves angle estimation performance, but it leads to a more complex angle ambiguity problem. To eliminate the ambiguity, we theoretically prove that the actual DOAs of noncircular signals can be uniquely estimated by finding the coincident results from the two decomposed subarrays, based on coprimeness. We propose a locally reduced-dimensional (RD) Capon algorithm for DOA estimation of noncircular signals for CLA. The RD processing is used in the proposed algorithm to avoid a two-dimensional (2D) spectral peak search, and coprimeness is employed to avoid a global spectral peak search. The proposed algorithm requires only a one-dimensional local spectral peak search, and it has very low computational complexity. Furthermore, the proposed algorithm needs no prior knowledge of the number of sources. We also derive the Cramér-Rao bound of DOA estimation of noncircular signals in CLA. Numerical simulation results demonstrate the effectiveness and superiority of the algorithm.
Centler, Florian; Heße, Falk; Thullner, Martin
2013-09-01
At field sites with varying redox conditions, different redox-specific microbial degradation pathways contribute to total contaminant degradation. The identification of pathway-specific contributions to total contaminant removal is of high practical relevance, yet difficult to achieve with current methods. Current stable-isotope-fractionation-based techniques focus on the identification of dominant biodegradation pathways under constant environmental conditions. We present an approach based on dual stable isotope data to estimate the individual contributions of two redox-specific pathways. We apply this approach to carbon and hydrogen isotope data obtained from reactive transport simulations of an organic contaminant plume in a two-dimensional aquifer cross section to test the applicability of the method. To take aspects typically encountered at field sites into account, additional simulations addressed the effects of transverse mixing, diffusion-induced stable-isotope fractionation, heterogeneities in the flow field, and mixing in sampling wells on isotope-based estimates for aerobic and anaerobic pathway contributions to total contaminant biodegradation. Results confirm the general applicability of the presented estimation method which is most accurate along the plume core and less accurate towards the fringe where flow paths receive contaminant mass and associated isotope signatures from the core by transverse dispersion. The presented method complements the stable-isotope-fractionation-based analysis toolbox. At field sites with varying redox conditions, it provides a means to identify the relative importance of individual, redox-specific degradation pathways. © 2013.
Spatio-temporal Granger causality: a new framework
Luo, Qiang; Lu, Wenlian; Cheng, Wei; Valdes-Sosa, Pedro A.; Wen, Xiaotong; Ding, Mingzhou; Feng, Jianfeng
2015-01-01
That physiological oscillations of various frequencies are present in fMRI signals is the rule, not the exception. Herein, we propose a novel theoretical framework, spatio-temporal Granger causality, which allows us to more reliably and precisely estimate the Granger causality from experimental datasets possessing time-varying properties caused by physiological oscillations. Within this framework, Granger causality is redefined as a global index measuring the directed information flow between two time series with time-varying properties. Both theoretical analyses and numerical examples demonstrate that Granger causality is a monotonically increasing function of the temporal resolution used in the estimation. This is consistent with the general principle of coarse graining, which causes information loss by smoothing out very fine-scale details in time and space. Our results confirm that the Granger causality at the finer spatio-temporal scales considerably outperforms the traditional approach in terms of an improved consistency between two resting-state scans of the same subject. To optimally estimate the Granger causality, the proposed theoretical framework is implemented through a combination of several approaches, such as dividing the optimal time window and estimating the parameters at the fine temporal and spatial scales. Taken together, our approach provides a novel and robust framework for estimating the Granger causality from fMRI, EEG, and other related data. PMID:23643924
Electrical Power Generated from Tidal Currents and Delivered to USCG Station Eastport, ME
2011-01-21
Theory of Operation: The ORPC Pre-Commercial Beta Turbine Generator Unit ("Beta TGU") uses a hydrokinetic cross-flow turbine based on Darrieus ...development in the wind turbine industry. The power coefficient (a measure of energy extraction effectiveness) is defined as Cp = P_turbine / ((1/2) ρ A V^3), where A is the ...stream area of the device. Axial-flow wind turbines have demonstrated power coefficients up to an estimated 48%, which approaches the theoretical "Betz limit."
Spring roll dielectric elastomer actuators for a portable force feedback glove
NASA Astrophysics Data System (ADS)
Zhang, Rui; Lochmatter, Patrick; Kunz, Andreas; Kovacs, Gabor
2006-03-01
Miniature spring roll dielectric elastomer actuators for a novel kinematic-free force feedback concept were manufactured and experimentally characterized. The actuators exhibited a maximum blocking force of 7.2 N and a displacement of 5 mm. Theoretical considerations based on the material's incompressibility are discussed in order to estimate the actuator behavior under blocked-strain and free-strain activation. One prototype was built to demonstrate the proposed force feedback concept.
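A rough sketch of the kind of incompressibility-based estimate mentioned: the equivalent electrostatic (Maxwell) pressure on the film and the resulting small-strain thickness strain. The permittivity, voltage, film thickness, and modulus are assumed typical values, not the actuator's measured parameters.

```python
EPS0 = 8.854e-12                            # vacuum permittivity, F/m

def maxwell_pressure(voltage, thickness, eps_r=4.7):
    """Equivalent electrostatic pressure on a dielectric elastomer film,
    p = eps0 * eps_r * (V / t)**2; eps_r = 4.7 is a typical acrylic value,
    not a figure from the paper."""
    return EPS0 * eps_r * (voltage / thickness) ** 2

p = maxwell_pressure(voltage=3000.0, thickness=50e-6)
Y = 1.0e6                                   # assumed elastic modulus, Pa
s_z = -p / Y                                # small-strain thickness strain
print(f"activation pressure ~ {p/1e3:.0f} kPa, thickness strain ~ {s_z:.1%}")
```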
A Game Theoretic Framework for Power Control in Wireless Sensor Networks (POSTPRINT)
2010-02-01
which the sensor nodes compute based on past observations. Correspondingly, Pe can only be estimated; for example, with a noncoherent FSK modula... bit error probability for the link (i → j) is given by some inverse function of γ_j. For example, with a noncoherent FSK modulation scheme, Pe = 0.5 e^(-γ_j/2) ...show the results for two different modulation schemes: DPSK and noncoherent FSK. As expected, with improvement in channel condition, i.e., with increase
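From what survives of the excerpt, the game rests on a link-quality term such as the noncoherent FSK bit error probability above. The sketch below reproduces that BER and a throughput-per-power utility of the kind common in game-theoretic power control; the utility form and all constants are assumptions, not the paper's exact game.

```python
import numpy as np

def ber_ncfsk(snr):
    """Noncoherent binary FSK bit error probability, Pe = 0.5 * exp(-snr/2),
    with snr the received SNR per bit (linear, not dB)."""
    return 0.5 * np.exp(-snr / 2.0)

def utility(power, gain, noise, frame_bits=80):
    """Throughput-per-power utility (assumed form): frame success
    probability divided by transmit power."""
    snr = gain * power / noise
    p_frame = (1.0 - ber_ncfsk(snr)) ** frame_bits
    return p_frame / power

powers = np.linspace(0.01, 2.0, 400)
u = utility(powers, gain=0.1, noise=1e-3)
print("best-response power:", powers[np.argmax(u)].round(3))
```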
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
2016-01-01
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method—named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)—for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interests. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results. PMID:26778864
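In the linear, non-sparse setting, Rayleigh quotient maximization reduces to a generalized symmetric eigenproblem, which makes a useful dense baseline for what QUADRO adds (sparsity, robust moment estimates, a convex formulation). A minimal sketch:

```python
import numpy as np
from scipy.linalg import eigh

# maximize (w' A w) / (w' B w): the leading generalized eigenvector of (A, B)
rng = np.random.default_rng(0)
p = 10
M = rng.normal(size=(p, p)); A = M @ M.T                  # symmetric PSD "signal"
N = rng.normal(size=(p, p)); B = N @ N.T + p * np.eye(p)  # well-conditioned

vals, vecs = eigh(A, B)                    # generalized eigenproblem, ascending
w = vecs[:, -1]                            # maximizing direction
print("max Rayleigh quotient:", (w @ A @ w) / (w @ B @ w), "=", vals[-1])
```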
NASA Astrophysics Data System (ADS)
Gao, Shengguo; Zhu, Zhongli; Liu, Shaomin; Jin, Rui; Yang, Guangchao; Tan, Lei
2014-10-01
Soil moisture (SM) plays a fundamental role in the land-atmosphere exchange process. Spatial estimation based on multiple in situ (network) measurements is a critical way to understand the spatial structure and variation of land surface soil moisture. Theoretically, integrating densely sampled auxiliary data that are spatially correlated with soil moisture into the spatial estimation procedure can improve its accuracy. In this study, we present a novel approach to estimate the spatial pattern of soil moisture by using the BME method based on wireless sensor network data and auxiliary information from ASTER (Terra) land surface temperature measurements. For comparison, three traditional geostatistical methods were also applied: ordinary kriging (OK), which used the wireless sensor network data only, and regression kriging (RK) and ordinary co-kriging (Co-OK), which both integrated the ASTER land surface temperature as a covariate. In Co-OK, LST was contained linearly in the estimator; in RK, the estimator is expressed as the sum of the regression estimate and the kriged estimate of the spatially correlated residual; in BME, the ASTER land surface temperature was first retrieved as soil moisture based on the linear regression, and the t-distributed prediction interval (PI) of soil moisture was then estimated and used as soft data in probability form. The results indicate that all three methods provide reasonable estimates. Compared to OK, Co-OK, RK, and BME can provide more accurate spatial estimates by integrating the auxiliary information. RK and BME show more obvious improvement compared to Co-OK, and BME can even perform slightly better than RK. The inherent issue of spatial estimation (overestimation in the range of low values and underestimation in the range of high values) is also further mitigated in both RK and BME. We conclude that integrating auxiliary data into spatial estimation can indeed improve the accuracy, that BME and RK take better advantage of the auxiliary information compared to Co-OK, and that BME outperforms RK by integrating the auxiliary data in probability form.
Self-calibration method without joint iteration for distributed small satellite SAR systems
NASA Astrophysics Data System (ADS)
Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan
2013-12-01
The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well, regardless of position errors, when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well, independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verified the effectiveness of the modified method.
Wave transmission approach based on modal analysis for embedded mechanical systems
NASA Astrophysics Data System (ADS)
Cretu, Nicolae; Nita, Gelu; Ioan Pop, Mihail
2013-09-01
An experimental method for determining the phase velocity in small solid samples is proposed. The method is based on measuring the resonant frequencies of a binary or ternary solid elastic system comprising the small sample of interest and a gauge material of manageable size. The wave transmission matrix of the combined system is derived, and the theoretical values of its eigenvalues are used to determine the expected eigenfrequencies which, equated with the measured values, allow for the numerical estimation of the phase velocities in both materials. The known phase velocity of the gauge material is then used to assess the accuracy of the method. Using computer simulation and the experimental values for the phase velocities, the theoretical eigenfrequencies of the eigenmodes of the embedded elastic system are obtained to validate the method. We conclude that the proposed experimental method may be reliably used to determine the elastic properties of small solid samples whose geometries do not allow a direct measurement of their resonant frequencies.
NASA Astrophysics Data System (ADS)
Li, Chao-Ying; Liu, Shi-Fei; Fu, Jin-Xian
2015-11-01
High-order perturbation formulas for a 3d9 ion in a rhombically elongated octahedron were applied to calculate the electron paramagnetic resonance (EPR) parameters (the g factors, gi, and the hyperfine structure constants Ai, i = x, y, z) of the rhombic Cu2+ center in CoNH4PO4.6H2O. In the calculations, the required crystal-field parameters are estimated from the superposition model, which enables correlation of the crystal-field parameters, and hence the EPR parameters, with the local structure of the rhombic Cu2+ center. Based on the calculations, the ligand octahedron (i.e., the [Cu(H2O)6]2+ cluster) is found to experience local bond length variations ΔZ (≈0.213 Å) and δr (≈0.132 Å) along the axial and perpendicular directions due to the Jahn-Teller effect. Theoretical EPR parameters based on the above local structure are in good agreement with the observed values; the results are discussed.
Theoretical predictions of anti-corrosive properties of THAM and its derivatives.
Malinowski, Szymon; Jaroszyńska-Wolińska, Justyna; Herbert, Tony
2017-12-04
We present quantum chemical theoretical estimations of the anti-corrosive properties of THAM (tris(hydroxymethyl)aminomethane) and three derivatives that differ in the number of benzene rings: THAM-1 (2-amino-3-hydroxy-2-(hydroxymethyl)propyl benzoate), THAM-2 (2-amino-2-(hydroxymethyl)propan-1,3-diyl dibenzoate) and THAM-3 (2-aminopropan-1,2,3-triyl tribenzoate). Fourteen exchange-correlation functionals based on density functional theory (DFT) were chosen for the quantum chemical study of the THAM derivatives. The objective was to examine the effect of benzene rings on the potential anti-corrosive properties of THAM compounds. The results indicate that the addition of benzene rings in THAM derivatives is likely to significantly enhance electrostatic bonding of a THAM-based coating to a metal surface and, thus, its adhesion and long-term corrosion-inhibition effect. Whereas all three derivatives appear to be superior in their bonding characteristics to pure THAM, the order of merit among the three is less clear, although THAM-3 presents as possibly superior.
Interpolation Inequalities and Spectral Estimates for Magnetic Operators
NASA Astrophysics Data System (ADS)
Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael
2018-05-01
We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.
Bibliography for aircraft parameter estimation
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Maine, Richard E.
1986-01-01
An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.
Nonparametric Item Response Curve Estimation with Correction for Measurement Error
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
Preliminary Exploration of Adaptive State Predictor Based Human Operator Modeling
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Gregory, Irene M.
2012-01-01
Control-theoretic modeling of human operator dynamic behavior in manual control tasks has a long and rich history. In the last two decades, there has been a renewed interest in modeling the human operator, and there has also been significant work on techniques used to identify a pilot model of a given structure. The purpose of this research is to go beyond pilot identification based on collected experimental data and to develop a predictor of pilot behavior. An experiment was conducted to quantify the effects of changing aircraft dynamics on an operator's ability to track a signal, in order to eventually model a pilot adapting to changing aircraft dynamics. A gradient descent estimator and a least squares estimator with exponential forgetting used these data to predict pilot stick input. The results indicate that individual pilot characteristics and vehicle dynamics did not affect the accuracy with which either estimator predicted pilot stick input. Both methods were also able to predict pilot stick input during changing aircraft dynamics, and they may have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot.
Towards a more consistent picture of isopycnal mixing in climate models
NASA Astrophysics Data System (ADS)
Gnanadesikan, A.; Pradal, M. A. S.; Koszalka, I.; Abernathey, R. P.
2014-12-01
The stirring of tracers by mesoscale eddies along isopycnal surfaces is often represented in coarse-resolution models by the Redi diffusion parameter ARedi. Theoretical treatments of ARedi often assume it should scale as the eddy energy or the growth rate of mesoscale eddies, producing a picture where it is high in boundary currents and low (of order a few hundred m2/s) in the gyre interiors. However, observational estimates suggest that ARedi should be very large (of order thousands of m2/s) in the gyre interior. We present results of recent simulations comparing a range of spatially constant values of ARedi (400, 800, 1200 and 2400 m2/s) to a spatially resolved estimate based on altimetry and a zonally averaged version of the same estimate. In general, increasing the ARedi coefficient destratifies and warms the high latitudes. Relative to our control simulation, the spatially dependent coefficient is lower in the Southern Ocean but high in the North Pacific, and the temperature changes mirror this. We also examine the response of ocean hypoxia to these changes. In general, the zonally averaged version of the altimetry-based estimate of ARedi does not capture the full 2D representation.
Renewable Energy Power Generation Estimation Using Consensus Algorithm
NASA Astrophysics Data System (ADS)
Ahmad, Jehanzeb; Najm-ul-Islam, M.; Ahmed, Salman
2017-08-01
At the small-consumer level, photovoltaic (PV) panel based grid-tied systems are the most common form of Distributed Energy Resources (DER). Unlike wind, which is suitable only for selected locations, PV panels can generate electricity almost anywhere. Pakistan is currently one of the most energy-deficient countries in the world. In order to mitigate this shortage, the Government has recently announced a policy of net metering for residential consumers. With widespread adoption of DERs, one of the issues faced by load management centers will be accurately estimating the amount of electricity being injected into the grid at any given time through these DERs. This becomes a critical issue once the penetration of DERs increases beyond a certain limit. Grid stability and the management of harmonics become important considerations where electricity is injected at the distribution level through solid-state controllers instead of rotating machinery. This paper presents a solution using graph-theoretic methods for the estimation of the total electricity being injected into the grid over a widespread geographical area. An agent-based consensus approach for distributed computation is used to provide an estimate under varying generation conditions.
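A minimal sketch of the agent-based averaging such a scheme builds on: each node repeatedly averages with its neighbors, converging to the network-wide mean, from which the total injected power follows. Topology, step size, and readings below are illustrative; the paper's specific protocol may differ.

```python
import numpy as np

def average_consensus(x0, adjacency, eps, n_steps=200):
    """Distributed averaging: x <- x - eps * L x, with L the graph Laplacian.
    Converges to the mean of x0 on a connected undirected graph for
    0 < eps < 1/deg_max."""
    x = np.array(x0, float)
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    for _ in range(n_steps):
        x = x - eps * (L @ x)
    return x

# four feeders report locally measured PV injection (kW); ring topology
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
x0 = [12.0, 3.5, 8.0, 0.5]
x = average_consensus(x0, A, eps=0.25)
print("consensus value:", x.round(3), "true mean:", np.mean(x0))
print("estimated total injection:", x[0] * len(x0), "kW")  # mean * n agents
```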
Analysis of the numerical differentiation formulas of functions with large gradients
NASA Astrophysics Data System (ADS)
Tikhovskaya, S. V.
2017-10-01
The solution of a singularly perturbed problem corresponds to a function with large gradients, so the question of interpolation and numerical differentiation of such functions is relevant. Interpolation based on Lagrange polynomials on a uniform mesh is widely applied. However, it is known that the use of such interpolation for functions with large gradients leads to estimates that are not uniform with respect to the perturbation parameter, and therefore to errors of order O(1). To obtain estimates that are uniform with respect to the perturbation parameter, one can use polynomial interpolation on a fitted mesh, such as the piecewise-uniform Shishkin mesh, or construct on a uniform mesh an interpolation formula that is exact on the boundary layer components. In this paper, numerical differentiation formulas for functions with large gradients, based on the interpolation formulas on a uniform mesh proposed by A.I. Zadorin, are investigated. Formulas for the first and second derivatives of a function with two or three interpolation nodes are considered. Error estimates that are uniform with respect to the perturbation parameter are obtained in particular cases. Numerical results validating the theoretical estimates are discussed.
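The O(1) failure on a uniform mesh is easy to reproduce: linear interpolation of the boundary-layer function u(x) = exp(-x/eps) at the midpoint of the first cell has error approaching 1/2 as eps shrinks, for any fixed mesh width h.

```python
import numpy as np

def midpoint_interp_error(eps, h):
    """|linear interpolant - u| at x = h for u(x) = exp(-x/eps), using the
    nodes x = 0 and x = 2h; tends to 1/2 as eps -> 0, i.e. an O(1) error."""
    u0, u1, u2 = np.exp(-np.array([0.0, h, 2.0 * h]) / eps)
    return abs((u0 + u2) / 2.0 - u1)

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"eps={eps:7.0e}  midpoint error: {midpoint_interp_error(eps, 1e-2):.4f}")
```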
Thomopoulos, N; Grant-Muller, S; Tight, M R
2009-11-01
Interest has re-emerged in how to incorporate equity considerations into the appraisal of transport projects, and of large road infrastructure projects in particular. This paper offers a way forward in addressing some of the theoretical and practical difficulties that have to date hindered the incorporation of equity concerns in the appraisal of such projects. An overview of current practice in appraising equity considerations in transport in Europe is first offered, based on an extensive literature review. Acknowledging the value of a framework approach, research towards introducing a theoretical framework is then presented. The proposed framework is based on the well-established MCA Analytic Hierarchy Process and is also contrasted with the use of a CBA-based approach. The framework outlined here offers an additional support tool to decision makers, who will be able to differentiate choices based on their views on specific equity principles and equity types. It also holds the potential to become a valuable tool for evaluators, as a result of the option to assess predefined equity perspectives of decision makers against both the project objectives and the estimated project impacts. This framework may also be of further value to evaluators outside transport.
Van den Bulcke, Marc; Lievens, Antoon; Barbau-Piednoir, Elodie; MbongoloMbella, Guillaume; Roosens, Nancy; Sneyers, Myriam; Casi, Amaya Leunda
2010-03-01
The detection of genetically modified (GM) materials in food and feed products is a complex multi-step analytical process involving screening, identification, and often quantification of the genetically modified organisms (GMO) present in a sample. "Combinatory qPCR SYBRGreen screening" (CoSYPS) is a matrix-based approach for determining the presence of GM plant materials in products. The CoSYPS decision-support system (DSS) interprets the analytical results of SYBRGreen qPCR analysis based on four values: the C(t) and T(m) values and the LOD and LOQ for each method. A theoretical explanation of the different concepts applied in CoSYPS analysis is given (GMO Universe, "prime number tracing", matrix/combinatory approach) and documented using the RoundUp Ready soy GTS40-3-2 as an example. By applying a limited set of SYBRGreen qPCR methods and a newly developed "prime number"-based algorithm, the nature of the subsets of corresponding GMOs in a sample can be determined. Together, these analyses provide guidance for semi-quantitative estimation of GMO presence in a food and feed product.
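A hedged sketch of the "prime number tracing" idea as I read it: each screening element gets a distinct prime, each GMO is the product of its elements' primes, and divisibility tests recover which known GMOs are compatible with the detected elements. The element and event tables below are illustrative, not the CoSYPS reference data.

```python
# illustrative screening elements and events, not the CoSYPS reference tables
ELEMENT_PRIME = {"P35S": 2, "Tnos": 3, "CP4-EPSPS": 5, "CryIAb": 7, "pat": 11}

GMO_CODE = {
    "GTS40-3-2": 2 * 3 * 5,        # P35S, Tnos, CP4-EPSPS (illustrative)
    "MON810":    2 * 7,            # P35S, CryIAb (illustrative)
    "T25":       2 * 3 * 11,       # P35S, Tnos, pat (illustrative)
}

def compatible_gmos(detected_elements):
    """GMOs whose full element signature divides the detected-element product."""
    detected = 1
    for e in detected_elements:
        detected *= ELEMENT_PRIME[e]
    return [name for name, code in GMO_CODE.items() if detected % code == 0]

print(compatible_gmos({"P35S", "Tnos", "CP4-EPSPS"}))   # -> ['GTS40-3-2']
```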
Yamaguchi, Sachi; Seki, Satoko; Sawada, Kota; Takahashi, Satoshi
2013-01-21
Sex change is known from various fish species. In many polygynous species, the largest female usually changes sex to male when the dominant male disappears, as predicted by the classical size-advantage model. However, in some fishes, the disappearance of the male often induces sex change in a smaller female instead of the largest one. The halfmoon triggerfish Sufflamen chrysopterum is one such species. We conducted both a field investigation and a theoretical analysis to test the hypothesis that variation in female fecundity causes sex change by less-fertile females, even if they are not the largest. We estimated the effect of body length and residual body width (an indicator of nutritional status) on clutch size based on field data. Sex-specific growth rates were also estimated from our investigation and a previous study. We incorporated these estimated values into an evolutionarily stable strategy model for status-dependent size at sex change. As a result, we predict that rich females change sex at a larger size than poor ones, since a rich fish can achieve high reproductive success as a female. In some situations, richer females no longer change sex (i.e. lifelong females), and poorer fish change sex just after maturation (i.e. primary males). We also analyzed the effect of size-specific growth and mortality. Copyright © 2012 Elsevier Ltd. All rights reserved.
Steiner, Silvan
2018-01-01
The importance of various information sources in decision-making in interactive team sports is debated. While some highlight the role of the perceptual information provided by the current game context, others point to the role of knowledge-based information that athletes have regarding their team environment. Recently, an integrative perspective considering the simultaneous involvement of both of these information sources in decision-making in interactive team sports has been presented. In a theoretical example concerning passing decisions, the simultaneous involvement of perceptual and knowledge-based information has been illustrated. However, no ready-made method for empirically determining the contributions of these two information sources has been provided. The aim of this article is to bridge this gap and present a statistical approach to estimating the effects of perceptual information and associative knowledge on passing decisions. To this end, a sample dataset of scenario-based passing decisions is analyzed. This article shows how the effects of perceivable team positionings and athletes' knowledge about their fellow team members on passing decisions can be estimated. Ways of transferring this approach to real-world situations and implications for future research using more representative designs are presented.
Kelbe, David; Oak Ridge National Lab.; van Aardt, Jan; ...
2016-10-18
Terrestrial laser scanning has demonstrated increasing potential for rapid comprehensive measurement of forest structure, especially when multiple scans are spatially registered in order to reduce the limitations of occlusion. Although marker-based registration techniques (based on retro-reflective spherical targets) are commonly used in practice, a blind marker-free approach is preferable, insofar as it supports rapid operational data acquisition. To support these efforts, we extend the pairwise registration approach of our earlier work, and develop a graph-theoretical framework to perform blind marker-free global registration of multiple point cloud data sets. Pairwise pose estimates are weighted based on their estimated error, in order to overcome pose conflict while exploiting redundant information and improving precision. The proposed approach was tested for eight diverse New England forest sites, with 25 scans collected at each site. Quantitative assessment was provided via a novel embedded confidence metric, with a mean estimated root-mean-square error of 7.2 cm and 89% of scans connected to the reference node. Lastly, this paper assesses the validity of the embedded multiview registration confidence metric and evaluates the performance of the proposed registration algorithm.
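A minimal sketch of the graph step described above, assuming networkx and placeholder poses: scans are nodes, pairwise pose estimates are error-weighted edges, and each scan is mapped to the reference by composing poses along the minimum-error path (the paper's actual weighting and conflict handling are more elaborate):

```python
import numpy as np
import networkx as nx

# Scans as nodes, pairwise pose estimates as edges weighted by estimated
# error; compose transforms along the lowest-error path to the reference.
G = nx.Graph()
edges = [   # (scan_i, scan_j, 4x4 pose T_ij, estimated error in metres)
    (0, 1, np.eye(4), 0.05),
    (1, 2, np.eye(4), 0.12),
    (0, 2, np.eye(4), 0.30),
]
for i, j, T, err in edges:
    G.add_edge(i, j, T=T, weight=err)

ref = 0
paths = nx.shortest_path(G, target=ref, weight="weight")
global_pose = {}
for node, path in paths.items():
    T = np.eye(4)
    for a, b in zip(path, path[1:]):    # hop towards the reference node
        T = G.edges[a, b]["T"] @ T      # (edge direction/inversion handling omitted)
    global_pose[node] = T               # maps scan 'node' into the reference frame
```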
Verification of the Velocity Structure in Mexico Basin Using the H/V Spectral Ratio of Microtremors
NASA Astrophysics Data System (ADS)
Matsushima, S.; Sanchez-Sesma, F. J.; Nagashima, F.; Kawase, H.
2011-12-01
The authors have been proposing a new theory to calculate the horizontal-to-vertical (H/V) spectral ratio of microtremors, assuming that the wave field is completely diffuse, and have attempted to apply the theory to understand observed microtremor data. It is anticipated that this new theory can be applied to detect the subsurface velocity structure beneath urban areas. Precise information about the subsurface velocity structure is essential for predicting strong ground motion accurately, which is necessary to mitigate seismic disasters. The Mexico basin, which witnessed severe damage during the 1985 Michoacán Earthquake (Ms 8.1) several hundred kilometers away from the source region, is an interesting location in which the reassessment of soil properties is urgent. Because of subsidence, improved estimates of soil properties are mandatory. In order to estimate possible changes in the velocity structure of the Mexico basin, we measured microtremors at strong-motion observation sites in Mexico City, where information about the velocity profiles is available. Using the obtained data, we derived the observed H/V spectral ratio and compared it with the theoretical H/V spectral ratio to assess the validity of our new theory. First we compared the observed H/V spectral ratios for five stations to see the diverse characteristics of this measurement. Then we compared the observed H/V spectral ratios with the theoretical predictions to confirm our theory, assuming the velocity model from previous surveys at the strong-motion observation sites as an initial model. We were able to closely fit both the peak frequency and the amplitude of the observed H/V spectral ratio with the theoretical H/V spectral ratio calculated by our new method. These results show that we have a good initial model. However, the theoretical estimates need some improvement to perfectly fit the observed H/V spectral ratio, which may be an indication that the initial model needs some adjustment. We explore how to improve the velocity model based on the comparison between observations and theory.
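The authors' diffuse-field theory is not reproduced here; the sketch below covers only the conventional observational side, a window-averaged H/V spectral ratio from three-component microtremor records (window length and taper are arbitrary choices):

```python
import numpy as np

# Observed H/V estimate: geometric mean of the horizontal Fourier amplitude
# spectra divided by the vertical spectrum, averaged over tapered windows.
def hv_ratio(ns, ew, ud, fs, win=4096):
    nwin = len(ud) // win
    spec = lambda x, k: np.abs(np.fft.rfft(x[k*win:(k+1)*win] * np.hanning(win)))
    H, V = 0.0, 0.0
    for k in range(nwin):               # average amplitude spectra over windows
        H += np.sqrt(spec(ns, k) * spec(ew, k))
        V += spec(ud, k)
    f = np.fft.rfftfreq(win, 1.0 / fs)
    return f, H / V

# Usage: f, hv = hv_ratio(ns, ew, ud, fs=100.0)  # ns/ew/ud: microtremor traces
```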
a Protocol for High-Accuracy Theoretical Thermochemistry
NASA Astrophysics Data System (ADS)
Welch, Bradley; Dawes, Richard
2017-06-01
Theoretical studies of spectroscopy and reaction dynamics, including the necessary development of potential energy surfaces, rely on accurate thermochemical information. The Active Thermochemical Tables (ATcT) approach by Ruscic [1] incorporates data for a large number of chemical species from a variety of sources (both experimental and theoretical) and derives a self-consistent network capable of making extremely accurate estimates of quantities such as temperature-dependent enthalpies of formation. The network provides rigorous uncertainties, and since the values do not rely on a single measurement or calculation, the provenance of each quantity is also obtained. To expand and improve the network it is desirable to have a reliable protocol, such as the HEAT approach [2], for calculating accurate theoretical data. Here we present and benchmark an approach based on explicitly-correlated coupled-cluster theory and vibrational perturbation theory (VPT2). Methyldioxy and methyl hydroperoxide are important and well-characterized species in combustion processes and begin the family of similar (ethyl-, propyl-based, etc.) compounds, about whose larger members much less is known. Accurate anharmonic frequencies are essential to accurately describe even the 0 K enthalpies of formation, but are especially important for finite-temperature studies. Here we benchmark the spectroscopic and thermochemical accuracy of the approach, comparing with available data for the smallest systems, and comment on the outlook for larger systems that are less well known and characterized. [1] B. Ruscic, Active Thermochemical Tables (ATcT) values based on ver. 1.118 of the Thermochemical Network (2015); available at ATcT.anl.gov. [2] A. Tajti, P. G. Szalay, A. G. Császár, M. Kállay, J. Gauss, E. F. Valeev, B. A. Flowers, J. Vázquez, and J. F. Stanton, JCP 121, 11599 (2004).
Three-dimensional ultrasound strain imaging of skeletal muscles
NASA Astrophysics Data System (ADS)
Gijsbertse, K.; Sprengers, A. M. J.; Nillesen, M. M.; Hansen, H. H. G.; Lopata, R. G. P.; Verdonschot, N.; de Korte, C. L.
2017-01-01
In this study, a multi-dimensional strain estimation method is presented to assess local relative deformation in three orthogonal directions in 3D space of skeletal muscles during voluntary contractions. A rigid translation and compressive deformation of a block phantom, which mimics muscle contraction, are used as experimental validation of the 3D technique and to compare its performance with respect to a 2D-based technique. Axial, lateral and (in the case of 3D) elevational displacements are estimated using a cross-correlation based displacement estimation algorithm. After transformation of the displacements to a Cartesian coordinate system, strain is derived using a least-squares strain estimator. The performance of both methods is compared by calculating the root-mean-square error between the estimated displacements and the theoretical displacements in the phantom experiments. We observe that the 3D technique delivers more accurate displacement estimates than the 2D technique, especially in the translation experiment, where out-of-plane motion hampers the 2D technique. In vivo application of the 3D technique in the musculus vastus intermedius shows good resemblance between the measured strain and the force pattern. Similarity of the strain curves of repetitive measurements indicates the reproducibility of voluntary contractions. These results indicate that 3D ultrasound is a valuable imaging tool to quantify complex tissue motion, especially when there is motion in three directions, which results in out-of-plane errors for 2D techniques.
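A minimal one-dimensional sketch of a least-squares strain estimator of the kind named above: strain is taken as the slope of a moving linear fit through the displacement estimates (kernel length and sampling are invented; the paper's estimator operates on 3D displacement fields):

```python
import numpy as np

# 1-D least-squares strain: slope of a moving linear fit through the
# displacement estimates (kernel of k samples). The 3-D version applies
# the same fit along each orthogonal axis.
def ls_strain(disp, dz, k=9):
    half = k // 2
    z = np.arange(k) * dz
    strain = np.full(disp.shape, np.nan)
    for i in range(half, len(disp) - half):
        seg = disp[i - half:i + half + 1]
        strain[i] = np.polyfit(z, seg, 1)[0]   # local slope = local strain
    return strain

# Example: linear displacement field -> uniform 1% strain
d = 0.01 * np.arange(100) * 0.1e-3             # displacement samples [m]
print(np.nanmean(ls_strain(d, dz=0.1e-3)))     # ~0.01
```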
Monitoring water phase dynamics in winter clouds
NASA Astrophysics Data System (ADS)
Campos, Edwin F.; Ware, Randolph; Joe, Paul; Hudak, David
2014-10-01
This work presents observations of water phase dynamics that demonstrate the theoretical Wegener-Bergeron-Findeisen concepts in mixed-phase winter storms. The work analyzes vertical profiles of air vapor pressure, and equilibrium vapor pressure over liquid water and ice. Based only on the magnitude ranking of these vapor pressures, we identified conditions where liquid droplets and ice particles grow or deplete simultaneously, as well as the conditions where droplets evaporate and ice particles grow by vapor diffusion. The method is applied to ground-based remote-sensing observations during two snowstorms, using two distinct microwave profiling radiometers operating in different climatic regions (North American Central High Plains and Great Lakes). The results are compared with independent microwave radiometer retrievals of vertically integrated liquid water, cloud-base estimates from a co-located ceilometer, reflectivity factor and Doppler velocity observations by nearby vertically pointing radars, and radiometer estimates of liquid water layers aloft. This work thus makes a positive contribution toward monitoring and nowcasting the evolution of supercooled droplets in winter clouds.
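A minimal sketch of the magnitude-ranking logic described above, using standard Magnus-type saturation formulas rather than whatever formulation the authors used (coefficients are common approximations):

```python
import numpy as np

# Magnus-type saturation vapor pressures (hPa), T in deg C; the coefficients
# are standard approximations, not necessarily those used by the authors.
es_liq = lambda T: 6.112 * np.exp(17.62 * T / (243.12 + T))
es_ice = lambda T: 6.112 * np.exp(22.46 * T / (272.62 + T))

def wbf_regime(e, T):
    """Classify the mixed-phase regime by ranking the vapor pressures."""
    if e > es_liq(T):            # e > es,liq > es,ice
        return "droplets and ice both grow"
    if e > es_ice(T):            # es,liq > e > es,ice
        return "WBF: droplets evaporate, ice grows"
    return "droplets and ice both deplete"

print(wbf_regime(e=2.0, T=-15.0))   # es_liq(-15 C) ~ 1.9 hPa -> both grow
```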
Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie
2017-11-10
A pilot-scale reverse osmosis (RO) unit downstream of a membrane bioreactor (MBR) was developed for desalination and reuse of wastewater at a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe rejections of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na⁺ and Cl⁻) and several trace ions (Ca²⁺, Mg²⁺, K⁺ and SO₄²⁻). A universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model accounting for the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
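The abstract does not spell out the model equations; assuming the textbook combination of the solution-diffusion and film models, the observed rejection takes the closed form sketched below (the B and K values are invented):

```python
import numpy as np

# Observed rejection from the solution-diffusion-film model (textbook form):
# combining Js = B*(Cm - Cp) with film theory (Cm - Cp)/(Cb - Cp) = exp(Jv/K)
# gives R_obs = Jv / (Jv + B * exp(Jv/K)).
def r_obs(Jv, B, K):
    return Jv / (Jv + B * np.exp(Jv / K))

Jv = np.array([5e-6, 1e-5, 2e-5])    # permeate flux [m/s], illustrative
print(r_obs(Jv, B=2e-7, K=3e-5))     # rejection rises with flux, then
                                     # concentration polarization limits it
```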
NASA Astrophysics Data System (ADS)
Farrow, Scott; Scott, Michael
2013-05-01
Floods are risky events ranging from small to catastrophic. Although expected flood damages are frequently used for economic policy analysis, alternative measures such as option price (OP) and cumulative prospect value exist. The empirical magnitudes of these measures, whose theoretical ranking is ambiguous, are investigated using case study data from Baltimore City. The base-case OP measure increases mean willingness to pay over the expected damage value by about 3%; this value increases with greater risk aversion, is reduced by increased wealth, and is only slightly altered by higher limits of integration. The base measure based on cumulative prospect theory is about 46% less than expected damages, with estimates declining when alternative parameters are used. The method of aggregation is shown to be important in the cumulative prospect case, which can lead to an estimate up to 41% larger than expected damages. Expected damages remain a plausible, and the most easily computed, measure for analysts.
Precise predictions for V+jets dark matter backgrounds
NASA Astrophysics Data System (ADS)
Lindert, J. M.; Pozzorini, S.; Boughezal, R.; Campbell, J. M.; Denner, A.; Dittmaier, S.; Gehrmann-De Ridder, A.; Gehrmann, T.; Glover, N.; Huss, A.; Kallweit, S.; Maierhöfer, P.; Mangano, M. L.; Morgan, T. A.; Mück, A.; Petriello, F.; Salam, G. P.; Schönherr, M.; Williams, C.
2017-12-01
High-energy jets recoiling against missing transverse energy (MET) are powerful probes of dark matter at the LHC. Searches based on large MET signatures require precise control of the Z(νν̄)+jet background in the signal region. This can be achieved by taking accurate data in control regions dominated by Z(ℓ⁺ℓ⁻)+jet, W(ℓν)+jet and γ+jet production, and extrapolating to the Z(νν̄)+jet background by means of precise theoretical predictions. In this context, recent advances in perturbative calculations open the door to significant sensitivity improvements in dark matter searches. In this spirit, we present a combination of state-of-the-art calculations for all relevant V+jets processes, including NNLO QCD corrections throughout and NLO electroweak corrections supplemented by Sudakov logarithms at two loops. Predictions at parton level are provided together with detailed recommendations for their usage in experimental analyses based on the reweighting of Monte Carlo samples. Particular attention is devoted to the estimate of theoretical uncertainties in the framework of dark matter searches, where subtle aspects such as correlations across different V+jets processes play a key role. The anticipated theoretical uncertainty in the Z(νν̄)+jet background is at the few percent level up to the TeV range.
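A much-simplified sketch of how such predictions are typically consumed: a per-bin theory transfer factor extrapolates a measured control region to the Z(νν̄)+jet background, with the quoted few-percent theory uncertainty propagated (all numbers are invented; the paper's full treatment of correlated uncertainties across processes is not reproduced):

```python
import numpy as np

# Transfer-factor background estimate:
# N_Zvv(bin) = N_data_CR(bin) * R_theory(bin), R = Z(vv)+jet / W(lv)+jet.
n_data_cr = np.array([12000., 4300., 900.])   # events in W(lv)+jet control region
r_theory  = np.array([0.62, 0.68, 0.71])      # per-bin theory ratio, illustrative
dr_theory = 0.03 * r_theory                   # ~3% theory uncertainty

n_zvv = n_data_cr * r_theory
stat  = np.sqrt(n_data_cr) * r_theory         # control-region statistics
theo  = n_data_cr * dr_theory                 # propagated theory uncertainty
print(n_zvv, np.sqrt(stat**2 + theo**2))
```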
Using Latent Class Analysis to Model Temperament Types.
Loken, Eric
2004-10-01
Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
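A minimal sketch of the mixture-model machinery (EM fitting and posterior class probabilities); note it substitutes BIC for the paper's posterior predictive checks, and the data are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# EM fit of a finite mixture; BIC stands in here for the Bayesian posterior
# predictive checks used in the paper. Data are synthetic (three clusters).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, (120, 2)) for m in (-3.0, 0.0, 3.0)])

models = [GaussianMixture(k, n_init=5, random_state=0).fit(X) for k in (1, 2, 3, 4)]
best = min(models, key=lambda m: m.bic(X))
print(best.n_components)        # -> 3 classes recovered

# Posterior class probabilities retain classification uncertainty, which can
# feed multiple imputation of class membership rather than hard assignment.
post = best.predict_proba(X)
```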
Analysis of gas membrane ultra-high purification of small quantities of mono-isotopic silane
de Almeida, Valmor F.; Hart, Kevin J.
2017-01-03
A small quantity of high-value, crude, mono-isotopic silane is a prospective gas for a small-scale, high-recovery, ultra-high membrane purification process. This is an unusual application of gas membrane separation for which we provide a comprehensive analysis of a simple purification model. The goal is to develop direct analytic expressions for estimating the feasibility and efficiency of the method, and guide process design; this is only possible for binary mixtures of silane in the dilute limit, which is a somewhat realistic case. In addition, analytic solutions are invaluable to verify numerical solutions obtained from computer-aided methods. Hence, in this paper we provide new analytic solutions for the purification loops proposed. Among the common impurities in crude silane, methane poses a special membrane separation challenge since it is chemically similar to silane. Other potentially problematic compounds are: ethylene, diborane and ethane (in this order). Nevertheless, we demonstrate, theoretically, that a carefully designed membrane system may be able to purify mono-isotopic, crude silane to electronics-grade level in a reasonable amount of time and at reasonable expense. We advocate a combination of membrane materials that preferentially reject heavy impurities based on mobility selectivity, and light impurities based on solubility selectivity. We provide estimates for the purification of significant contaminants of interest. In this study, we suggest cellulose acetate and polydimethylsiloxane as examples of membrane materials on the basis of limited permeability data found in the open literature. We provide estimates of the membrane area needed and the priming volume of the cell enclosure for fabrication purposes when using the suggested membrane materials. These estimates are largely theoretical in view of the absence of reliable experimental data for the permeability of silane. Finally, future extension of this work to the non-dilute limit may apply to the recovery of silane from rejected streams of natural silicon semiconductor processes.
Frequency domain analysis of errors in cross-correlations of ambient seismic noise
NASA Astrophysics Data System (ADS)
Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri
2016-12-01
We analyse random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, which differs from previous time-domain methods. Extending previous theoretical results on the ensemble-averaged cross-spectrum, we estimate the confidence interval of the stacked cross-spectrum of a finite amount of data at each frequency, using non-overlapping windows of fixed length. The extended theory also connects the amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of the stacked cross-spectrum obtained with different lengths of noise data, corresponding to different numbers of evenly spaced windows of the same duration. This method allows estimating the signal-to-noise ratio (SNR) of the noise cross-correlation in the frequency domain, without specifying the filter bandwidth or signal/noise windows that are needed for time-domain SNR estimation. Based on synthetic ambient noise data, we also compare the probability distributions, causal-part amplitude and SNR of the stacked cross-spectrum obtained using one-bit normalization or pre-whitening with those obtained without these pre-processing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing pre-processing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain the values and uncertainties of the amplitude and phase velocity of the stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (~35 km) and a dense linear array (~20 m) across the plate-boundary faults. A block bootstrap resampling method is used to account for temporal correlation of the noise cross-spectrum at low frequencies (0.05-0.2 Hz) near the ocean microseismic peaks.
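A minimal sketch of the windowed stacking described above, with the cross-window variance standing in for the paper's theoretical confidence intervals:

```python
import numpy as np

# Stacked cross-spectrum from non-overlapping windows of equal length, with
# a per-frequency variance estimate across windows (a simplified stand-in
# for the theoretical confidence intervals derived in the paper).
def stacked_cross_spectrum(u, v, fs, win):
    n = len(u) // win
    S = np.array([np.fft.rfft(u[k*win:(k+1)*win]) *
                  np.conj(np.fft.rfft(v[k*win:(k+1)*win])) for k in range(n)])
    f = np.fft.rfftfreq(win, 1.0 / fs)
    mean = S.mean(axis=0)
    var = S.var(axis=0) / n          # variance of the stacked estimate
    return f, mean, var

# Frequency-domain SNR, with no filter band or signal window needed:
# f, m, v = stacked_cross_spectrum(u, v, fs, win); snr = np.abs(m) / np.sqrt(v)
```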
Analysis of Gas Membrane Ultra-High Purification of Small Quantities of Mono-Isotopic Silane
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Almeida, Valmor F.; Hart, Kevin J.
A small quantity of high-value, crude, mono-isotopic silane is a prospective gas for a small-scale, high-recovery, ultra-high membrane purification process. This is an unusual application of gas membrane separation for which we provide a comprehensive analysis of a simple purification model. The goal is to develop direct analytic expressions for estimating the feasibility and efficiency of the method, and guide process design; this is only possible for binary mixtures of silane in the dilute limit, which is a somewhat realistic case. Among the common impurities in crude silane, methane poses a special membrane separation challenge since it is chemically similar to silane. Other potentially problematic compounds are: ethylene, diborane and ethane (in this order). Nevertheless, we demonstrate, theoretically, that a carefully designed membrane system may be able to purify mono-isotopic, crude silane to electronics-grade level in a reasonable amount of time and at reasonable expense. We advocate a combination of membrane materials that preferentially reject heavy impurities based on mobility selectivity, and light impurities based on solubility selectivity. We provide estimates for the purification of significant contaminants of interest. To improve the separation selectivity, it is advantageous to operate the permeate chamber under vacuum; however, this also requires greater control of the in-leakage of impurities into the system. In this study, we suggest cellulose acetate and polydimethylsiloxane as examples of membrane materials on the basis of limited permeability data found in the open literature. We provide estimates of the membrane area needed and the priming volume of the cell enclosure for fabrication purposes when using the suggested membrane materials. These estimates are largely theoretical in view of the absence of reliable experimental data for the permeability of silane. Last but not least, future extension of this work to the non-dilute limit may apply to the recovery of silane from rejected streams of natural silicon semiconductor processes.
Grace, J.B.; Bollen, K.A.
2008-01-01
Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables are often of interest. © Springer Science+Business Media, LLC 2007.
Prediction of the amount of urban waste solids by applying a gray theoretical model.
Li, Xiao-Ming; Zeng, Guang-Ming; Wang, Ming; Liu, Jin-Jin
2003-01-01
Urban waste solids are now becoming one of the most crucial environmental problems. There are several different technologies normally used for waste solids disposal, among which landfill is more favored in China than others, especially for urban waste solids. Most design work up to now has been based on a rough estimation of the amount of urban waste solids without any theoretical support, which leads to a series of problems. To meet the basic information requirements for the design work, the amount of urban waste solids was predicted in this research by applying the gray theoretical model GM(1,1) through non-linear differential equation simulation. The model parameters were estimated with the least squares method (LSM) by running a MATLAB program, and the hypothesis test results show that the residual between the predicted value and the actual value approximately complies with the normal distribution N(0, 0.21²), and the probability of the residual falling within the range (−0.17, 0.19) is more than 95%, which indicates that the model can be well used for the prediction of the amount of waste solids; this has already been verified by the latest two years of data on urban waste solids from Loudi City, China. With this model, the predicted amount of waste solids produced in Loudi City over the next 30 years is 8,049,000 tons in total.
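The GM(1,1) model itself is standard; below is a compact Python implementation of the usual fitting steps (accumulated generating operation, least-squares estimation of the gray parameters, and back-differencing), with invented data:

```python
import numpy as np

# GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series, then predict.
def gm11(x0, horizon):
    x1 = np.cumsum(x0)                               # accumulated series (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0] # least-squares parameters
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)                     # inverse AGO
    return x0_hat

annual = np.array([30.1, 31.8, 33.0, 34.9, 36.2])   # 10^4 t/yr, invented data
print(gm11(annual, horizon=3)[-3:])                  # next three years
```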
NASA Astrophysics Data System (ADS)
Ganot, Noam; Gal-Yam, Avishay; Ofek, Eran. O.; Sagiv, Ilan; Waxman, Eli; Lapid, Ofer; Kulkarni, Shrinivas R.; Ben-Ami, Sagi; Kasliwal, Mansi M.; The ULTRASAT Science Team; Chelouche, Doron; Rafter, Stephen; Behar, Ehud; Laor, Ari; Poznanski, Dovi; Nakar, Ehud; Maoz, Dan; Trakhtenbrot, Benny; WTTH Consortium, The; Neill, James D.; Barlow, Thomas A.; Martin, Christofer D.; Gezari, Suvi; the GALEX Science Team; Arcavi, Iair; Bloom, Joshua S.; Nugent, Peter E.; Sullivan, Mark; Palomar Transient Factory, The
2016-03-01
The radius and surface composition of an exploding massive star, as well as the explosion energy per unit mass, can be measured using early UV observations of core-collapse supernovae (SNe). We present the first results from a simultaneous GALEX/PTF search for early ultraviolet (UV) emission from SNe. Six SNe II and one Type II superluminous SN (SLSN-II) are clearly detected in the GALEX near-UV (NUV) data. We compare our detection rate with theoretical estimates based on early, shock-cooling UV light curves calculated from models that fit existing Swift and GALEX observations well, combined with volumetric SN rates. We find that our observations are in good agreement with calculated rates assuming that red supergiants (RSGs) explode with fiducial radii of 500 R⊙, explosion energies of 10⁵¹ erg, and ejecta masses of 10 M⊙. Exploding blue supergiants and Wolf-Rayet stars are poorly constrained. We describe how such observations can be used to derive the progenitor radius, surface composition, and explosion energy per unit mass of such SN events, and we demonstrate why UV observations are critical for such measurements. We use the fiducial RSG parameters to estimate the detection rate of SNe during the shock-cooling phase (<1 day after explosion) for several ground-based surveys (PTF, ZTF, and LSST). We show that the proposed wide-field UV explorer ULTRASAT mission is expected to find >85 SNe per year (~0.5 SN per deg²), independent of host galaxy extinction, down to an NUV detection limit of 21.5 mag AB. Our pilot GALEX/PTF project thus convincingly demonstrates that a dedicated, systematic SN survey at the NUV band is a compelling method to study how massive stars end their life.
A Feature-based Approach to Big Data Analysis of Medical Images
Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.
2015-01-01
This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685
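The paper's generative estimator is not reproduced in the abstract; the sketch below shows the plain k-NN density estimate it generalizes, with a KD-tree providing fast neighbor queries:

```python
import numpy as np
from math import gamma, pi
from scipy.spatial import cKDTree

# Plain k-NN density estimate p(x) ~ k / (N * V_k(x)); the paper's estimator
# is a generative model that generalizes this and kernel density estimation.
def knn_density(X, queries, k=10):
    N, d = X.shape
    r, _ = cKDTree(X).query(queries, k=k)     # fast (KD-tree) neighbor search
    rk = np.atleast_2d(r)[:, -1]              # distance to the k-th neighbor
    Vk = (pi ** (d / 2) / gamma(d / 2 + 1)) * rk ** d   # volume of a d-ball
    return k / (N * Vk)

X = np.random.default_rng(1).normal(size=(5000, 3))
print(knn_density(X, X[:5]))                  # densities at sample points
```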
NASA Astrophysics Data System (ADS)
Uebbing, Bernd; Roscher, Ribana; Kusche, Jürgen
2016-04-01
Satellite radar altimeters allow global monitoring of mean sea level changes over the last two decades. However, coastal regions are less well observed due to influences on the returned signal energy by land located inside the altimeter footprint. The altimeter emits a radar pulse, which is reflected at the nadir surface, and measures the two-way travel time, as well as the returned energy as a function of time, resulting in a return waveform. Over the open ocean the waveform shape corresponds to a theoretical model which can be used to infer information on range corrections, significant wave height or wind speed. However, in coastal areas the shape of the waveform is significantly influenced by return signals from land located in the altimeter footprint, leading to peaks which tend to bias the estimated parameters. Recently, several approaches dealing with this problem have been published, including utilizing only parts of the waveform (sub-waveforms), estimating the parameters in two steps or estimating additional peak parameters. We present a new approach to estimating sub-waveforms using conditional random fields (CRF) based on spatio-temporal waveform information. The CRF piecewise approximates the measured waveforms based on a pre-derived dictionary of theoretical waveforms for various combinations of the geophysical parameters; neighboring range gates are likely to be assigned to the same underlying sub-waveform model. Depending on the choice of hyperparameters in the CRF estimation, the classification into sub-waveforms can be finer or coarser, resulting in multiple sub-waveform hypotheses. After the sub-waveforms have been detected, existing retracking algorithms can be applied to derive water heights or other desired geophysical parameters from particular sub-waveforms. To identify the optimal heights from the multiple hypotheses, instead of utilizing a known reference height, we apply a Dijkstra algorithm to find the "shortest path" through all possible heights. We apply our approach to Jason-2 data in different coastal areas, such as the Bangladesh coast and the North Sea, and compare our sea surface heights to those of various existing retrackers. Using the sub-waveform approach, we are able to derive meaningful water heights up to a few kilometers off the coast, where conventional retrackers, such as the standard ocean retracker, no longer provide useful data.
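A minimal sketch of the final hypothesis-selection step as described, assuming networkx and invented height hypotheses: consecutive along-track samples form graph layers, and the Dijkstra shortest path (edge cost = height jump) picks the most consistent track:

```python
import networkx as nx

# One height per along-track sample from several retracking hypotheses:
# a shortest path with edge cost = |height jump| favours the smoothest,
# most consistent track. The hypothesis values below are invented.
hyp = [[12.1, 14.8], [12.3], [12.2, 9.7], [12.4, 12.9]]   # metres per sample

G = nx.DiGraph()
for j, _ in enumerate(hyp[0]):
    G.add_edge("s", (0, j), weight=0.0)                   # virtual source
for i in range(len(hyp) - 1):
    for j, a in enumerate(hyp[i]):
        for k, b in enumerate(hyp[i + 1]):
            G.add_edge((i, j), (i + 1, k), weight=abs(b - a))
for j, _ in enumerate(hyp[-1]):
    G.add_edge((len(hyp) - 1, j), "t", weight=0.0)        # virtual sink

path = nx.dijkstra_path(G, "s", "t")[1:-1]
print([hyp[i][j] for i, j in path])                       # [12.1, 12.3, 12.2, 12.4]
```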
Space-based infrared scanning sensor LOS determination and calibration using star observation
NASA Astrophysics Data System (ADS)
Chen, Jun; Xu, Zhan; An, Wei; Deng, Xin-Pu; Yang, Jun-Gang
2015-10-01
This paper provides a novel methodology for removing sensor bias from a space-based infrared (IR) system (SBIRS) through the use of stars detected in the background field of the sensor. A space-based IR system uses the line of sight (LOS) to a target for target location, so LOS determination and calibration is the key precondition for accurate location and tracking of targets, and the LOS calibration of a scanning sensor is one of the difficulties. Subsequent changes in sensor bias are not taken into account in the conventional LOS determination and calibration process. Based on an analysis of the imaging process of the scanning sensor, a theoretical model for estimating the bias angles from star observations is proposed: the process model of the bias angles and the observation model of the stars are established, an extended Kalman filter (EKF) is used to estimate the bias angles, and the sensor LOS is then calibrated. Time-domain simulation results indicate that the proposed method has high precision and smooth performance for sensor LOS determination and calibration. The timeliness and precision requirements of the target tracking process in a space-based IR tracking system can be met with the proposed algorithm.
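The paper's EKF models are not given in the abstract; the stripped-down linear stand-in below conveys the estimation loop, treating two bias angles as a random walk observed through noisy star residuals (the true bias and all noise levels are invented):

```python
import numpy as np

# Minimal linear Kalman filter for two slowly varying LOS bias angles,
# measured through star residuals (a simplification of the paper's EKF).
rng = np.random.default_rng(0)
x_true = np.array([3e-4, -2e-4])          # rad, true bias (two axes), invented
x, P = np.zeros(2), np.eye(2) * 1e-6
Q = np.eye(2) * 1e-12                     # process noise: near-constant bias
R = np.eye(2) * (50e-6) ** 2              # star-centroid measurement noise

for _ in range(200):
    P = P + Q                             # predict: bias modeled as random walk
    z = x_true + rng.normal(0.0, 50e-6, 2)   # star residual measurement (H = I)
    K = P @ np.linalg.inv(P + R)          # Kalman gain
    x = x + K @ (z - x)                   # update state
    P = (np.eye(2) - K) @ P               # update covariance

print(x, x_true)                          # estimate converges to the bias
```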
Performance bounds for matched field processing in subsurface object detection applications
NASA Astrophysics Data System (ADS)
Sahin, Adnan; Miller, Eric L.
1998-09-01
In recent years there has been considerable interest in the use of ground penetrating radar (GPR) for the non-invasive detection and localization of buried objects. In a previous work, we have considered the use of high resolution array processing methods for solving these problems for measurement geometries in which an array of electromagnetic receivers observes the fields scattered by the subsurface targets in response to a plane wave illumination. Our approach uses the MUSIC algorithm in a matched field processing (MFP) scheme to determine both the range and the bearing of the objects. In this paper we derive the Cramer-Rao bounds (CRB) for this MUSIC-based approach analytically. Analysis of the theoretical CRB has shown that there exists an optimum inter-element spacing of array elements for which the CRB is minimum. Furthermore, the optimum inter-element spacing minimizing CRB is smaller than the conventional half wavelength criterion. The theoretical bounds are then verified for two estimators using Monte-Carlo simulations. The first estimator is the MUSIC-based MFP and the second one is the maximum likelihood based MFP. The two approaches differ in the cost functions they optimize. We observe that Monte-Carlo simulated error variances always lie above the values established by CRB. Finally, we evaluate the performance of our MUSIC-based algorithm in the presence of model mismatches. Since the detection algorithm strongly depends on the model used, we have tested the performance of the algorithm when the object radius used in the model is different from the true radius. This analysis reveals that the algorithm is still capable of localizing the objects with a bias depending on the degree of mismatch.
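For reference, the core MUSIC computation in its generic narrowband uniform-linear-array form (the matched-field variant in the paper replaces the steering vectors with modeled subsurface responses):

```python
import numpy as np

# Generic narrowband MUSIC: project steering vectors onto the noise
# subspace of the array covariance R and peak-pick the pseudospectrum.
def music_spectrum(R, n_src, d_over_lambda, angles_deg):
    M = R.shape[0]
    _, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = vecs[:, :M - n_src]             # noise subspace
    m = np.arange(M)
    P = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d_over_lambda * m * np.sin(th))
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(P)

# R is the sample covariance of the array snapshots, X @ X.conj().T / T;
# pseudospectrum peaks give bearings. The CRB analysis in the paper
# suggests an optimal inter-element spacing below the usual lambda/2.
```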
Kubínová, Zuzana
2014-01-01
Chloroplast number per cell is a frequently examined quantitative anatomical parameter, often estimated by counting chloroplast profiles in two-dimensional (2D) sections of mesophyll cells. However, a mesophyll cell is a three-dimensional (3D) structure and this has to be taken into account when quantifying its internal structure. We compared 2D and 3D approaches to chloroplast counting from different points of view: (i) in practical measurements of mesophyll cells of Norway spruce needles, (ii) in a 3D model of a mesophyll cell with chloroplasts, and (iii) using a theoretical analysis. We applied, for the first time, the stereological method of an optical disector based on counting chloroplasts in stacks of spruce needle optical cross-sections acquired by confocal laser-scanning microscopy. This estimate was compared with counting chloroplast profiles in 2D sections from the same stacks of sections. Comparing practical measurements of mesophyll cells, calculations performed in a 3D model of a cell with chloroplasts as well as a theoretical analysis showed that the 2D approach yielded biased results, while the underestimation could be up to 10-fold. We proved that the frequently used method for counting chloroplasts in a mesophyll cell by counting their profiles in 2D sections did not give correct results. We concluded that the present disector method can be efficiently used for unbiased estimation of chloroplast number per mesophyll cell. This should be the method of choice, especially in coniferous needles and leaves with mesophyll cells with lignified cell walls where maceration methods are difficult or impossible to use. PMID:24336344
Molecular system identification for enzyme directed evolution and design
NASA Astrophysics Data System (ADS)
Guan, Xiangying; Chakrabarti, Raj
2017-09-01
The rational design of chemical catalysts requires methods for the measurement of free energy differences in the catalytic mechanism for any given catalyst Hamiltonian. The scope of experimental learning algorithms that can be applied to catalyst design would also be expanded by the availability of such methods. Methods for catalyst characterization typically either estimate apparent kinetic parameters that do not necessarily correspond to free energy differences in the catalytic mechanism or measure individual free energy differences that are not sufficient for establishing the relationship between the potential energy surface and catalytic activity. Moreover, in order to enhance the duty cycle of catalyst design, statistically efficient methods for the estimation of the complete set of free energy differences relevant to the catalytic activity based on high-throughput measurements are preferred. In this paper, we present a theoretical and algorithmic system identification framework for the optimal estimation of free energy differences in solution phase catalysts, with a focus on one- and two-substrate enzymes. This framework, which can be automated using programmable logic, prescribes a choice of feasible experimental measurements and manipulated input variables that identify the complete set of free energy differences relevant to the catalytic activity and minimize the uncertainty in these free energy estimates for each successive Hamiltonian design. The framework also employs decision-theoretic logic to determine when model reduction can be applied to improve the duty cycle of high-throughput catalyst design. Automation of the algorithm using fluidic control systems is proposed, and applications of the framework to the problem of enzyme design are discussed.
Detection, Identification, Location, and Remote Sensing using SAW RFID Sensor Tags
NASA Technical Reports Server (NTRS)
Barton, Richard J.
2009-01-01
In this presentation, we will consider the problem of simultaneous detection, identification, location estimation, and remote sensing for multiple objects. In particular, we will describe the design and testing of a wireless system capable of simultaneously detecting the presence of multiple objects, identifying each object, and acquiring both a low-resolution estimate of location and a high-resolution estimate of temperature for each object based on wireless interrogation of passive surface acoustic wave (SAW) radiofrequency identification (RFID) sensor tags affixed to each object. The system is being studied for application on the lunar surface as well as for terrestrial remote sensing applications such as pre-launch monitoring and testing of spacecraft on the launch pad and monitoring of test facilities. The system utilizes a digitally beam-formed planar receiving antenna array to extend range and provide direction-of-arrival information coupled with an approximate maximum-likelihood signal processing algorithm to provide near-optimal estimation of both range and temperature. The system is capable of forming a large number of beams within the field of view and resolving the information from several tags within each beam. The combination of both spatial and waveform discrimination provides the capability to track and monitor telemetry from a large number of objects appearing simultaneously within the field of view of the receiving array. In the presentation, we will summarize the system design and illustrate several aspects of the operational characteristics and signal structure. We will examine the theoretical performance characteristics of the system and compare the theoretical results with results obtained from experiments in both controlled laboratory environments and in the field.
THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)
This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...
Vedenov, Dmitry; Alhotan, Rashed A; Wang, Runlian; Pesti, Gene M
2017-02-01
Nutritional requirements and responses of all organisms are estimated using various models representing the response to different dietary levels of the nutrient in question. To help nutritionists design experiments for estimating responses and requirements, we developed a simulation workbook using Microsoft Excel. The objective of the present study was to demonstrate the influence of different numbers of nutrient levels, ranges of nutrient levels and replications per nutrient level on the estimates of requirements based on common nutritional response models. The user provides estimates of the shape of the response curve, the requirement and other parameters, and the observation-to-observation variation. The Excel workbook then produces 1-1000 randomly simulated responses based on the given response curve and estimates the standard errors of the requirement (and other parameters) from different models as an indication of the expected power of the experiment. Interpretations are based on the assumption that the smaller the standard error of the requirement, the more powerful the experiment. The user can see the potential effects of using one or more subjects, different nutrient levels, etc., on the expected outcome of future experiments. From a theoretical perspective, each organism should have some enzyme-catalysed reaction whose rate is limited by the availability of the limiting nutrient, so the response to the limiting nutrient should be similar to enzyme kinetics. In conclusion, the workbook eliminates some of the guesswork involved in designing experiments and determining the minimum number of subjects needed to achieve the desired outcomes.
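The same simulation logic, sketched outside Excel for concreteness and assuming a broken-line response model (the model form, parameter values and noise level are all invented):

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulate noisy responses from an assumed broken-line model, refit each
# simulated experiment, and take the spread of the estimated requirement
# as a measure of the design's power (smaller SE = more powerful design).
def broken_line(x, asymptote, slope, req):
    return asymptote + slope * np.minimum(x - req, 0.0)

levels = np.repeat(np.linspace(0.2, 1.2, 6), 8)   # 6 levels x 8 replicates
true = (95.0, 60.0, 0.8)                          # requirement = 0.8, invented
rng = np.random.default_rng(2)

reqs = []
for _ in range(500):
    y = broken_line(levels, *true) + rng.normal(0.0, 3.0, levels.size)
    p, _ = curve_fit(broken_line, levels, y, p0=(90.0, 50.0, 0.7), maxfev=10000)
    reqs.append(p[2])
print(np.std(reqs))   # expected SE of the requirement under this design
```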
Population genetics of autopolyploids under a mixed mating model and the estimation of selfing rate.
Hardy, Olivier J
2016-01-01
Nowadays, the population genetics analysis of autopolyploid species faces many difficulties due to (i) the limited development of population genetics tools under polysomic inheritance, (ii) difficulties in assessing allelic dosage when genotyping individuals and (iii) a form of inbreeding resulting from the mechanism of 'double reduction'. Consequently, few data analysis computer programs are applicable to autopolyploids. To contribute to bridging this gap, this article first derives theoretical expectations for the inbreeding and identity disequilibrium coefficients under polysomic inheritance in a mixed mating model. Moment estimators of these coefficients are proposed when exact genotypes or just marker phenotypes (i.e. allelic dosage unknown) are available. This led to the development of estimators of the selfing rate based on adult genotypes or phenotypes and applicable to any even ploidy level. Their statistical performance and robustness were assessed by numerical simulations. Contrary to the inbreeding-based estimators, the identity disequilibrium-based estimator using phenotypes is robust (absolute bias generally < 0.05), even in the presence of double reduction, null alleles or biparental inbreeding due to isolation by distance. A fairly good precision of the selfing rate estimates (root-mean-square error < 0.1) is already achievable using a sample of 30-50 individuals phenotyped at 10 loci bearing 5-10 alleles each, conditions reachable using microsatellite markers. Diallelic markers (e.g. SNPs) can also perform satisfactorily in diploids and tetraploids, but more polymorphic markers are necessary for higher ploidy levels. The method is implemented in the software SPAGeDi and should contribute to reducing the lack of population genetics tools applicable to autopolyploids. © 2015 John Wiley & Sons Ltd.
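For the diploid case, the classical inbreeding-based moment estimator has a closed form; a one-line sketch (the paper's generalization to arbitrary even ploidy, and its more robust identity-disequilibrium estimator, are not reproduced here):

```python
# Classical inbreeding-based moment estimator of the selfing rate (diploids):
# at inbreeding equilibrium F = s / (2 - s), hence s = 2F / (1 + F).
def selfing_from_F(F):
    return 2.0 * F / (1.0 + F)

print(selfing_from_F(0.20))   # F = 0.20 -> s ~ 0.33
```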
NASA Astrophysics Data System (ADS)
Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray
2016-06-01
Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of the diffuse attenuation coefficient Kd to the beam attenuation coefficient c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter were measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.
Anchoring in Numeric Judgments of Visual Stimuli
Langeborg, Linda; Eriksson, Mårten
2016-01-01
This article investigates effects of anchoring in age estimation and estimation of quantities, two tasks which to different extents are based on visual stimuli. The results are compared to anchoring in answers to classic general knowledge questions that rely on semantic knowledge. Cognitive load was manipulated to explore possible differences between domains. Effects of source credibility, manipulated by differing instructions regarding the selection of anchor values (no information regarding anchor selection, information that the anchors are randomly generated or information that the anchors are answers from an expert) on anchoring were also investigated. Effects of anchoring were large for all types of judgments but were not affected by cognitive load or by source credibility in either one of the researched domains. A main effect of cognitive load on quantity estimations and main effects of source credibility in the two visually based domains indicate that the manipulations were efficient. Implications for theoretical explanations of anchoring are discussed. In particular, because anchoring did not interact with cognitive load, the results imply that the process behind anchoring in visual tasks is predominantly automatic and unconscious. PMID:26941684
The demonstration of significant ferroelectricity in epitaxial Y-doped HfO2 film
Shimizu, Takao; Katayama, Kiliha; Kiguchi, Takanori; Akama, Akihiro; Konno, Toyohiko J.; Sakata, Osami; Funakubo, Hiroshi
2016-01-01
Ferroelectricity and the Curie temperature are demonstrated for an epitaxial Y-doped HfO2 film grown on a (110) yttrium oxide-stabilized zirconium oxide (YSZ) single crystal using Sn-doped In2O3 (ITO) as the bottom electrode. XRD measurements on the epitaxial film enabled us to investigate its detailed crystal structure, including the orientations of the film. The ferroelectricity was confirmed by electric displacement field vs. electric field hysteresis measurements, which revealed a saturated polarization of 16 μC/cm². The spontaneous polarization estimated from the obtained saturation polarization and the crystal structure analysis was 45 μC/cm². This value is the first experimental estimation of the spontaneous polarization and is in good agreement with the theoretical value from first-principles calculations. The Curie temperature was estimated to be about 450 °C. This study strongly suggests that HfO2-based materials are promising for various ferroelectric applications because their ferroelectric properties, including polarization and Curie temperature, are comparable to those of conventional ferroelectric materials, together with the reported excellent scalability in thickness and compatibility with practical manufacturing processes. PMID:27608815
Estimating the greenhouse gas benefits of forestry projects: A Costa Rican Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busch, Christopher; Sathaye, Jayant; Sanchez Azofeifa, G. Arturo
If the Clean Development Mechanism proposed under the Kyoto Protocol is to serve as an effective means for combating global climate change, it will depend upon reliable estimates of greenhouse gas benefits. This paper sketches the theoretical basis for estimating the greenhouse gas benefits of forestry projects and suggests lessons learned based on a case study of Costa Rica's Protected Areas Project, a 500,000 hectare effort to reduce deforestation and enhance reforestation. The Protected Areas Project in many senses advances the state of the art for Clean Development Mechanism-type forestry projects, as does the third-party verification work of SGS International Certification Services on the project. Nonetheless, sensitivity analysis shows that carbon benefit estimates for the project vary widely based on the imputed deforestation rate in the baseline scenario, i.e., the deforestation rate expected if the project were not implemented. This, along with a newly available national dataset that confirms other research showing a slower rate of deforestation in Costa Rica, suggests that the use of the 1979-1992 forest cover data as the original basis for estimating carbon savings should be reconsidered. When the newly available data are substituted, carbon savings amount to 8.9 Mt (million tonnes) of carbon, down from the original estimate of 15.7 Mt. The primary general conclusion is that project developers should give more attention to forecasting the land use and land cover change scenarios underlying estimates of greenhouse gas benefits.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
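To make the step-size idea concrete, here is a minimal sketch (an illustration under stated assumptions, not the paper's procedure) of the relaxed fixed-point iteration for a univariate two-component normal mixture with known variances and mixing proportion. Step-size 1 recovers the plain successive-approximations (EM) iteration, and values between 0 and 2 correspond to the convergent range described above.

```python
import numpy as np

def em_step(x, mu, sigma=1.0, pi=0.5):
    """One successive-approximations (EM) update of the component means."""
    d0 = np.exp(-0.5 * ((x - mu[0]) / sigma) ** 2)   # component densities
    d1 = np.exp(-0.5 * ((x - mu[1]) / sigma) ** 2)   # (shared constants cancel)
    r1 = pi * d1 / ((1 - pi) * d0 + pi * d1)         # responsibilities
    return np.array([np.sum((1 - r1) * x) / np.sum(1 - r1),
                     np.sum(r1 * x) / np.sum(r1)])

def relaxed_em(x, mu0, omega=1.5, iters=200):
    """Deflected-gradient iteration: omega = 1 is plain EM; 0 < omega < 2
    is the locally convergent range discussed in the abstract."""
    mu = np.asarray(mu0, dtype=float)
    for _ in range(iters):
        mu = mu + omega * (em_step(x, mu) - mu)
    return mu

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
print(relaxed_em(x, [-1.0, 1.0]))   # converges near the true means (-2, 2)
```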
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Degree-of-Freedom Strengthened Cascade Array for DOD-DOA Estimation in MIMO Array Systems.
Yao, Bobin; Dong, Zhi; Zhang, Weile; Wang, Wei; Wu, Qisheng
2018-05-14
In spatial spectrum estimation, the difference co-array can provide extra degrees-of-freedom (DOFs) for promoting parameter identifiability and parameter estimation accuracy. For the sake of acquiring as many DOFs as possible with a given number of physical sensors, we herein design a novel sensor array geometry named the cascade array. This structure is generated by systematically connecting a uniform linear array (ULA) and a non-uniform linear array, and can provide more DOFs than some existing array structures, though fewer than the upper bound indicated by the minimum redundancy array (MRA). We further apply this cascade array to multiple input multiple output (MIMO) array systems, and propose a novel joint direction of departure (DOD) and direction of arrival (DOA) estimation algorithm, based on a reduced-dimensional weighted subspace fitting technique. The algorithm is angle auto-paired and computationally efficient. Theoretical analysis and numerical simulations prove the advantages and effectiveness of the proposed array structure and the related algorithm.
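The extra DOFs come from the difference co-array: for sensor positions p_i, the virtual array consists of all pairwise differences p_i - p_j, and more distinct differences mean more identifiable sources. A small sketch of lag counting follows; the cascade geometry itself is not specified in the abstract, so the second geometry below is a hypothetical ULA-plus-sparse-tail stand-in.

```python
import numpy as np

def difference_coarray_dofs(positions):
    """Count distinct lags in the difference co-array of a linear array.

    positions: integer sensor locations in half-wavelength units. The number
    of unique pairwise differences bounds the virtual-array DOFs available
    to co-array-based DOA methods.
    """
    p = np.asarray(positions)
    diffs = (p[:, None] - p[None, :]).ravel()
    return np.unique(diffs).size

ula = [0, 1, 2, 3, 4, 5]               # 6-element ULA: 2N - 1 = 11 lags
sparse = [0, 1, 2, 3, 4, 9, 14, 19]    # hypothetical ULA + sparse tail
print(difference_coarray_dofs(ula))
print(difference_coarray_dofs(sparse)) # more lags per physical sensor
```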
Jeong, Hyunjo; Nahm, Seung-Hoon; Jhang, Kyung-Young; Nam, Young-Hyun
2003-09-01
The objective of this paper is to develop a nondestructive method for estimating the fracture toughness (K(IC)) of CrMoV steels used as the rotor material of steam turbines in power plants. To achieve this objective, a number of CrMoV steel samples were heat-treated, and the fracture appearance transition temperature (FATT) was determined as a function of aging time. Nonlinear ultrasonics was employed as the theoretical basis to explain harmonic generation in a damaged material, and the nonlinearity parameter of the second harmonic wave was the experimental measure correlated with the fracture toughness of the rotor steel. The nondestructive procedure for estimating K(IC) consists of two steps. First, the correlations between the nonlinearity parameter and the FATT are sought. The FATT values are then used to estimate K(IC) using the K(IC) versus excess temperature (i.e., T-FATT) correlation available in the literature for CrMoV rotor steel.
Dynamic equilibrium of reconstituting hematopoietic stem cell populations.
O'Quigley, John
2010-12-01
Clonal dominance in hematopoietic stem cell populations is an important question of interest but not one we can answer directly. Any estimates are based on indirect measurement. For marked populations, we can equate empirical and theoretical moments for binomial sampling; in particular, we can use the well-known formula for the sampling variation of a binomial proportion. The empirical variance itself cannot always be reliably estimated and some caution is needed. We describe the difficulties here and identify ready solutions which only require appropriate use of variance-stabilizing transformations. From these we obtain estimators for the steady state, or dynamic equilibrium, of the number of hematopoietic stem cells involved in repopulating the marrow. The calculations themselves are not too involved. We give the distribution theory for the estimator as well as simple approximations for practical application. As an illustration, we rework data recently gathered to address the question of whether or not reconstitution of marrow grafts in the clinical setting might be considered to be oligoclonal.
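A minimal sketch of the moment-matching idea under a plain binomial-sampling assumption (the paper's full distribution theory and its cautions about unreliable empirical variances are not reproduced): the arcsine transform stabilizes the sampling variance at roughly 1/(4N), so the empirical variance of transformed marked-cell proportions yields an estimate of the repopulating cell number N.

```python
import numpy as np

def stem_cell_number_estimate(props):
    """Estimate the effective repopulating cell number N from repeated
    marked-cell proportions, assuming binomial sampling.

    For p_hat ~ Bin(N, p)/N, Var[arcsin(sqrt(p_hat))] is approximately
    1/(4N), independent of p (the variance-stabilizing property).
    """
    z = np.arcsin(np.sqrt(np.asarray(props)))
    return 1.0 / (4.0 * np.var(z, ddof=1))

rng = np.random.default_rng(1)
N_true = 50                                         # hypothetical pool size
props = rng.binomial(N_true, 0.3, size=40) / N_true # simulated proportions
print(stem_cell_number_estimate(props))             # should be near N_true
```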
Morris, Martina; Leslie-Cook, Ayn; Akom, Eniko; Stephen, Aloo; Sherard, Donna
2014-01-01
We compare estimates of multiple and concurrent sexual partnerships from Demographic and Health Surveys (DHS) with comparable Population Services International (PSI) surveys in four African countries (Kenya, Lesotho, Uganda, Zambia). DHS data produce significantly lower estimates of all indicators for both sexes in all countries. PSI estimates of multiple partnerships are 1.7 times higher [1.4 for men (M), 3.0 for women (W)], cumulative prevalence of concurrency is 2.4 times higher (2.2 M, 2.7 W), the point prevalence of concurrency is 3.5 times higher (3.5 M, 3.3 W), and the fraction of multi-partnered persons who report concurrency last year is 1.4 times higher (1.6 M, 0.9 W). These findings provide strong empirical evidence that DHS surveys systematically underestimate levels of multiple and concurrent partnerships. The underestimates will contaminate both empirical analyses of the link between sexual behavior and HIV infection, and theoretical models for combination prevention that use these data for inputs. PMID:24077973
Morris, Martina; Vu, Lung; Leslie-Cook, Ayn; Akom, Eniko; Stephen, Aloo; Sherard, Donna
2014-04-01
We compare estimates of multiple and concurrent sexual partnerships from Demographic and Health Surveys (DHS) with comparable Population Services International (PSI) surveys in four African countries (Kenya, Lesotho, Uganda, Zambia). DHS data produce significantly lower estimates of all indicators for both sexes in all countries. PSI estimates of multiple partnerships are 1.7 times higher [1.4 for men (M), 3.0 for women (W)], cumulative prevalence of concurrency is 2.4 times higher (2.2 M, 2.7 W), the point prevalence of concurrency is 3.5 times higher (3.5 M, 3.3 W), and the fraction of multi-partnered persons who report concurrency last year is 1.4 times higher (1.6 M, 0.9 W). These findings provide strong empirical evidence that DHS surveys systematically underestimate levels of multiple and concurrent partnerships. The underestimates will contaminate both empirical analyses of the link between sexual behavior and HIV infection, and theoretical models for combination prevention that use these data for inputs.
Wave Period and Coastal Bathymetry Estimations from Satellite Images
NASA Astrophysics Data System (ADS)
Danilo, Celine; Melgani, Farid
2016-08-01
We present an approach for wave period and coastal water depth estimation. The approach, based on wave observations, is entirely independent of ancillary data and can theoretically be applied to SAR or optical images. To demonstrate its feasibility we apply our method to more than 50 Sentinel-1A images of the Hawaiian Islands, well known for their long waves. Six wave buoys are available for comparing our results with in situ measurements. The results on Sentinel-1A images show that half of the images were unsuitable for applying the method (no swell, or wavelength too small to be captured by the SAR). On the other half, 78% of the estimated wave periods are in accordance with buoy measurements. In addition, we present preliminary results for the estimation of coastal water depth on a Landsat-8 image (with characteristics close to Sentinel-2A). With a squared correlation coefficient of 0.7 against ground-truth measurements, this approach shows promising results for monitoring coastal bathymetry.
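The depth step rests on linear wave theory: with the wave period and local wavelength read from the imagery, the dispersion relation ω² = gk·tanh(kh) can be inverted for the depth h. A sketch under those assumptions (not the authors' implementation):

```python
import numpy as np
from scipy.optimize import brentq

G = 9.81  # m/s^2

def depth_from_waves(period_s, wavelength_m):
    """Invert the linear dispersion relation w^2 = g k tanh(k h) for depth h.

    Only informative in intermediate water: in deep water tanh(kh) -> 1 and
    h becomes unobservable (brentq then finds no sign change), which is why
    the method targets coastal scenes.
    """
    w = 2 * np.pi / period_s
    k = 2 * np.pi / wavelength_m
    f = lambda h: w**2 - G * k * np.tanh(k * h)
    return brentq(f, 1e-3, 1e4)

# Hypothetical swell: 12 s period, 180 m wavelength -> depth of ~30 m
print(depth_from_waves(period_s=12.0, wavelength_m=180.0))
```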
A source number estimation method for single optical fiber sensor
NASA Astrophysics Data System (ADS)
Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu
2015-10-01
The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods degrades with inaccurate source number estimation. Many excellent algorithms have been proposed to deal with source number estimation in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor case. This paper presents a source number estimation method for data received by a single optical fiber sensor. By a delay process, the single-sensor data are converted to multidimensional form, and the data covariance matrix is constructed. The estimation algorithms used in array signal processing can then be utilized. Information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, reducing the fluctuation and uncertainty of its eigenvalues. Simulation results show that ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although it performs poorly at low SNR, is able to accurately estimate the number of sources under colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
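A sketch of the single-sensor pipeline described above, assuming a plain delay embedding and the standard Wax-Kailath MDL criterion (the covariance smoothing step and the GDE variant are omitted):

```python
import numpy as np

def delay_embed(x, p):
    """Turn a single-sensor record into p-dimensional pseudo-snapshots."""
    N = len(x) - p + 1
    return np.stack([x[i:i + N] for i in range(p)])

def source_number_mdl(x, p=8):
    """Wax-Kailath MDL estimate of the source count from delay-embedded data."""
    X = delay_embed(x, p)
    N = X.shape[1]
    lam = np.sort(np.linalg.eigvalsh(X @ X.conj().T / N))[::-1]
    mdl = []
    for k in range(p):
        tail = lam[k:]                              # smallest p - k eigenvalues
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)
        mdl.append(-N * (p - k) * np.log(ratio)
                   + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(mdl))

rng = np.random.default_rng(2)
t = np.arange(4000) / 4000
x = (np.sin(2 * np.pi * 50 * t) + 0.7 * np.sin(2 * np.pi * 120 * t)
     + 0.1 * rng.standard_normal(t.size))
print(source_number_mdl(x))  # each real sinusoid occupies two eigenvalues
```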
Relation between energy radiation ratio and rupture speed in numerically simulated earthquakes
NASA Astrophysics Data System (ADS)
Noda, H.; Lapusta, N.; Kanamori, H.
2011-12-01
One of the prominent questions in seismology is energy partitioning during an earthquake. Venkataraman and Kanamori [2004] discussed the radiation ratio η_R, the ratio of radiated energy E_R to the partial strain energy change ΔW_0, which is the total released strain energy minus the energy that would have been dissipated if the fault had slipped at the final stress. They found a positive correlation between η_R and rupture speed in large earthquakes, and compared these data with theoretical estimates from simplified models. The relation between η_R and rupture speed is of great interest since both quantities can be estimated independently, although there are large uncertainties. We conduct numerical simulations of dynamic ruptures and study the resulting energy partitioning (and η_R) and averaged rupture speeds V_r. So far, we have considered problems based on TPV103 from the SCEC/USGS Spontaneous Rupture Code Verification Project [Harris et al., 2009, http://scecdata.usc.edu/cvws/], which is a 3-D problem with the possibility of pronounced rate weakening at coseismic slip rates caused by flash heating of microscopic asperities [Rice, 1999]. We study the effect of the background shear stress level τ_b and of the manner in which rupture is arrested, either in rate-strengthening or unbreakable areas of the fault. Note that rupture speed at each fault point is defined while the rupture is still in progress, whereas η_R is defined after all dynamic processes such as propagation of rupture fronts, healing fronts, and seismic waves have been completed. These complexities may cause a difference from the theoretical estimates based on simple models, an issue we explore in this study. Overall, our simulations produce a relation between η_R and V_r broadly consistent with the study of Venkataraman and Kanamori [2004] for natural earthquakes and the corresponding theoretical estimates. The model by Mott [1948] agrees best with the cases studied so far, although it is not rigorously correct [Freund, 1990]. For example, a case which is similar to TPV103 except in the nucleation procedure yields a pulse-like rupture with a spatially averaged rupture speed V_r = 0.59 c_s and η_R = 0.32, while the theoretical estimates [Fossum and Freund, 1975 for mode II; Kostrov, 1966, and Eshelby, 1969, for mode III] predict η_R of about 0.5 for this rupture speed. This difference is not significant compared with the large observational error. As τ_b increases, V_r increases monotonically, while η_R exhibits more complex behavior: it increases with τ_b for pulse-like ruptures, decreases by about 0.1 at the transition to crack-like ruptures, and then increases again. Frictional dissipation is significant when a rupture front reaches a rate-strengthening region. If the barrier is changed to an unbreakable region, η_R decreases and V_r/c_s increases, at most by 0.3 and 0.1, respectively. Although sharper arrest of rupture causes larger E_R per seismic moment due to the stopping phases, ΔW_0 per seismic moment increases more markedly due to large-wavenumber components in the final slip distribution.
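In the notation above, with σ_1 the final stress, D̄ the average slip, and A the rupture area (standard definitions following Venkataraman and Kanamori [2004]; an assumption, since the abstract gives only the verbal definition), the radiation ratio reads:

```latex
\eta_R = \frac{E_R}{\Delta W_0},
\qquad
\Delta W_0 = \Delta W - \sigma_1 \bar{D} A ,
```

so that η_R compares the radiated energy with the strain energy actually available for radiation and fracture.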
Counting defects in an instantaneous quench.
Ibaceta, D; Calzetta, E
1999-09-01
We consider the formation of defects in a nonequilibrium second-order phase transition induced by an instantaneous quench to zero temperature in a type II superconductor. We perform a full nonlinear simulation where we follow the evolution in time of the local order parameter field. We determine how far into the phase transition theoretical estimates of the defect density based on the Gaussian approximation yield a reliable prediction for the actual density. We also characterize quantitatively some aspects of the out of equilibrium phase transition.
Remarks on a financial inverse problem by means of Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Cuomo, Salvatore; Di Somma, Vittorio; Sica, Federica
2017-10-01
Estimating the price of a barrier option is a typical inverse problem. In this paper we present a numerical and statistical framework for a market with a risk-free interest rate and a risky asset, described by a Geometric Brownian Motion (GBM). After approximating the risky asset dynamics with a numerical method, we obtain the final option price by following an approach based on sequential Monte Carlo methods. All theoretical results are applied to the case of an option whose underlying is a real stock.
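As a concrete baseline for the forward problem, here is a plain Monte Carlo sketch (assumptions: a down-and-out call, discrete barrier monitoring, and ordinary rather than sequential Monte Carlo) of pricing a barrier option under GBM:

```python
import numpy as np

def barrier_call_price(S0, K, B, r, sigma, T,
                       n_paths=100_000, n_steps=252, seed=0):
    """Monte Carlo price of a down-and-out call under GBM.

    Paths that touch the barrier B at a monitoring date are knocked out
    before payoff; the discounted mean payoff estimates the price.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0, dtype=float)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        alive &= S > B                      # knock out barrier-touching paths
    payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()

# Illustrative parameters, not taken from the paper:
print(barrier_call_price(S0=100, K=100, B=80, r=0.02, sigma=0.25, T=1.0))
```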
Ionospheric propagation correction modeling for satellite altimeters
NASA Technical Reports Server (NTRS)
Nesterczuk, G.
1981-01-01
The theoretical basis and available accuracy verifications were reviewed and compared for ionospheric correction procedures based on a global ionospheric model driven by solar flux, and for a technique in which the measured electron content (from Faraday rotation measurements) for one path is mapped into corrections for a hemisphere. For these two techniques, the RMS errors in correcting satellite altimeter data (at 14 GHz) are estimated to be 12 cm and 3 cm, respectively. On the basis of global accuracy and reliability after implementation, the solar flux model is recommended.
NASA Technical Reports Server (NTRS)
Chang, Ching L.; Jiang, Bo-Nan
1990-01-01
A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.
Stellar Atmospheric Parameterization Based on Deep Learning
NASA Astrophysics Data System (ADS)
Pan, Ru-yang; Li, Xiang-ru
2017-07-01
Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, with 3821-500-100-50-1 nodes in the successive layers. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model, to automatically estimate three physical parameters: the effective temperature (Teff), the surface gravitational acceleration (lg g), and the metal abundance ([Fe/H]). The results show that the stacked-autoencoder deep neural network has better accuracy for the estimation. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for Teff/K, 0.0058 for lg(Teff/K), 0.1706 for lg(g/(cm·s-2)), and 0.1294 dex for [Fe/H]; on the theoretical spectra, the MAEs are 15.34 for Teff/K, 0.0011 for lg(Teff/K), 0.0214 for lg(g/(cm·s-2)), and 0.0121 dex for [Fe/H].
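A minimal PyTorch sketch of the 3821-500-100-50-1 network described above; the activations, loss, optimizer, and training loop are assumptions, and the stacked-autoencoder pre-training the paper relies on is not reproduced:

```python
import torch
import torch.nn as nn

# Layer sizes follow the reported 3821-500-100-50-1 architecture; everything
# else below (sigmoid activations, L1 loss, Adam, epoch count) is assumed.
model = nn.Sequential(
    nn.Linear(3821, 500), nn.Sigmoid(),
    nn.Linear(500, 100), nn.Sigmoid(),
    nn.Linear(100, 50), nn.Sigmoid(),
    nn.Linear(50, 1),                  # one physical parameter, e.g. Teff
)

loss_fn = nn.L1Loss()                  # mean absolute error, as reported
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

flux = torch.randn(32, 3821)           # placeholder batch of normalized spectra
teff = torch.randn(32, 1)              # placeholder labels
for _ in range(100):
    optim.zero_grad()
    loss = loss_fn(model(flux), teff)
    loss.backward()
    optim.step()
```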
Cryogen-free heterodyne-enhanced mid-infrared Faraday rotation spectrometer
Wang, Yin; Nikodem, Michal; Wysocki, Gerard
2013-01-01
A new detection method for Faraday rotation spectra of paramagnetic molecular species is presented. Near shot-noise limited performance in the mid-infrared is demonstrated using a heterodyne-enhanced Faraday rotation spectroscopy (H-FRS) system without any cryogenic cooling. Theoretical analysis is performed to estimate the ultimate sensitivity to polarization rotation for both heterodyne and conventional FRS. Sensing of nitric oxide (NO) has been performed with an H-FRS system based on a thermoelectrically cooled 5.24 μm quantum cascade laser (QCL) and a mercury-cadmium-telluride photodetector. The QCL relative intensity noise that dominates at low frequencies is largely avoided by performing the heterodyne detection in the radio frequency range. H-FRS exhibits a total noise level of only 3.7 times the fundamental shot noise. The achieved sensitivity to polarization rotation of 1.8 × 10−8 rad/Hz1/2 is only 5.6 times higher than the ultimate theoretical sensitivity limit estimated for this system. The path- and bandwidth-normalized NO detection limit of 3.1 ppbv-m/Hz1/2 was achieved using the R(17/2) transition of NO at 1906.73 cm−1. PMID:23388967
Theoretical impact of insecticide-impregnated school uniforms on dengue incidence in Thai children.
Massad, Eduardo; Amaku, Marcos; Coutinho, Francisco Antonio Bezerra; Kittayapong, Pattamaporn; Wilder-Smith, Annelies
2013-03-28
Children carry the main burden of morbidity and mortality caused by dengue. Children spend a considerable amount of their day at school; hence strategies that reduce human-mosquito contact to protect against the day-biting habits of Aedes mosquitoes at schools, such as insecticide-impregnated uniforms, could be an effective prevention strategy. We used mathematical models to calculate the risk of dengue infection based on the force of infection, taking into account the estimated proportion of mosquito bites that occur in school and the proportion of school time that children wear the impregnated uniforms. The use of insecticide-impregnated uniforms has an efficacy varying from around 6% in the most pessimistic scenarios to 55% in the most optimistic scenarios simulated. Reducing contact between mosquito bites and human hosts via insecticide-treated uniforms during school time is theoretically effective in reducing dengue incidence and may be a valuable additional tool for dengue control in school-aged children. The efficacy of this strategy, however, is dependent on the compliance of the target population in terms of proper and consistent wearing of uniforms and, perhaps more importantly, the proportion of bites inflicted by the Aedes population during school time.
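A back-of-envelope sketch of the efficacy logic (a deliberate simplification: it takes the averted fraction of the force of infection to be the product of insecticide efficacy, the share of bites received at school, and wearing compliance, whereas the paper works with full risk-of-infection expressions; all input values are hypothetical):

```python
def uniform_efficacy(insecticide_eff, bites_in_school, wearing_compliance):
    """Fractional reduction in the force of infection, assuming averted
    bites scale with the product of the three factors."""
    return insecticide_eff * bites_in_school * wearing_compliance

# Hypothetical scenarios spanning the pessimistic-to-optimistic range:
print(uniform_efficacy(0.5, 0.2, 0.6))   # ~0.06, near the pessimistic 6%
print(uniform_efficacy(0.9, 0.75, 0.8))  # ~0.54, near the optimistic 55%
```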
Theoretical impact of insecticide-impregnated school uniforms on dengue incidence in Thai children
Massad, Eduardo; Amaku, Marcos; Coutinho, Francisco Antonio Bezerra; Kittayapong, Pattamaporn; Wilder-Smith, Annelies
2013-01-01
Background Children carry the main burden of morbidity and mortality caused by dengue. Children spend a considerable amount of their day at school; hence strategies that reduce human–mosquito contact to protect against the day-biting habits of Aedes mosquitoes at schools, such as insecticide-impregnated uniforms, could be an effective prevention strategy. Methodology We used mathematical models to calculate the risk of dengue infection based on the force of infection, taking into account the estimated proportion of mosquito bites that occur in school and the proportion of school time that children wear the impregnated uniforms. Principal findings The use of insecticide-impregnated uniforms has an efficacy varying from around 6% in the most pessimistic scenarios to 55% in the most optimistic scenarios simulated. Conclusions Reducing contact between mosquito bites and human hosts via insecticide-treated uniforms during school time is theoretically effective in reducing dengue incidence and may be a valuable additional tool for dengue control in school-aged children. The efficacy of this strategy, however, is dependent on the compliance of the target population in terms of proper and consistent wearing of uniforms and, perhaps more importantly, the proportion of bites inflicted by the Aedes population during school time. PMID:23541045
Modeling the utility of binaural cues for underwater sound localization.
Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo
2014-06-01
The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances.
Ant-inspired density estimation via random walks.
Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A
2017-10-03
Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in a few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
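A toy simulation of the encounter-rate mechanism (grid size, torus geometry, walk rule, and "encounter = sharing a cell" are all illustrative assumptions, not the paper's model):

```python
import numpy as np

def encounter_density_estimate(n_agents=200, grid=50, steps=400, seed=3):
    """Agents random-walk on a torus grid; each estimates global density
    from its own encounter rate (encounters per step), as ants do."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, grid, size=(n_agents, 2))
    encounters = np.zeros(n_agents)
    for _ in range(steps):
        pos = (pos + rng.integers(-1, 2, size=pos.shape)) % grid
        # count co-located agents: cell sharing stands in for "bumping into"
        _, inv, counts = np.unique(pos, axis=0,
                                   return_inverse=True, return_counts=True)
        encounters += counts[inv] - 1
    rate = encounters / steps
    return rate.mean(), n_agents / grid**2

est, true = encounter_density_estimate()
print(est, true)   # the mean encounter rate tracks the true agent density
```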
Nuclear half-lives for α-radioactivity of elements with 100 ≤ Z ≤ 130
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhury, P. Roy; Samanta, C.; Physics Department, Gottwald Science Center, University of Richmond, Richmond, VA 23173
2008-11-15
Theoretical estimates for the half-lives of about 1700 isotopes of heavy elements with 100 ≤ Z ≤ 130 are tabulated using theoretical Q-values. The quantum mechanical tunneling probabilities are calculated within a WKB framework using microscopic nuclear potentials. The microscopic nucleus-nucleus potentials are obtained by folding the densities of interacting nuclei with a density-dependent M3Y effective nucleon-nucleon interaction. The α-decay half-lives calculated in this formalism using the experimental Q-values were found to be in good agreement over a wide range of experimental data spanning about 20 orders of magnitude. The theoretical Q-values used for the present calculations are extracted from three different mass estimates, viz. Myers-Swiatecki, Muntian-Hofmann-Patyk-Sobiczewski, and Koura-Tachibana-Uno-Yamada.
NASA Astrophysics Data System (ADS)
Glickson, D.; Holmes, K. J.; Cooke, D.
2012-12-01
Marine and hydrokinetic (MHK) resources are increasingly becoming part of energy regulatory, planning, and marketing activities in the U.S. and elsewhere. In particular, state-based renewable portfolio standards and federal production and investment tax credits have led to an increased interest in the possible deployment of MHK technologies. The Energy Policy Act of 2005 (Public Law 109-58) directed the Department of Energy (DOE) to estimate the size of the MHK resource base. In order to help DOE prioritize its overall portfolio of future research, increase the understanding of the potential for MHK resource development, and direct MHK device and/or project developers to locations of greatest promise, the DOE Wind and Water Power Program requested that the National Research Council (NRC) provide an evaluation of the detailed assessments being conducted by five individual resource assessment groups. These resource assessment groups were contracted to estimate the amount of extractable energy from wave, tidal, ocean current, ocean thermal energy conversion, and riverine resources. Performing these assessments requires that each resource assessment group estimate the average power density of the resource base, as well as the basic technology characteristics and spatial and temporal constituents that convert power into electricity for that resource. The NRC committee evaluated the methodologies, technologies, and assumptions associated with each of these resource assessments. The committee developed a conceptual framework for delineating the processes used to develop the assessment results requested by the DOE, with definitions of the theoretical, technical, and practical resource to clarify elements of the overall resource assessment process. This allowed the NRC committee to make a comparison of different methods, terminology, and processes among the five resource assessment groups. The committee concluded that the overall approach taken by the wave resource and tidal resource assessment groups is a useful contribution to understanding the distribution and possible magnitude of energy sources from waves and tides in U.S. waters, but had concerns regarding the usefulness of aggregating the analysis to produce a "single number" estimate of the total national or regional theoretical and technical resource base. The committee had further concerns about the methodologies and assumptions within each assessment, as well as the limited scope of validation exercises. An interim report was released in July 2011, and the committee's final report will be released in Fall 2012.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, H.K.; Novak, T.
2008-03-15
During the past decade, several methane/air explosions in abandoned or sealed areas of underground coal mines have been attributed to lightning. Previously published work by the authors showed, through computer simulations, that currents from lightning could propagate down steel-cased boreholes and ignite explosive methane/air mixtures. The presented work expands on the model and describes a methodology based on IEEE Standard 1410-2004 to estimate the probability of an ignition. The methodology provides a means to better estimate the likelihood that an ignition could occur underground and, more importantly, allows the calculation of what-if scenarios to investigate the effectiveness of engineering controls to reduce the hazard. The computer software used for calculating fields and potentials is also verified by comparing computed results with an independently developed theoretical model of electromagnetic field propagation through a conductive medium.
A quantitative investigation of the fracture pump-in/flowback test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plahn, S.V.; Nolte, K.G.; Thompson, L.G.
1997-02-01
Fracture-closure pressure is an important parameter for fracture treatment design and evaluation. The pump-in/flowback (PIFB) test is frequently used to estimate its magnitude. The test is attractive because bottomhole pressures (BHPs) during flowback develop a distinct and repeatable signature. This is in contrast to the pump-in/shut-in test, where strong indications of fracture closure are rarely seen. Various techniques are used to extract closure pressure from the flowback-pressure response. Unfortunately, these techniques give different estimates for closure pressure, and their theoretical bases are not well established. The authors present results that place the PIFB test on a firmer foundation. A numerical model is used to simulate the PIFB test and glean physical mechanisms contributing to the response. On the basis of their simulation results, they propose interpretation techniques that give better estimates of closure pressure than existing techniques.
Tug-of-war lacunarity—A novel approach for estimating lacunarity
NASA Astrophysics Data System (ADS)
Reiss, Martin A.; Lemmerer, Birgit; Hanslmeier, Arnold; Ahammer, Helmut
2016-11-01
Modern instrumentation provides us with massive repositories of digital images that will likely only increase in the future. Therefore, it has become increasingly important to automatize the analysis of digital images, e.g., with methods from pattern recognition. These methods aim to quantify the visual appearance of captured textures with quantitative measures. As such, lacunarity is a useful multi-scale measure of texture's heterogeneity but demands high computational efforts. Here we investigate a novel approach based on the tug-of-war algorithm, which estimates lacunarity in a single pass over the image. We computed lacunarity for theoretical and real world sample images, and found that the investigated approach is able to estimate lacunarity with low uncertainties. We conclude that the proposed method combines low computational efforts with high accuracy, and that its application may have utility in the analysis of high-resolution images.
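A sketch of the single-pass idea, applying the AMS tug-of-war second-moment estimator to box masses at one scale; the box size, sketch width, and explicit sign matrix are assumptions for clarity (a true streaming implementation would derive the signs from a hash of the box index rather than storing them):

```python
import numpy as np

def tug_of_war_lacunarity(img, box=8, n_sketch=64, seed=4):
    """One-pass lacunarity estimate at one box scale via AMS tug-of-war.

    Each box mass is streamed into n_sketch counters with random +/-1
    signs; the mean of the squared counters estimates sum(M^2) without
    storing per-box masses. Lacunarity is <M^2>/<M>^2.
    """
    rng = np.random.default_rng(seed)
    ny, nx = img.shape[0] // box, img.shape[1] // box
    signs = rng.choice([-1.0, 1.0], size=(n_sketch, ny * nx))
    sketch = np.zeros(n_sketch)
    total = 0.0
    for i in range(ny * nx):                 # single pass over the boxes
        r, c = divmod(i, nx)
        m = img[r*box:(r+1)*box, c*box:(c+1)*box].sum()
        sketch += signs[:, i] * m
        total += m
    second_moment = np.mean(sketch**2)       # estimates sum of M^2
    return (ny * nx) * second_moment / total**2

img = np.random.default_rng(5).random((256, 256))
print(tug_of_war_lacunarity(img))  # near 1 for a homogeneous random texture
```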
Uncertainty Management for Diagnostics and Prognostics of Batteries using Bayesian Techniques
NASA Technical Reports Server (NTRS)
Saha, Bhaskar; Goebel, Kai
2007-01-01
Uncertainty management has always been the key hurdle faced by diagnostics and prognostics algorithms. A Bayesian treatment of this problem provides an elegant and theoretically sound approach to the modern Condition-Based Maintenance (CBM)/Prognostic Health Management (PHM) paradigm. The application of Bayesian techniques to regression and classification in the form of the Relevance Vector Machine (RVM), and to state estimation as in Particle Filters (PF), provides a powerful tool to integrate the diagnosis and prognosis of battery health. The RVM, which is a Bayesian treatment of the Support Vector Machine (SVM), is used for model identification, while the PF framework uses the learnt model, statistical estimates of noise, and anticipated operational conditions to provide estimates of remaining useful life (RUL) in the form of a probability density function (PDF). This type of prognostics generates a significant value addition to the management of any operation involving electrical systems.
Indirect boundary force measurements in beam-like structures using a derivative estimator
NASA Astrophysics Data System (ADS)
Chesne, Simon
2014-12-01
This paper proposes a new method for the identification of boundary forces (shear force or bending moment) in a beam, based on displacement measurements. The problem is considered in terms of the determination of the boundary spatial derivatives of transverse displacements. By assuming the displacement fields to be approximated by Taylor expansions in a domain close to the boundaries, the spatial derivatives can be estimated using specific point-wise derivative estimators. This approach makes it possible to extract the derivatives using a weighted spatial integration of the displacement field. Following the theoretical description, numerical simulations made with exact and noisy data are used to determine the relationship between the size of the integration domain and the wavelength of the vibrations. The simulations also highlight the self-regularization of the technique. Experimental measurements demonstrate the feasibility and accuracy of the proposed method.
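To illustrate the boundary-derivative step, here is a point-wise sketch that fits a truncated Taylor expansion by least squares over a small domain near the boundary and reads off the desired derivative (the paper's estimators use weighted spatial integration of the displacement field; this polynomial-fit stand-in is an assumption):

```python
import math
import numpy as np

def boundary_derivative(x, w, order, deg=4):
    """Estimate the order-th spatial derivative of the displacement field w
    at x = 0 by least-squares fitting a Taylor polynomial over a small
    domain near the boundary."""
    A = np.vander(np.asarray(x), deg + 1, increasing=True)  # 1, x, x^2, ...
    coef, *_ = np.linalg.lstsq(A, np.asarray(w), rcond=None)
    return math.factorial(order) * coef[order]              # d^n w/dx^n at 0

# Hypothetical beam displacement w(x) = x^3 near the boundary: w''' = 6,
# which (up to the bending stiffness EI) is proportional to the shear force.
x = np.linspace(0, 0.2, 40)
w = x**3 + 1e-8 * np.random.default_rng(6).standard_normal(x.size)
print(boundary_derivative(x, w, order=3))   # ~ 6
```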
Toward Automatic Verification of Goal-Oriented Flow Simulations
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2014-01-01
We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, J.J. Jr.; Hyder, Z.
The Nguyen and Pinder method is one of four techniques commonly used for analysis of response data from slug tests. Limited field research has raised questions about the reliability of the parameter estimates obtained with this method. A theoretical evaluation of this technique reveals that errors were made in the derivation of the analytical solution upon which the technique is based. Simulation and field examples show that the errors result in parameter estimates that can differ from actual values by orders of magnitude. These findings indicate that the Nguyen and Pinder method should no longer be a tool in the repertoire of the field hydrogeologist. If data from a slug test performed in a partially penetrating well in a confined aquifer need to be analyzed, recent work has shown that the Hvorslev method is the best alternative among the commonly used techniques.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
Bailit, Howard L; Beazoglou, Tryfon J; DeVitto, Judy; McGowan, Taegen; Myne-Joslin, Veronica
2012-08-01
In many developed countries, the primary role of dental therapists is to care for children in school clinics. This article describes Federally Qualified Health Center (FQHC)-run, school-based dental programs in Connecticut and explores the theoretical financial impact of substituting dental therapists for dentists in these programs. In schools, dental hygienists screen children and provide preventive services, using portable equipment and temporary space. Children needing dentist services are referred to FQHC clinics or to FQHC-employed dentists who provide care in schools. The primary findings of this study are that school-based programs have considerable potential to reduce access disparities and the estimated reduction in per patient costs approaches 50 percent versus providing care in FQHC dental clinics. In terms of substituting dental therapists for dentists, the estimated additional financial savings was found to be about 5 percent. Nationally, FQHC-operated, school-based dental programs have the potential to increase Medicaid/CHIP utilization from the current 40 percent to 60 percent for a relatively modest increase in total expenditures.
Uncertainties in modeling low-energy neutrino-induced reactions on iron-group nuclei
NASA Astrophysics Data System (ADS)
Paar, N.; Suzuki, T.; Honma, M.; Marketin, T.; Vretenar, D.
2011-10-01
Charged-current neutrino-nucleus cross sections for 54,56Fe and 58,60Ni are calculated and compared using frameworks based on relativistic and Skyrme energy-density functionals and on the shell model. The current theoretical uncertainties in modeling neutrino-nucleus cross sections are assessed in relation to the predicted Gamow-Teller transition strength and available data, to the multipole decomposition of the cross sections, and to cross sections averaged over the Michel flux and a Fermi-Dirac distribution. By employing different microscopic approaches and models, the decay-at-rest (DAR) neutrino-56Fe cross section and its theoretical uncertainty are estimated to be ⟨σ⟩th = (258 ± 57) × 10−42 cm2, in very good agreement with the experimental value ⟨σ⟩exp = (256 ± 108 ± 43) × 10−42 cm2.
Use of borated polyethylene to improve low energy response of a prompt gamma based neutron dosimeter
NASA Astrophysics Data System (ADS)
Priyada, P.; Ashwini, U.; Sarkar, P. K.
2016-05-01
The feasibility of using a combined sample of borated polyethylene and normal polyethylene to estimate the neutron ambient dose equivalent from measured prompt gamma emissions is investigated theoretically, to demonstrate improvements in the low-energy neutron dose response compared to polyethylene alone. Monte Carlo simulations have been carried out using the FLUKA code to calculate the response of the boron, hydrogen, and carbon prompt gamma emissions to monoenergetic neutrons. The weighted least squares method is employed to arrive at the best linear combination of these responses that approximates the ICRP fluence-to-dose conversion coefficients well in the energy range of 10−8 MeV to 14 MeV. The configuration of the combined system is optimized through FLUKA simulations. The proposed method is validated theoretically against five different workplace neutron spectra with satisfactory outcome.
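A sketch of the weighted-least-squares step with synthetic inputs (the response curves and target coefficients below are placeholders; the real ones come from the FLUKA responses and ICRP conversion coefficients named above):

```python
import numpy as np

# Hypothetical per-energy responses of the boron, hydrogen, and carbon
# prompt-gamma lines (columns of R) and ICRP fluence-to-dose coefficients d.
energies = np.logspace(-8, np.log10(14), 30)        # 1e-8 MeV to 14 MeV
R = np.column_stack([np.exp(-energies / s) + 0.05   # placeholder shapes
                     for s in (0.01, 0.5, 5.0)])
d = 0.3 * R @ np.array([0.5, 1.2, 0.8])             # synthetic target

W = np.diag(1.0 / (d + 1e-6))                       # relative error weighting
a, *_ = np.linalg.lstsq(W @ R, W @ d, rcond=None)   # weighted least squares
print(a)   # weights for combining the three prompt-gamma responses
```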
Evolution equation for quantum coherence
Hu, Ming-Liang; Fan, Heng
2016-01-01
The estimation of the decoherence process of an open quantum system is of both theoretical significance and experimental appeal. Practically, the decoherence can be easily estimated if the coherence evolution satisfies some simple relations. We introduce a framework for studying the evolution equation of coherence. Based on this framework, we prove a simple factorization relation (FR) for the l1 norm of coherence, and identify the sets of quantum channels for which this FR holds. By using this FR, we further determine a condition on the transformation matrix of the quantum channel which can support permanent freezing of the l1 norm of coherence. We finally reveal the universality of this FR by showing that it holds for many other related coherence and quantum correlation measures. PMID:27382933
Delgado, J; Liao, J C
1992-01-01
The methodology previously developed for determining the Flux Control Coefficients [Delgado & Liao (1992) Biochem. J. 282, 919-927] is extended to the calculation of metabolite Concentration Control Coefficients. It is shown that the transient metabolite concentrations are related by a few algebraic equations, attributed to mass balance, stoichiometric constraints, quasi-equilibrium or quasi-steady states, and kinetic regulations. The coefficients in these relations can be estimated using linear regression, and can be used to calculate the Control Coefficients. The theoretical basis and two examples are discussed. Although the methodology is derived based on the linear approximation of enzyme kinetics, it yields reasonably good estimates of the Control Coefficients for systems with non-linear kinetics. PMID:1497632
Kekenes-Huskey, P. M.; Gillette, A.; Hake, J.; McCammon, J. A.
2012-01-01
We introduce a computational pipeline and suite of software tools for the approximation of diffusion-limited binding based on a recently developed theoretical framework. Our approach handles molecular geometries generated from high-resolution structural data and can account for active sites buried within the protein or behind gating mechanisms. Using tools from the FEniCS library and the APBS solver, we implement a numerical code for our method and study two Ca2+-binding proteins: Troponin C and the Sarcoplasmic Reticulum Ca2+ ATPase (SERCA). We find that a combination of diffusional encounter and internal ‘buried channel’ descriptions provide superior descriptions of association rates, improving estimates by orders of magnitude. PMID:23293662
Kekenes-Huskey, P M; Gillette, A; Hake, J; McCammon, J A
2012-10-31
We introduce a computational pipeline and suite of software tools for the approximation of diffusion-limited binding based on a recently developed theoretical framework. Our approach handles molecular geometries generated from high-resolution structural data and can account for active sites buried within the protein or behind gating mechanisms. Using tools from the FEniCS library and the APBS solver, we implement a numerical code for our method and study two Ca(2+)-binding proteins: Troponin C and the Sarcoplasmic Reticulum Ca(2+) ATPase (SERCA). We find that a combination of diffusional encounter and internal 'buried channel' descriptions provide superior descriptions of association rates, improving estimates by orders of magnitude.
NASA Astrophysics Data System (ADS)
Kekenes-Huskey, P. M.; Gillette, A.; Hake, J.; McCammon, J. A.
2012-01-01
We introduce a computational pipeline and suite of software tools for the approximation of diffusion-limited binding based on a recently developed theoretical framework. Our approach handles molecular geometries generated from high-resolution structural data and can account for active sites buried within the protein or behind gating mechanisms. Using tools from the FEniCS library and the APBS solver, we implement a numerical code for our method and study two Ca2+-binding proteins: troponin C and the sarcoplasmic reticulum Ca2+ ATPase. We find that a combination of diffusional encounter and internal ‘buried channel’ descriptions provides superior descriptions of association rates, improving estimates by orders of magnitude.
Rowe, Penny M; Neshyba, Steven P; Walden, Von P
2011-03-14
An analytical expression for the variance of the radiance measured by Fourier-transform infrared (FTIR) emission spectrometers exists only in the limit of low noise. Outside this limit, the variance needs to be calculated numerically. In addition, a criterion for low noise is needed to identify properly calibrated radiances and optimize the instrument bandwidth. In this work, the variance and the magnitude of a noise-dependent spectral bias are calculated as a function of the system responsivity (r) and the noise level in its estimate (σr). The criterion σr/r<0.3, applied to downwelling and upwelling FTIR emission spectra, shows that the instrument bandwidth is specified properly for one instrument but needs to be restricted for another.
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
Tinker, M. Timothy; Doak, Daniel F.; Estes, James A.; Hatfield, Brian B.; Staedler, Michelle M.; Gross, Arthur
2006-01-01
Reliable information on historical and current population dynamics is central to understanding patterns of growth and decline in animal populations. We developed a maximum likelihood-based analysis to estimate spatial and temporal trends in age/sex-specific survival rates for the threatened southern sea otter (Enhydra lutris nereis), using annual population censuses and the age structure of salvaged carcass collections. We evaluated a wide range of possible spatial and temporal effects and used model averaging to incorporate model uncertainty into the resulting estimates of key vital rates and their variances. We compared these results to current demographic parameters estimated in a telemetry-based study conducted between 2001 and 2004. These results show that survival has decreased substantially from the early 1990s to the present and is generally lowest in the north-central portion of the population's range. The greatest temporal decrease in survival was for adult females, and variation in the survival of this age/sex class is primarily responsible for regulating population growth and driving population trends. Our results can be used to focus future research on southern sea otters by highlighting the life history stages and mortality factors most relevant to conservation. More broadly, we have illustrated how the powerful and relatively straightforward tools of information-theoretic-based model fitting can be used to sort through and parameterize quite complex demographic modeling frameworks.
Deductive Derivation and Turing-Computerization of Semiparametric Efficient Estimation
Frangakis, Constantine E.; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan
2015-01-01
Summary Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF’s functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. PMID:26237182
Deductive derivation and Turing-computerization of semiparametric efficient estimation.
Frangakis, Constantine E; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan
2015-12-01
Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF's functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example.
NASA Astrophysics Data System (ADS)
Papanastasiou, Dimitrios K.; Beltrone, Allison; Marshall, Paul; Burkholder, James B.
2018-05-01
Hydrochlorofluorocarbons (HCFCs) are ozone depleting substances and potent greenhouse gases that are controlled under the Montreal Protocol. However, the majority of the 274 HCFCs included in Annex C of the protocol do not have reported global warming potentials (GWPs), which are used to guide the phaseout of HCFCs and the future phase-down of hydrofluorocarbons (HFCs). In this study, GWPs for all C1-C3 HCFCs included in Annex C are reported based on estimated atmospheric lifetimes and theoretical methods used to calculate infrared absorption spectra. Atmospheric lifetimes were estimated from a structure activity relationship (SAR) for OH radical reactivity and from estimated O(1D) reactivity and UV photolysis loss processes. The C1-C3 HCFCs display a wide range of lifetimes (0.3 to 62 years) and GWPs (5 to 5330, 100-year time horizon), depending on their molecular structure and the H-atom content of the individual HCFC. The results from this study provide estimated policy-relevant GWP metrics for the HCFCs included in the Montreal Protocol in the absence of experimentally derived metrics.
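A sketch of the lifetime-to-GWP step under the usual single-exponential decay assumption (the AR5-style constants and the HCFC-22 inputs are illustrative values from the literature, not this paper's results):

```python
import numpy as np

M_AIR = 28.97            # g/mol, mean molar mass of air
M_ATM = 5.135e18         # kg, mass of the atmosphere
AGWP_CO2_100 = 9.17e-14  # W m-2 yr kg-1, AR5 value for the 100-yr horizon

def gwp100(re_per_ppb, lifetime_yr, molar_mass):
    """GWP over a 100-year horizon from radiative efficiency (W m-2 ppb-1)
    and atmospheric lifetime, assuming single-exponential decay."""
    a_kg = re_per_ppb * (M_AIR / molar_mass) * 1e9 / M_ATM   # W m-2 kg-1
    agwp = a_kg * lifetime_yr * (1.0 - np.exp(-100.0 / lifetime_yr))
    return agwp / AGWP_CO2_100

# HCFC-22 (CHClF2): RE ~0.21 W m-2 ppb-1, lifetime ~11.9 yr, M = 86.47 g/mol
print(gwp100(0.21, 11.9, 86.47))   # ~1800, near assessed values
```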
NASA Astrophysics Data System (ADS)
Wang, Kaicun; Dickinson, Robert E.
2012-06-01
This review surveys the basic theories, observational methods, satellite algorithms, and land surface models for terrestrial evapotranspiration, E (or λE, i.e., latent heat flux), including a long-term variability and trends perspective. The basic theories used to estimate E are the Monin-Obukhov similarity theory (MOST), the Bowen ratio method, and the Penman-Monteith equation. The latter two theoretical expressions combine MOST with the surface energy balance. Estimates of E can differ substantially between these three approaches because of their use of different input data. Surface and satellite-based measurement systems can provide accurate estimates of the diurnal, daily, and annual variability of E, but their ability to estimate longer-term variability is largely unestablished. A reasonable estimate of E as a global mean can be obtained from a surface water budget method, but its regional distribution is still rather uncertain. Current land surface models provide widely different ratios of the transpiration by vegetation to total E. This source of uncertainty therefore limits the capability of models to provide the sensitivities of E to precipitation deficits and land cover change.
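Since the review centers on the Penman-Monteith combination equation, a minimal sketch of that formula may help fix ideas. All numerical inputs below are assumed, illustrative mid-day values, not data from the review.

```python
def penman_monteith(Rn, G, delta, gamma, rho_a, cp, vpd, ra, rs):
    """Latent heat flux (W m-2) from the Penman-Monteith combination equation."""
    return (delta * (Rn - G) + rho_a * cp * vpd / ra) / (delta + gamma * (1.0 + rs / ra))

# assumed mid-day values over a vegetated surface
lam_E = penman_monteith(
    Rn=500.0,     # net radiation, W m-2
    G=50.0,       # ground heat flux, W m-2
    delta=145.0,  # slope of the saturation vapour pressure curve, Pa K-1 (~20 C)
    gamma=66.0,   # psychrometric constant, Pa K-1
    rho_a=1.2,    # air density, kg m-3
    cp=1004.0,    # specific heat of air, J kg-1 K-1
    vpd=1000.0,   # vapour pressure deficit e_s - e_a, Pa
    ra=50.0,      # aerodynamic resistance, s m-1
    rs=70.0,      # bulk surface resistance, s m-1
)
print(f"latent heat flux ~ {lam_E:.0f} W m-2")  # ~290 W m-2 for these inputs
```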
Total cross section of furfural by electron impact: Experiment and theory.
Traoré Dubuis, A; Verkhovtsev, A; Ellis-Gibbings, L; Krupa, K; Blanco, F; Jones, D B; Brunger, M J; García, G
2017-08-07
We present experimental total cross sections for electron scattering from furfural in the energy range from 10 to 1000 eV, as measured using a double electrostatic analyzer gas cell electron transmission experiment. These results are compared to theoretical data for furfural, as well as to experimental and theoretical values for the structurally similar molecules furan and tetrahydrofuran. The measured total cross section is in agreement with the theoretical results obtained by means of the independent-atom model with screening corrected additivity rule including interference method. In the region of higher electron energies, from 500 eV to 10 keV, the total electron scattering cross section is also estimated using a semi-empirical model based on the number of electrons and dipole polarizabilities of the molecular targets. Together with the recently measured differential and integral cross sections, and the furfural energy-loss spectra, the present total cross section data nearly complete the data set that is required for numerical simulation of low-energy electron processes in furfural, covering the range of projectile energies from a few electron volts up to 10 keV.
NASA Astrophysics Data System (ADS)
Uchida, Yuji; Taguchi, Tsunemasa
2003-07-01
We have performed theoretical studies on the luminous characteristics of a white LED light source composed of multiple phosphors and a near-ultraviolet (UV) LED for general lighting. White LED sources for general lighting applications require high flux and high luminous efficacy of radiation (> 100 lm/W), in addition to a high color rendering index (Ra > 90) and variable color temperatures. Recently, we proposed a novel type of white LED based on multiple phosphors and a near-UV LED system in order to achieve high Ra (> 93). We describe the excellent luminescence properties of a white LED consisting of orange (O), yellow (Y), green (G) and blue (B) phosphor materials and a near-UV LED. The color spectral contributions of the individual phosphor-coated LEDs are theoretically analyzed using our multi-LED lighting theory; taking the individual radiation spectra into account, the maximum luminous efficacy is estimated to be approximately 300 lm/W with a high Ra of about 90. The illuminance distribution of the white LED is in fairly good agreement with the experimental data.
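A crude sketch of the underlying calculation: the luminous efficacy of radiation is the V(λ)-weighted fraction of the emitted spectrum times 683 lm/W. Both the Gaussian stand-in for the eye sensitivity curve V(λ) and the phosphor band parameters below are assumptions, so the printed number is only indicative.

```python
import numpy as np

lam = np.arange(380.0, 781.0)                   # wavelength grid, nm (1 nm steps)
V = np.exp(-0.5 * ((lam - 555.0) / 45.0) ** 2)  # crude Gaussian stand-in for V(lambda)

def band(center, width, weight):
    """Gaussian emission band standing in for one phosphor's spectrum."""
    return weight * np.exp(-0.5 * ((lam - center) / width) ** 2)

# assumed blue/green/yellow/orange phosphor bands pumped by a near-UV LED
S = band(450, 12, 0.8) + band(530, 25, 1.0) + band(575, 25, 0.9) + band(600, 30, 0.7)

# luminous efficacy of radiation: 683 lm/W times the V-weighted spectral fraction
LER = 683.0 * np.sum(V * S) / np.sum(S)
print(f"luminous efficacy of radiation ~ {LER:.0f} lm/W")
```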
Experimental and theoretical investigations of H2O-Ar
NASA Astrophysics Data System (ADS)
Vanfleteren, Thomas; Földes, Tomas; Herman, Michel; Liévin, Jacques; Loreau, Jérôme; Coudert, Laurent H.
2017-07-01
We have used continuous-wave cavity ring-down spectroscopy to record the spectrum of H2O-Ar in the 2OH excitation range of H2O. Twenty-four sub-bands have been observed. Their rotational structure (T_rot = 12 K) is analyzed and the lines are fitted separately for ortho and para species, together with microwave and far-infrared data from the literature, with unitless standard deviations σ = 0.98 and 1.31, respectively. Their vibrational analysis is supported by theoretical input based on an intramolecular potential energy surface obtained through ab initio calculations and computation of the rotational energy of sub-states of the complex with the water monomer in excited vibrational states up to the first hexad. For the ground and (010) vibrational states, the theoretical results agree well with experimental energies and rotational constants in the literature. For the excited vibrational states of the first hexad, they guided the assignment of the observed sub-bands. The upper-state vibrational predissociation lifetime is estimated to be 3 ns from the observed spectral linewidths.
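The lifetime estimate rests on the standard relation between a homogeneous (Lorentzian) linewidth and an excited-state lifetime; a one-line check under that assumption:

```python
import math

# Lorentzian linewidth-lifetime relation: delta_nu (FWHM) = 1 / (2 * pi * tau)
tau = 3e-9                                 # predissociation lifetime, s
delta_nu = 1.0 / (2.0 * math.pi * tau)
print(f"expected linewidth ~ {delta_nu / 1e6:.0f} MHz (FWHM)")  # ~53 MHz
```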
NASA Astrophysics Data System (ADS)
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
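A minimal sketch of the criterion on a toy autoregressive series, using k-nearest-neighbour successors in delay space plus a one-dimensional kernel density as a stand-in for the paper's nonparametric estimator; the kNN/KDE choices here are assumptions, not the authors' exact construction.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import gaussian_kde

def nlpl(x, p, k=20):
    """Negative log-predictive likelihood for embedding dimension p."""
    emb = np.column_stack([x[i:len(x) - p + i] for i in range(p)])  # delay vectors
    nxt = x[p:]                                  # one-step successors
    tree = cKDTree(emb)
    ll = 0.0
    for t in range(len(emb)):
        _, idx = tree.query(emb[t], k + 1)
        idx = idx[idx != t][:k]                  # neighbours, excluding the point itself
        # predictive density of the next value from the neighbours' successors
        ll += np.log(gaussian_kde(nxt[idx])(nxt[t])[0] + 1e-300)
    return -ll / len(emb)

rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, 2000):                         # noisy AR(2): true order is 2
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + 0.3 * rng.standard_normal()
# the NLPL should stop improving beyond p = 2
print({p: round(nlpl(x, p), 3) for p in (1, 2, 3, 4)})
```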
Measuring the effects of heat wave episodes on the human body's thermal balance
NASA Astrophysics Data System (ADS)
Katavoutas, George; Theoharatos, George; Flocas, Helena A.; Asimakopoulos, Dimosthenis N.
2009-03-01
During the peak of an extensive heat wave episode on 23-25 July 2007, simultaneous thermophysiological measurements were made on two non-acclimated healthy adults of different sex in a suburban area of Greater Athens, Greece. Based on experimental measurements of mean skin temperature and metabolic heat production, heat fluxes to and from the human body were calculated, and the biometeorological heat load (HL) index was determined according to the heat balance equation. Comparing experimental values with those derived from theoretical estimates revealed great heat stress for both individuals, especially the male, while the theoretical values underestimated heat stress. The study also revealed that thermophysiological factors, such as mean skin temperature and metabolic heat production, play an important role in determining heat flux patterns in the heat balance equation. The theoretical values of mean skin temperature derived from an empirical equation may not be appropriate for describing the changes that take place in a non-acclimated individual. Furthermore, the changes in metabolic heat production were significant even for standard activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoegele, W.; Loeschel, R.; Dobler, B.
2011-02-15
Purpose: In this work, a novel stochastic framework for patient positioning based on linac-mounted CB projections is introduced. Based on this formulation, the most probable shifts and rotations of the patient are estimated, incorporating interfractional deformations of patient anatomy and other uncertainties associated with patient setup. Methods: The target position is assumed to be defined by and is stochastically determined from positions of various features such as anatomical landmarks or markers in CB projections, i.e., radiographs acquired with a CB-CT system. The patient positioning problem of finding the target location from CB projections is posed as an inverse problem with prior knowledge and is solved using a Bayesian maximum a posteriori (MAP) approach. The prior knowledge is three-fold and includes the accuracy of an initial patient setup (such as in-room laser and skin marks), the plasticity of the body (relative shifts between target and features), and the feature detection error in CB projections (which may vary depending on specific detection algorithm and feature type). For this purpose, MAP estimators are derived and a procedure for using them in clinical practice is outlined. Furthermore, a rule of thumb is theoretically derived, relating basic parameters of the prior knowledge (initial setup accuracy, plasticity of the body, and number of features) and the parameters of CB data acquisition (number of projections and accuracy of feature detection) to the expected estimation accuracy. Results: MAP estimation can be applied to arbitrary features and detection algorithms. However, to experimentally demonstrate its applicability and to perform the validation of the algorithm, a water-equivalent, deformable phantom with features represented by six 1-mm chrome balls was utilized. These features were detected in the cone beam projections (XVI, Elekta Synergy) by a local threshold method for demonstration purposes only. The accuracy of estimation (strongly varying for different plasticity parameters of the body) agreed with the rule-of-thumb formula. Moreover, based on this rule-of-thumb formula, about 20 projections for 6 detectable features seem to be sufficient for a target estimation accuracy of 0.2 cm, even for relatively large feature detection errors with standard deviation of 0.5 cm and spatial displacements of the features with standard deviation of 0.5 cm. Conclusions: The authors have introduced a general MAP-based patient setup algorithm accounting for different sources of uncertainties, which are utilized as the prior knowledge in a transparent way. This new framework can be further utilized for different clinical sites, as well as theoretical developments in the field of patient positioning for radiotherapy.
Theoretical nuclear database for high-energy, heavy-ion (HZE) transport
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Cucinotta, F. A.; Wilson, J. W.
1995-01-01
Theoretical methods for estimating high-energy, heavy-ion (HZE) particle absorption and fragmentation cross-sections are described and compared with available experimental data. Differences between theory and experiment range from several percent for absorption cross-sections up to about 25%-50% for fragmentation cross-sections.
NASA Technical Reports Server (NTRS)
Wood, R. M.; Miller, D. S.; Brentner, K. S.
1983-01-01
A theoretical and experimental investigation has been conducted to evaluate the fundamental supersonic aerodynamic characteristics of a generic twin-body model at a Mach number of 2.70. Results show that existing aerodynamic prediction methods are adequate for making preliminary aerodynamic estimates.
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
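A typical usage pattern, following the toolbox's documented interface; the CSV file name and the 'stim' condition column are hypothetical.

```python
import hddm

# response-time data with columns: rt, response, subj_idx, stim (hypothetical file)
data = hddm.load_csv('mydata.csv')

model = hddm.HDDM(data, depends_on={'v': 'stim'})  # drift rate varies by stimulus
model.find_starting_values()    # optimize a starting point for MCMC
model.sample(2000, burn=200)    # draw posterior samples
model.print_stats()             # posterior summaries for all parameters
```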
Convex Banding of the Covariance Matrix
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189
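For orientation, a sketch of the classical hard-banding estimator that convex banding refines; this is not the paper's convex program, only the banded structure it exploits.

```python
import numpy as np

def banded_covariance(X, bandwidth):
    """Classical hard-banding: zero sample-covariance entries with |i - j| > bandwidth."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= bandwidth
    return S * mask

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))   # n = 200 samples of p = 30 ordered variables
Sigma_hat = banded_covariance(X, bandwidth=3)
print(Sigma_hat[:4, :4].round(2))
```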
Gravitational wave searches using the DSN (Deep Space Network)
NASA Technical Reports Server (NTRS)
Nelson, S. J.; Armstrong, J. W.
1988-01-01
The Deep Space Network Doppler spacecraft link is currently the only method available for broadband gravitational wave searches in the 0.01 to 0.001 Hz frequency range. The DSN's role in the worldwide search for gravitational waves is described by first summarizing from the literature current theoretical estimates of gravitational wave strengths and time scales from various astrophysical sources. Current and future detection schemes for ground based and space based detectors are then discussed. Past, present, and future planned or proposed gravitational wave experiments using DSN Doppler tracking are described. Lastly, some major technical challenges to improve gravitational wave sensitivities using the DSN are discussed.
NASA Technical Reports Server (NTRS)
Dean, Edwin B.; Unal, Resit
1991-01-01
Designing for cost is a state of mind. Of course, a lot of technical knowledge is required and the use of appropriate tools will improve the process. Unfortunately, the extensive use of weight-based cost estimating relationships has generated a perception in the aerospace community that the primary way to reduce cost is to reduce weight. Wrong! Based upon an approximation of an industry-accepted formula, the PRICE H (tm) production equation, Dean demonstrated theoretically that the optimal trajectory for cost reduction is predominantly in the direction of system complexity reduction, not system weight reduction. Thus the phrase "keep it simple" is a primary state of mind required for reducing cost throughout the design process.
Density functional theory and phytochemical study of 8-hydroxyisodiospyrin
NASA Astrophysics Data System (ADS)
Ullah, Zakir; Ata-ur-Rahman; Fazl-i-Sattar; Rauf, Abdur; Yaseen, Muhammad; Hassan, Waseem; Tariq, Muhammad; Ayub, Khurshid; Tahir, Asif Ali; Ullah, Habib
2015-09-01
Comprehensive theoretical and experimental studies of a natural product, 8-hydroxyisodiospyrin (HDO), have been carried out. Based on the correlation of experimental and theoretical data, an appropriate computational model was developed for obtaining the electronic, spectroscopic, and thermodynamic parameters of HDO. First, the exact structure of HDO is confirmed from the close correlation of theory and experiment, prior to determination of its electroactive nature. Hybrid density functional theory (DFT) is employed for all theoretical simulations. The experimental and predicted IR and UV-vis spectra [B3LYP/6-31+G(d,p) level of theory] show excellent correlation. Inter-molecular non-covalent interaction of HDO with different gases such as NH3, CO2, CO, and H2O is investigated through the geometrical counterpoise (gCP) correction, i.e., the B3LYP-gCP-D3/6-31G* method. Furthermore, the inter-molecular interaction is also supported by geometrical parameters, electronic properties, thermodynamic parameters, and charge analysis. All these characterizations corroborate each other and confirm the electroactive nature (non-covalent interaction ability) of HDO for the studied gases. Electronic properties such as the ionization potential (IP), electron affinity (EA), electrostatic potential (ESP), density of states (DOS), HOMO, LUMO, and band gap of HDO have been estimated theoretically for the first time.
Theoretical Effects of Substituting Butter with Margarine on Risk of Cardiovascular Disease.
Liu, Qing; Rossouw, Jacques E; Roberts, Mary B; Liu, Simin; Johnson, Karen C; Shikany, James M; Manson, JoAnn E; Tinker, Lesley F; Eaton, Charles B
2017-01-01
Several recent articles have called into question the deleterious effects of high animal fat diets due to mixed results from epidemiologic studies and the lack of clinical trial evidence in meta-analyses of dietary intervention trials. We were interested in examining the theoretical effects of substituting plant-based fats from different types of margarine for animal-based fat from butter on the risk of atherosclerosis-related cardiovascular disease (CVD). We prospectively studied 71,410 women, aged 50-79 years, and evaluated their risk for clinical myocardial infarction (MI), total coronary heart disease (CHD), ischemic stroke, and atherosclerosis-related CVD with an average of 13.2 years of follow-up. Butter and margarine intakes were obtained at baseline and year 3 by means of a validated food frequency questionnaire. Cox proportional hazards regression using a cumulative average diet method was used to estimate the theoretical effect of substituting 1 teaspoon/day of three types of margarine for the same amount of butter. Substituting butter or stick margarine with tub margarine was associated with lower risk of MI (HRs = 0.95 and 0.91). Subgroup analyses, which evaluated these substitutions among participants with a single source of spreadable fat, showed stronger associations for MI (HRs = 0.92 and 0.87). Outcomes of total CHD, ischemic stroke, and atherosclerosis-related CVD showed wide confidence intervals but the same trends as the MI results. This theoretical dietary substitution analysis suggests that substituting butter and stick margarine with tub margarine when spreadable fats are eaten may be associated with reduced risk of myocardial infarction.
Survival estimation and the effects of dependency among animals
Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.
1995-01-01
Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
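A small Monte Carlo sketch of the effect: pairs with correlated survival outcomes inflate the empirical variance of the survival estimate relative to the binomial (independence) variance. The survival rate and within-pair correlation below are assumed for illustration, not taken from the brant data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pairs, s, rho, n_rep = 168, 0.85, 0.5, 5000  # pairs, assumed survival, within-pair corr.

def pair_outcomes(p, rho, size, rng):
    """Pairs of Bernoulli(p) survival outcomes with correlation rho
    (with probability rho the second member copies the first)."""
    a = rng.random(size) < p
    b = rng.random(size) < p
    shared = rng.random(size) < rho
    return a, np.where(shared, a, b)

est = np.empty(n_rep)
for i in range(n_rep):
    x, y = pair_outcomes(s, rho, n_pairs, rng)
    est[i] = (x.sum() + y.sum()) / (2 * n_pairs)

emp_var = est.var()
indep_var = s * (1 - s) / (2 * n_pairs)   # theoretical variance under independence
print(f"variance inflation ~ {emp_var / indep_var:.2f} (expected ~ 1 + rho)")
```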
Kappesser, Judith; de C Williams, Amanda C
2008-08-01
Observer underestimation of others' pain was studied using a concept from evolutionary psychology: a cheater detection mechanism from social contract theory, applied to relatives and friends of chronic pain patients. 127 participants estimated characters' pain intensity and fairness of behaviour after reading four vignettes describing characters suffering from pain. Four cues were systematically varied: the character continuing or stopping liked tasks; continuing or stopping disliked tasks; availability of medical evidence; and pain intensity as rated by characters. Results revealed that pain intensity and the two behavioural variables had an effect on pain estimates: high pain self-reports and stopping all tasks led to high pain estimates; pain was estimated to be lowest when characters stopped disliked but continued with liked tasks. This combination was also rated least fair. Results support the use of social contract theory as a theoretical framework to explore pain judgements.
Tuo, Rui; Jeff Wu, C. F.
2016-07-19
Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are unavailable in physical experiments. Here, an approach is presented to estimate them using data from both physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
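An idealized sketch of the idea behind L2 calibration: choose the calibration parameter to minimize the squared discrepancy between physical observations and simulator output. The toy simulator and data are invented, and the actual method involves a nonparametric estimate of the true process rather than this direct least-squares shortcut.

```python
import numpy as np
from scipy.optimize import minimize

def simulator(x, theta):
    """Toy deterministic computer model (invented for illustration)."""
    return np.exp(-theta * x) * np.sin(x)

rng = np.random.default_rng(1)
x_phys = np.linspace(0.1, 3.0, 40)
y_phys = simulator(x_phys, 0.7) + 0.05 * rng.standard_normal(x_phys.size)

# choose theta minimizing the L2 discrepancy between physical data and simulator
loss = lambda th: np.mean((y_phys - simulator(x_phys, th[0])) ** 2)
theta_hat = minimize(loss, x0=[1.0]).x[0]
print(f"calibrated theta ~ {theta_hat:.3f}")  # close to the true 0.7
```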
Sparse PCA with Oracle Property
Gu, Quanquan; Wang, Zhaoran; Liu, Han
2014-01-01
In this paper, we study the estimation of the k-dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains a s/n statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that, another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets. PMID:25684971
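For context, the classical semidefinite relaxation of sparse PCA (the k = 1 case) can be written in a few lines with cvxpy. Note this is the standard relaxation the paper improves upon, not the authors' estimator family with its novel regularizations; the toy data are invented.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, n, lam = 20, 100, 0.1
v = np.zeros(d); v[:4] = 0.5                    # sparse leading direction (assumed)
data = rng.standard_normal((n, d)) + rng.standard_normal((n, 1)) * v
Sigma = np.cov(data, rowvar=False)

# maximize <Sigma, X> - lam * ||X||_1  subject to  X PSD, trace(X) = 1
X = cp.Variable((d, d), PSD=True)
objective = cp.Maximize(cp.trace(Sigma @ X) - lam * cp.sum(cp.abs(X)))
cp.Problem(objective, [cp.trace(X) == 1]).solve()

# the leading eigenvector of the solution approximates the sparse direction
w = np.linalg.eigh(X.value)[1][:, -1]
print(np.round(w, 2))
```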
NASA Astrophysics Data System (ADS)
Pandey, Manoj Kumar; Ramachandran, Ramesh
2010-03-01
The application of solid-state NMR methodology for bio-molecular structure determination requires the measurement of constraints in the form of 13C-13C and 13C-15N distances, torsion angles and, in some cases, correlation of the anisotropic interactions. Since the availability of structurally important constraints in the solid state is limited due to lack of sufficient spectral resolution, the accuracy of the measured constraints becomes vital in studies relating the three-dimensional structure of proteins to their biological functions. Consequently, the theoretical methods employed to quantify the experimental data become important. To accentuate this aspect, we re-examine analytical two-spin models currently employed in the estimation of 13C-13C distances based on the rotational resonance (R2) phenomenon. Although the error bars for the estimated distances tend to be in the range 0.5-1.0 Å, R2 experiments are routinely employed in a variety of systems ranging from simple peptides to more complex amyloidogenic proteins. In this article we address this aspect by highlighting the systematic errors introduced by analytical models employing phenomenological damping terms to describe multi-spin effects. Specifically, the spin dynamics in R2 experiments is described using Floquet theory employing two different operator formalisms. The systematic errors introduced by the phenomenological damping terms and their limitations are elucidated in two analytical models and analysed by comparing the results with rigorous numerical simulations.
Bradley, Beverly D.; Howie, Stephen R. C.; Chan, Timothy C. Y.; Cheng, Yu-Ling
2014-01-01
Background Planning for the reliable and cost-effective supply of a health service commodity such as medical oxygen requires an understanding of the dynamic need or ‘demand’ for the commodity over time. In developing country health systems, however, collecting longitudinal clinical data for forecasting purposes is very difficult. Furthermore, approaches to estimating demand for supplies based on annual averages can underestimate demand some of the time by missing temporal variability. Methods A discrete event simulation model was developed to estimate variable demand for a health service commodity using the important example of medical oxygen for childhood pneumonia. The model is based on five key factors affecting oxygen demand: annual pneumonia admission rate, hypoxaemia prevalence, degree of seasonality, treatment duration, and oxygen flow rate. These parameters were varied over a wide range of values to generate simulation results for different settings. Total oxygen volume, peak patient load, and hours spent above average-based demand estimates were computed for both low and high seasons. Findings Oxygen demand estimates based on annual average values of demand factors can often severely underestimate actual demand. For scenarios with high hypoxaemia prevalence and degree of seasonality, demand can exceed average levels up to 68% of the time. Even for typical scenarios, demand may exceed three times the average level for several hours per day. Peak patient load is sensitive to hypoxaemia prevalence, whereas time spent at such peak loads is strongly influenced by degree of seasonality. Conclusion A theoretical study is presented whereby a simulation approach to estimating oxygen demand is used to better capture temporal variability compared to standard average-based approaches. This approach provides better grounds for health service planning, including decision-making around technologies for oxygen delivery. Beyond oxygen, this approach is widely applicable to other areas of resource and technology planning in developing country health systems. PMID:24587089
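A stripped-down sketch of such a simulation around the five demand factors; all parameter values are assumed for illustration and are not the paper's calibrated inputs.

```python
import numpy as np

rng = np.random.default_rng(7)

# assumed planning parameters (illustrative only)
ADMIT_PER_YEAR = 400   # pneumonia admissions per year
HYPOX_PREV = 0.15      # fraction of admissions with hypoxaemia
SEASONALITY = 0.5      # relative amplitude of the seasonal cycle
STAY_DAYS = 4          # days on oxygen per treated patient
FLOW_LPM = 1.0         # prescribed flow rate, litres per minute

days = np.arange(365)
rate = (ADMIT_PER_YEAR / 365.0) * HYPOX_PREV * (1 + SEASONALITY * np.sin(2 * np.pi * days / 365))
admits = rng.poisson(rate)                # daily hypoxaemic admissions

load = np.zeros(365 + STAY_DAYS)          # concurrent patients on oxygen each day
for d, k in zip(days, admits):
    load[d:d + STAY_DAYS] += k
load = load[:365]

demand = load * FLOW_LPM
print(f"peak load: {load.max():.0f} patients; "
      f"fraction of days above average demand: {(demand > demand.mean()).mean():.0%}")
```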
Estimating trends in atmospheric water vapor and temperature time series over Germany
NASA Astrophysics Data System (ADS)
Alshawaf, Fadwa; Balidakis, Kyriakos; Dick, Galina; Heise, Stefan; Wickert, Jens
2017-08-01
Ground-based GNSS (Global Navigation Satellite System) has efficiently been used since the 1990s as a meteorological observing system. Recently scientists have used GNSS time series of precipitable water vapor (PWV) for climate research. In this work, we compare the temporal trends estimated from GNSS time series with those estimated from European Center for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-Interim) data and meteorological measurements. We aim to evaluate climate evolution in Germany by monitoring different atmospheric variables such as temperature and PWV. PWV time series were obtained by three methods: (1) estimated from ground-based GNSS observations using the method of precise point positioning, (2) inferred from ERA-Interim reanalysis data, and (3) determined based on daily in situ measurements of temperature and relative humidity. The other relevant atmospheric parameters are available from surface measurements of meteorological stations or derived from ERA-Interim. The trends are estimated using two methods: the first applies least squares to deseasonalized time series and the second uses the Theil-Sen estimator. The trends estimated at 113 GNSS sites, with 10 to 19 years temporal coverage, vary between -1.5 and 2.3 mm decade-1 with standard deviations below 0.25 mm decade-1. These results were validated by estimating the trends from ERA-Interim data over the same time windows, which show similar values. These values of the trend depend on the length and the variations of the time series. Therefore, to give a mean value of the PWV trend over Germany, we estimated the trends using ERA-Interim spanning from 1991 to 2016 (26 years) at 227 synoptic stations over Germany. The ERA-Interim data show positive PWV trends of 0.33 ± 0.06 mm decade-1 with standard errors below 0.03 mm decade-1. The increment in PWV varies between 4.5 and 6.5 % per degree Celsius rise in temperature, which is comparable to the theoretical rate of the Clausius-Clapeyron equation.
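The two trend estimators are easy to sketch on a synthetic PWV-like monthly series; the trend, seasonal amplitude, and noise level below are assumed, not the study's values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
t = np.arange(0, 15, 1 / 12.0)                 # 15 years of monthly values
season = 3.0 * np.sin(2 * np.pi * t)           # annual cycle (assumed amplitude)
pwv = 15 + 0.05 * t + season + rng.standard_normal(t.size)

# method 1: ordinary least squares on the deseasonalized series
month = np.arange(t.size) % 12
monthly_means = np.array([pwv[month == m].mean() for m in range(12)])
deseason = pwv - monthly_means[month]
ols_slope = np.polyfit(t, deseason, 1)[0]

# method 2: robust Theil-Sen slope on the raw series
ts_slope, _, lo, hi = stats.theilslopes(pwv, t)

print(f"OLS: {10 * ols_slope:.2f} mm/decade; Theil-Sen: {10 * ts_slope:.2f} mm/decade")
```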
Dispersion curve estimation via a spatial covariance method with ultrasonic wavefield imaging.
Chong, See Yenn; Todd, Michael D
2018-05-01
Numerous Lamb wave dispersion curve estimation methods have been developed to support damage detection and localization strategies in non-destructive evaluation/structural health monitoring (NDE/SHM) applications. In this paper, the covariance matrix is used to extract features from an ultrasonic wavefield imaging (UWI) scan in order to estimate the phase and group velocities of S0 and A0 modes. A laser ultrasonic interrogation method based on a Q-switched laser scanning system was used to interrogate full-field ultrasonic signals in a 2-mm aluminum plate at five different frequencies. These full-field ultrasonic signals were processed in three-dimensional space-time domain. Then, the time-dependent covariance matrices of the UWI were obtained based on the vector variables in Cartesian and polar coordinate spaces for all time samples. A spatial covariance map was constructed to show spatial correlations within the full wavefield. It was observed that the variances may be used as a feature for S0 and A0 mode properties. The phase velocity and the group velocity were found using a variance map and an enveloped variance map, respectively, at five different frequencies. This facilitated the estimation of Lamb wave dispersion curves. The estimated dispersion curves of the S0 and A0 modes showed good agreement with the theoretical dispersion curves. Copyright © 2018 Elsevier B.V. All rights reserved.
The Problems of Multiple Feedback Estimation.
ERIC Educational Resources Information Center
Bulcock, Jeffrey W.
The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that what is needed is use of a different estimating criterion than the least squares criterion. Theoretically the variance normalization criterion has…
ERIC Educational Resources Information Center
Bulcock, J. W.; And Others
Advantages of normalization regression estimation over ridge regression estimation are demonstrated by reference to Bloom's model of school learning. Theoretical concern centered on the structure of scholastic achievement at grade 10 in Canadian high schools. Data on 886 students were randomly sampled from the Carnegie Human Resources Data Bank.…
Economic impacts of hurricanes on forest owners
Jeffrey P. Prestemon; Thomas P. Holmes
2010-01-01
We present a conceptual model of the economic impacts of hurricanes on timber producers and consumers, offer a framework indicating how welfare impacts can be estimated using econometric estimates of timber price dynamics, and illustrate the advantages of using a welfare theoretic model, which includes (1) welfare estimates that are consistent with neo-classical...
What is the danger of the anomaly zone for empirical phylogenetics?
Huang, Huateng; Knowles, L Lacey
2009-10-01
The increasing number of observations of gene trees with discordant topologies in phylogenetic studies has raised awareness about the problems of incongruence between species trees and gene trees. Moreover, theoretical treatments focusing on the impact of coalescent variance on phylogenetic study have also identified situations where the most probable gene trees are ones that do not match the underlying species tree (i.e., anomalous gene trees [AGTs]). However, although the theoretical proof of the existence of AGTs is alarming, the actual risk that AGTs pose to empirical phylogenetic study is far from clear. Establishing the conditions (i.e., the branch lengths in a species tree) for which AGTs are possible does not address the critical issue of how prevalent they might be. Furthermore, theoretical characterization of the species trees for which AGTs may pose a problem (i.e., the anomaly zone, or the species histories for which AGTs are theoretically possible) is based on consideration of just one source of variance that contributes to species tree and gene tree discord: gene lineage coalescence. Yet empirical data contain another important stochastic component: mutational variance. Estimated gene trees will differ from the underlying gene trees (i.e., the actual genealogy) because of the random process of mutation. Here, we take a simulation approach to investigate the prevalence of AGTs among estimated gene trees, thereby characterizing the boundaries of the anomaly zone taking into account both coalescent and mutational variances. We also determine the frequency of realized AGTs, which is critical to putting the theoretical work on AGTs into a realistic biological context. Two salient results emerge from this investigation. First, our results show that mutational variance can indeed expand the parameter space (i.e., the relative branch lengths in a species tree) where AGTs might be observed in empirical data. By exploring the underlying cause for the expanded anomaly zone, we identify aspects of empirical data relevant to avoiding the problems that AGTs pose for species tree inference from multilocus data. Second, for the empirical species histories where AGTs are possible, unresolved trees, not AGTs, predominate the pool of estimated gene trees. This result suggests that the risk of AGTs, while they exist in theory, may rarely be realized in practice. By considering the biological realities of both mutational and coalescent variances, the study has refined, and redefined, what the actual challenges are for empirical phylogenetic study of recently diverged taxa that have speciated rapidly: AGTs themselves are unlikely to pose a significant danger to empirical phylogenetic study.
Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Michael
Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
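As a reference point for the class of models discussed, a minimal Gaussian-process (kriging) predictor with a squared-exponential covariance; this generic sketch is not tied to the project's specific covariance families or gridded-data algorithms.

```python
import numpy as np

def sqexp(a, b, ell=1.0, sig2=1.0):
    """Squared-exponential covariance, a standard stationary GP kernel."""
    return sig2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 30))
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)
xs = np.linspace(0, 10, 200)

K = sqexp(x, x) + 0.01 * np.eye(x.size)              # observation noise on the diagonal
Ks = sqexp(xs, x)
mean = Ks @ np.linalg.solve(K, y)                    # posterior (kriging) mean
cov = sqexp(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)  # posterior covariance
print(mean[:5].round(3))
```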
Observations of the Geometry of Horizon-Based Optical Navigation
NASA Technical Reports Server (NTRS)
Christian, John; Robinson, Shane
2016-01-01
NASA's Orion Project has sparked a renewed interest in horizon-based optical navigation (OPNAV) techniques for spacecraft in the Earth-Moon system. Some approaches have begun to explore the geometry of horizon-based OPNAV and exploit the fact that it is a conic section problem. Therefore, the present paper focuses more deeply on understanding and leveraging the various geometric interpretations of horizon-based OPNAV. These results provide valuable insight into the fundamental workings of OPNAV solution methods, their convergence properties, and associated estimate covariance. Most importantly, the geometry and transformations uncovered in this paper lead to a simple and non-iterative solution to the generic horizon-based OPNAV problem. This represents a significant theoretical advancement over existing methods. Thus, we find that a clear understanding of geometric relationships is central to the prudent design, use, and operation of horizon-based OPNAV techniques.
Ro, Kyoung S; Szogi, Ariel A; Moore, Philip A
2018-05-12
In-house windrowing between flocks is an emerging sanitary management practice to partially disinfect the built-up litter in broiler houses. However, this practice may also increase ammonia (NH3) emission from the litter due to the increase in litter temperature. The objectives of this study were to develop mathematical models to estimate NH3 emission rates from broiler houses practicing in-house windrowing between flocks. Equations to estimate the mass-transfer areas of windrowed litter of different shapes (triangular, rectangular, and semi-cylindrical prisms) were developed. Using these equations, the windrow heights yielding the smallest mass-transfer area were estimated. A smaller mass-transfer area is preferred, as it reduces both emission rates and heat loss. The heights yielding the minimum mass-transfer area were 0.8 and 0.5 m for triangular and rectangular windrows, respectively. Only one height (0.6 m) was theoretically possible for semi-cylindrical windrows because the base and the height are not independent. The mass-transfer areas were integrated with published process-based mathematical models to estimate total-house NH3 emission rates during in-house windrowing of poultry litter. The NH3 emission rate change calculated from the integrated model compared well with the observed values, except for the very high initial NH3 emission rate caused by mechanically disturbing the litter to form the windrows. This approach can be used to conveniently estimate broiler house NH3 emission rates during in-house windrowing between flocks by simply measuring litter temperatures.
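The geometry behind the optimal heights reduces to minimizing exposed surface per unit windrow length at a fixed litter cross-sectional area. A sketch under an assumed cross-section of 0.5 m² (the study's actual constraint value is not stated here, so the printed heights are only indicative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

A = 0.5  # assumed litter cross-section area per metre of windrow, m^2

# exposed (mass-transfer) area per unit length as a function of windrow height h:
tri = lambda h: 2 * np.sqrt((A / h) ** 2 + h ** 2)  # triangle: two slant faces, base = 2A/h
rect = lambda h: A / h + 2 * h                      # rectangle: top (A/h) + two vertical sides

h_tri = minimize_scalar(tri, bounds=(0.05, 2.0), method='bounded').x
h_rect = minimize_scalar(rect, bounds=(0.05, 2.0), method='bounded').x
r_semi = np.sqrt(2 * A / np.pi)  # semi-cylinder: height = radius, fixed by A alone

print(f"optimal heights: triangular {h_tri:.2f} m, rectangular {h_rect:.2f} m; "
      f"semi-cylindrical fixed at {r_semi:.2f} m")
```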
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
The aim was to compare three predictive models based on logistic regression for estimating adjusted likelihood ratios that allow for interdependency between diagnostic variables (tests). This study comprised a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
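A minimal illustration of the general logistic-regression route to adjusted posttest probabilities when test errors are correlated; the synthetic data and test accuracies are invented, and this is not Albert's exact offset formulation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
disease = rng.random(n) < 0.2
# two binary tests whose errors are correlated: half the time test 2 copies test 1
t1 = np.where(disease, rng.random(n) < 0.8, rng.random(n) < 0.1)
t2_indep = np.where(disease, rng.random(n) < 0.8, rng.random(n) < 0.1)
t2 = np.where(rng.random(n) < 0.5, t1, t2_indep)

X = sm.add_constant(np.column_stack([t1, t2]).astype(float))
fit = sm.Logit(disease.astype(float), X).fit(disp=0)

# adjusted posttest probability for a patient positive on both tests;
# the fitted coefficients absorb the tests' interdependence
print(fit.predict([[1.0, 1.0, 1.0]]))
```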
Adaptive State Predictor Based Human Operator Modeling on Longitudinal and Lateral Control
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Gregory, Irene M.; Hempley, Lucas E.
2015-01-01
Control-theoretic modeling of the human operator's dynamic behavior in manual control tasks has a long and rich history. In the last two decades, there has been a renewed interest in modeling the human operator. There has also been significant work on techniques used to identify the pilot model of a given structure. The purpose of this research is to attempt to go beyond pilot identification based on collected experimental data and to develop a predictor of pilot behavior. An experiment was conducted to categorize the interactions of the pilot with an adaptive controller compensating during control surface failures. A general linear-in-parameters model structure is used to represent the pilot. Three different estimation methods are explored. A gradient descent estimator (GDE), a least squares estimator with exponential forgetting (LSEEF), and a least squares estimator with bounded-gain forgetting (LSEBGF) used the experimental data to predict pilot stick input. Previous results have found that the GDE and LSEEF methods are fairly accurate in predicting longitudinal stick input from commanded pitch. This paper discusses the accuracy of each of the three methods - GDE, LSEEF, and LSEBGF - in predicting both pilot longitudinal and lateral stick input from the flight director's commanded pitch and bank attitudes.
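Of the three estimators, least squares with exponential forgetting is the most compact to sketch. The toy regressor structure below (current and lagged commanded pitch plus a bias) is an assumption for illustration, not the study's exact pilot model.

```python
import numpy as np

def rls_forgetting(phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with exponential forgetting for a
    linear-in-parameters model y_t = phi_t . theta + noise."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for t in range(phi.shape[0]):
        x = phi[t]
        k = P @ x / (lam + x @ P @ x)        # gain
        theta = theta + k * (y[t] - x @ theta)
        P = (P - np.outer(k, x @ P)) / lam   # covariance update with forgetting
    return theta

# toy identification: stick input as a linear function of commanded pitch history
rng = np.random.default_rng(2)
cmd = rng.standard_normal(500)
phi = np.column_stack([cmd, np.roll(cmd, 1), np.ones(500)])
y = phi @ np.array([0.6, -0.2, 0.1]) + 0.05 * rng.standard_normal(500)
print(rls_forgetting(phi, y))  # ~ [0.6, -0.2, 0.1]
```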
NASA Astrophysics Data System (ADS)
Jin, Wei; Zhang, Chongfu; Yuan, Weicheng
2016-02-01
We propose a physically enhanced secure scheme for direct detection-orthogonal frequency division multiplexing-passive optical network (DD-OFDM-PON) and long-reach coherent detection-orthogonal frequency division multiplexing-passive optical network (LRCO-OFDM-PON), by employing noise-based encryption and channel/phase estimation. The noise data generated by chaos mapping are used to substitute training sequences in the preamble to realize channel estimation and frame synchronization, and are also embedded on a variable number of key-selected, randomly spaced pilot subcarriers to implement phase estimation. Consequently, the information used for signal recovery is totally hidden as unpredictable noise information in OFDM frames to mask useful information and to prevent illegal users from correctly realizing OFDM demodulation, thereby enhancing resistance to attackers. The levels of illegal-decryption complexity and implementation complexity are theoretically discussed. Through extensive simulations, the performances of the proposed channel/phase estimation and the security introduced by encrypted pilot carriers have been investigated in both DD-OFDM and LRCO-OFDM systems. In addition, in the proposed secure DD-OFDM/LRCO-OFDM PON models, both legal and illegal receiving scenarios have been considered. These results show that, by utilizing the proposed scheme, the resistance to attackers can be significantly enhanced in DD-OFDM-PON and LRCO-OFDM-PON systems without performance degradation.
Scaling in Free-Swimming Fish and Implications for Measuring Size-at-Time in the Wild
Broell, Franziska; Taggart, Christopher T.
2015-01-01
This study was motivated by the need to measure size-at-age, and thus growth rate, in fish in the wild. We postulated that this could be achieved using accelerometer tags based first on early isometric scaling models that hypothesize that similar animals should move at the same speed with a stroke frequency that scales with length-1, and second on observations that the speed of primarily air-breathing free-swimming animals, presumably swimming ‘efficiently’, is independent of size, confirming that stroke frequency scales as length-1. However, such scaling relations between size and swimming parameters for fish remain mostly theoretical. Based on free-swimming saithe and sturgeon tagged with accelerometers, we introduce a species-specific scaling relationship between dominant tail beat frequency (TBF) and fork length. Dominant TBF was proportional to length-1 (r2 = 0.73, n = 40), and estimated swimming speed within species was independent of length. Similar scaling relations accrued in relation to body mass-0.29. We demonstrate that the dominant TBF can be used to estimate size-at-time and that accelerometer tags with onboard processing may be able to provide size-at-time estimates among free-swimming fish and thus the estimation of growth rate (change in size-at-time) in the wild. PMID:26673777
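The size-at-time idea amounts to fitting the species-specific power law on log-log axes and inverting it; a sketch with invented TBF-length pairs (not the study's saithe or sturgeon data):

```python
import numpy as np

rng = np.random.default_rng(4)
# invented tail-beat frequency (Hz) vs fork length (m) pairs, roughly TBF ~ 1.8 / L
L = np.array([0.35, 0.45, 0.60, 0.80, 1.10])
tbf = (1.8 / L) * np.exp(0.05 * rng.standard_normal(L.size))

a, logc = np.polyfit(np.log(L), np.log(tbf), 1)  # fit TBF = c * L^a; isometry: a ~ -1
print(f"fitted exponent a = {a:.2f}")

tbf_obs = 3.0                                    # observed dominant TBF of a tagged fish
L_est = (tbf_obs / np.exp(logc)) ** (1.0 / a)    # invert to get size-at-time
print(f"estimated fork length ~ {L_est:.2f} m")
```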