Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi
2015-01-01
We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639
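The scaling step described above, converting a relative intra-neurite volume fraction into an absolute myelin density via prior knowledge of tissue composition, can be sketched as a one-line calculation. The 0.5 prior myelin fraction below is an illustrative assumption, not the paper's calibrated value:

```python
def myelin_density(v_neurite, myelin_fraction=0.5):
    """Scale a relative NODDI intra-neurite volume fraction into an absolute
    myelin density, using an assumed prior fraction of neurite volume
    occupied by myelin (0.5 here is purely illustrative)."""
    if not 0.0 <= v_neurite <= 1.0:
        raise ValueError("volume fraction must lie in [0, 1]")
    return v_neurite * myelin_fraction
```

With these assumed numbers, a NODDI intra-neurite fraction of 0.84 would map to a 42% myelin density, the order of magnitude reported for the genu.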
NASA Astrophysics Data System (ADS)
Zeweldi, D. A.; Gebremichael, M.; Summis, T.; Wang, J.; Miller, D.
2008-12-01
The largest source of uncertainty in satellite-based evapotranspiration algorithms results from the estimation of sensible heat flux H. Traditionally, eddy covariance sensors, and more recently large-aperture scintillometers, have been used as ground truth to evaluate satellite-based H estimates. The two methods rely on different physical measurement principles and represent different footprint sizes. In New Mexico, we conducted a field campaign during summer 2008 to compare H estimates obtained from the eddy covariance and scintillometer methods. During this field campaign, we installed sonic anemometers; a one-propeller eddy covariance (OPEC) system equipped with net radiometer and soil heat flux sensors; a large-aperture scintillometer (LAS); and a weather station consisting of wind speed, direction and radiation sensors over three experimental areas with different roughness conditions (desert, irrigated area and lake). Our results show the similarities and differences in H estimates obtained from these various methods over the different land surface conditions. Further, our results show that the H estimates obtained from the LAS agree with those obtained from the eddy covariance method when high-frequency thermocouple temperature measurements, instead of the typical weather station temperature measurements, are used in the LAS analysis.
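For reference, the eddy covariance method estimates H from the covariance of fluctuations in vertical wind speed and air temperature, H = ρ · cp · cov(w′, T′). A minimal sketch, with assumed constant air density and specific heat:

```python
import statistics as st

RHO_AIR = 1.2    # air density, kg m^-3 (assumed constant)
CP_AIR = 1005.0  # specific heat of air at constant pressure, J kg^-1 K^-1

def sensible_heat_flux(w, T):
    """Eddy-covariance sensible heat flux H = rho * cp * cov(w', T'),
    from co-sampled vertical wind speed w (m/s) and temperature T (K)."""
    if len(w) != len(T):
        raise ValueError("w and T must be the same length")
    wm, Tm = st.fmean(w), st.fmean(T)
    cov = st.fmean((wi - wm) * (Ti - Tm) for wi, Ti in zip(w, T))
    return RHO_AIR * CP_AIR * cov
```

For example, four co-sampled points with perfectly correlated fluctuations of ±0.1 m/s and ±0.5 K give cov = 0.05 and hence H = 1.2 × 1005 × 0.05 = 60.3 W m⁻².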
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
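The iterative procedure in question is an instance of expectation-maximization (EM) for normal mixtures. A minimal sketch for a two-component univariate mixture with known unit variances (a deliberate simplification of the general setting):

```python
import math

def em_two_normals(x, pi, mu1, mu2, s1=1.0, s2=1.0, iters=200):
    """EM iterations for a two-component univariate normal mixture with
    known component variances; returns updated (pi, mu1, mu2)."""
    def pdf(v, mu, s):
        return math.exp(-0.5 * ((v - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        r = [pi * pdf(v, mu1, s1) /
             (pi * pdf(v, mu1, s1) + (1 - pi) * pdf(v, mu2, s2)) for v in x]
        # M-step: update mixing weight and component means
        pi = sum(r) / len(x)
        mu1 = sum(ri * v for ri, v in zip(r, x)) / sum(r)
        mu2 = sum((1 - ri) * v for ri, v in zip(r, x)) / sum(1 - ri for ri in r)
    return pi, mu1, mu2
```

The local-convergence result summarized above concerns exactly this kind of fixed-point iteration: from a reasonable starting point, the updates converge to the consistent maximum-likelihood estimate.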
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2013-01-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286
NASA Astrophysics Data System (ADS)
Fedi, M.; Florio, G.; Cascone, L.
2012-01-01
We use a multiscale approach as a semi-automated interpreting tool for potential fields. The depth to the source and the structural index are estimated in two steps: first the depth to the source, as the intersection of the field ridges (lines built by joining the extrema of the field at various altitudes), and second the structural index, by the scale function. We introduce a new criterion, called 'ridge consistency', into this strategy. The criterion is based on the principle that the structural index estimations on all the ridges converging towards the same source should be consistent. If these estimates are significantly different, field differentiation is used to lessen the interference effects from nearby sources or regional fields, to obtain a consistent set of estimates. In our multiscale framework, vertical differentiation is naturally joined with the low-pass filtering of upward continuation, and so is a stable process. Before applying our criterion, we carefully studied the errors in upward continuation caused by the finite size of the survey area. To this end, we analysed the complex magnetic synthetic case known as the Bishop model, and evaluated the best extrapolation algorithm and the optimal width of the area extension needed to obtain accurate upward continuation. Afterwards, we applied the method to the depth estimation of the whole Bishop basement bathymetry. The result is a good reconstruction of the complex basement and of the shape properties of the source at the estimated points.
Løvendahl, Peter; Sørensen, Lars Peter; Bjerring, Martin; Lassen, Jan
2016-12-01
Dairy cows milked in automatic milking systems (AMS) with more than 1 milking box may, as individuals, have a preference for specific milking boxes if allowed free choice. Estimates of quantitative genetic variation in behavioral traits of farmed animals have previously been reported, with estimates of heritability ranging widely. However, for the consistency of choice in dairy cows, almost no published estimates of heritability exist. The hypothesis for this study was that choice consistency is partly under additive genetic control and partly controlled by permanent environmental (animal) effects. The aims of this study were to obtain estimates of genetic and phenotypic parameters for choice consistency in dairy cows milked in AMS herds. Data were obtained from 5 commercial Danish herds (I-V) with 2 AMS milking boxes (A, B). Milking data were included only from milkings where both the present and the previous milkings were coded as completed. This filter was used to fulfill the criterion of a free-choice situation (713,772 milkings, 1,231 cows). The lactation was divided into 20 segments covering 15 d each, from 5 to 305 d in milk. Choice consistency scores were obtained as the fraction of milkings without change of box [i.e., 1.0 - µ(box change)] for each segment. Data were analyzed for one part of lactation at a time using a linear mixed model for first-parity cows alone and for all parities jointly. Choice consistency was found to be only weakly heritable (heritability = 0.02 to 0.14) in first as well as in later parities, and to have intermediate repeatability (repeatability coefficients = 0.27 to 0.56). Heritability was especially low in early and late lactation stages. These results indicate that consistency, which is itself an indication of repeated similar choices, is also repeatable as a trait observed over longer time periods.
However, the genetic background seems to play a smaller role than the permanent animal effects, indicating that consistency could also be a learned behavior. We concluded that consistency in choices is quantifiable, but only under weak genetic control. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
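The consistency score itself, defined above as the fraction of milkings without a change of box [1.0 - µ(box change)], is straightforward to compute from a sequence of box identities:

```python
def consistency_score(boxes):
    """Fraction of milkings (after the first) with no change of milking box,
    i.e. 1.0 - mean(box change), for a sequence of box identities."""
    changes = [a != b for a, b in zip(boxes, boxes[1:])]
    return 1.0 - sum(changes) / len(changes)
```

For example, the visit sequence A, A, B, B, A has two box changes in four transitions, giving a consistency score of 0.5.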
Atmospheric Turbulence Estimates from a Pulsed Lidar
NASA Technical Reports Server (NTRS)
Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.
2013-01-01
Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates shows good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates (which are from greater heights) during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.
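One common way to obtain EDR from lidar or sonic velocity data is to fit the inertial-range second-order structure function D(r) = C₂ ε^(2/3) r^(2/3) and invert for ε. A sketch of the inversion, with the longitudinal constant C₂ ≈ 2.0 assumed (the abstract does not specify which two lidar methodologies were used):

```python
def edr_from_structure_function(d2, r, c2=2.0):
    """Eddy dissipation rate (m^2 s^-3) from the second-order velocity
    structure function D(r) = c2 * eps^(2/3) * r^(2/3), assuming the
    separation r (m) lies in the inertial subrange."""
    return (d2 / (c2 * r ** (2.0 / 3.0))) ** 1.5
```

Given a measured D(r) at a single inertial-range separation, this one-line inversion recovers ε; in practice a fit over a range of separations is more robust.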
NASA Astrophysics Data System (ADS)
Raju, Subramanian; Saibaba, Saroja
2016-09-01
The enthalpy of formation ΔH_f° is an important thermodynamic quantity, which sheds significant light on fundamental cohesive and structural characteristics of an alloy. However, since it is difficult to determine accurately through experiments, simple estimation procedures are often desirable. In the present study, a modified prescription for estimating ΔH_f° of liquid transition metal alloys is outlined, based on the Macroscopic Atom Model of cohesion. This prescription relies on self-consistent estimation of liquid-specific model parameters, namely electronegativity (ϕ^L) and bonding electron density (n_b^L). Such unique identification is made through the use of well-established relationships connecting surface tension, compressibility, and molar volume of a metallic liquid with bonding charge density. The electronegativity is obtained through a consistent linear scaling procedure. The preliminary set of values for ϕ^L and n_b^L, together with other auxiliary model parameters, is subsequently optimized to obtain good numerical agreement between calculated and experimental values of ΔH_f° for sixty liquid transition metal alloys. It is found that, with few exceptions, the use of liquid-specific model parameters in the Macroscopic Atom Model yields a physically consistent methodology for reliable estimation of mixing enthalpies of liquid alloys.
CMB bispectrum, trispectrum, non-Gaussianity, and the Cramer-Rao bound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamionkowski, Marc; Smith, Tristan L.; Heavens, Alan
Minimum-variance estimators for the parameter f_NL that quantifies local-model non-Gaussianity can be constructed from the cosmic microwave background (CMB) bispectrum (three-point function) and also from the trispectrum (four-point function). Some have suggested that a comparison between the estimates for the values of f_NL from the bispectrum and trispectrum allows a consistency test for the model. But others argue that the saturation of the Cramer-Rao bound, which gives a lower limit to the variance of an estimator, by the bispectrum estimator implies that no further information on f_NL can be obtained from the trispectrum. Here, we elaborate on the nature of the correlation between the bispectrum and trispectrum estimators for f_NL. We show that the two estimators become statistically independent in the limit of a large number of CMB pixels, and thus that the trispectrum estimator does indeed provide additional information on f_NL beyond that obtained from the bispectrum. We explain how this conclusion is consistent with the Cramer-Rao bound. Our discussion of the Cramer-Rao bound may be of interest to those doing Fisher-matrix parameter-estimation forecasts or data analysis in other areas of physics as well.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
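The core of the approach, combining passive-sonar-equation SNRs with a detector characterization inside a Monte Carlo loop, can be sketched as follows. All distributions and detector parameters below are illustrative assumptions, not the paper's calibrated inputs:

```python
import math
import random

def detection_probability(n_draws=20000, seed=1):
    """Monte Carlo sketch of the single-sensor method: draw source level
    and transmission loss from assumed distributions, form the SNR via the
    passive sonar equation SNR = SL - TL - NL, and average an assumed
    logistic detector characterization P(detect | SNR)."""
    rng = random.Random(seed)
    NL = 50.0  # noise level, dB (assumed)

    def p_detect(snr, snr50=10.0, slope=0.5):
        # logistic detector curve: 50% detection at snr50 dB (assumed)
        return 1.0 / (1.0 + math.exp(-slope * (snr - snr50)))

    total = 0.0
    for _ in range(n_draws):
        sl = rng.gauss(200.0, 5.0)   # source level, dB (assumed distribution)
        tl = rng.gauss(135.0, 8.0)   # transmission loss, dB (assumed distribution)
        total += p_detect(sl - tl - NL)
    return total / n_draws
```

In the actual method, the transmission loss would come from propagation modeling and the detector curve from an empirical characterization; the Monte Carlo average then feeds into the density estimator together with call rate and false positive rate.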
A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data
NASA Astrophysics Data System (ADS)
Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.
2006-06-01
Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.
Calculation of the time resolution of the J-PET tomograph using kernel density estimation
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
2017-06-01
In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
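Kernel density estimation, the second ingredient named above, is simple to sketch: with a Gaussian kernel, the estimated density is the average of kernels centered on the registered samples.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Kernel density estimate with a Gaussian kernel; returns a callable
    density f(t) built as the average of kernels centered on the samples."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))

    def f(t):
        return norm * sum(math.exp(-0.5 * ((t - s) / bandwidth) ** 2)
                          for s in samples)
    return f
```

Applied to registered signal arrival times, such a density estimate smooths the empirical distribution so that closed-form expressions for its spread (and hence the time resolution) can be derived.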
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining the consistent maximum likelihood estimates of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
Quintela-del-Río, Alejandro; Francisco-Fernández, Mario
2011-02-01
The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists of fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that the nonparametric estimators work satisfactorily, outperforming the behaviour of classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
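For reference, the classical parametric quantity against which the nonparametric alternatives are compared is the GEV return level: the level z_T exceeded on average once every T periods solves F(z_T) = 1 - 1/T, giving z_T = μ + (σ/ξ)[(-ln(1 - 1/T))^(-ξ) - 1] for shape ξ ≠ 0. A sketch, with already-fitted parameters assumed:

```python
import math

def gev_return_level(loc, scale, shape, T):
    """Return level exceeded on average once every T periods for a fitted
    GEV(loc, scale, shape) distribution (shape != 0 branch)."""
    p = 1.0 / T  # per-period exceedance probability
    y = -math.log(1.0 - p)
    return loc + scale / shape * (y ** (-shape) - 1.0)
```

By construction, plugging the returned level back into the GEV distribution function F(z) = exp(-(1 + ξ(z - μ)/σ)^(-1/ξ)) gives exactly 1 - 1/T.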
Objectivity and validity of EMG method in estimating anaerobic threshold.
Kang, S-K; Kim, J; Kwon, M; Eom, H
2014-08-01
The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. Sixty-nine untrained male university students who nevertheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with the values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option for estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.
Estimation of risks to children from exposure to airborne pollutants is often complicated by the lack of reliable epidemiological data specific to this age group. As a result, risks are generally estimated from extrapolations based on data obtained in other human age groups (e.g....
ERIC Educational Resources Information Center
Burt, Martha R.
This report presents the results of a federally mandated study done to determine the best means of identifying, locating, and counting homeless children and youth, for the purpose of facilitating their successful participation in school and other educational activities. Several alternative approaches to obtaining consistent national estimates of…
Simultaneous Estimation of Overall and Domain Abilities: A Higher-Order IRT Model Approach
ERIC Educational Resources Information Center
de la Torre, Jimmy; Song, Hao
2009-01-01
Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…
Determination of Time Dependent Virus Inactivation Rates
NASA Astrophysics Data System (ADS)
Chrysikopoulos, C. V.; Vogler, E. T.
2003-12-01
A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
Estimation of spectral distribution of sky radiance using a commercial digital camera.
Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao
2016-01-10
Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
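The key step, mapping digital RGB counts to polynomial coefficients by a linear transformation and evaluating the polynomial over 430-680 nm, can be sketched as below. The calibration matrix M and the wavelength normalization are assumptions for illustration, not the paper's calibrated values:

```python
def spectral_radiance(rgb, M, wavelengths):
    """Reconstruct a spectral radiance curve from camera RGB counts:
    polynomial coefficients c = M @ rgb (an assumed, pre-calibrated 3x3
    linear transform), then evaluate the polynomial in wavelength."""
    c = [sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3)]
    out = []
    for lam in wavelengths:
        # quadratic in normalized wavelength over the 430-680 nm band
        x = (lam - 430.0) / 250.0
        out.append(c[0] + c[1] * x + c[2] * x * x)
    return out
```

In practice M would be obtained by regressing spectrometer measurements against RGB counts, and the estimated camera spectral responses determine how well the linear transform holds across the band.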
Maximum-likelihood estimation of parameterized wavefronts from multifocal data
Sakamoto, Julia A.; Barrett, Harrison H.
2012-01-01
A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282
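Simulated annealing of the kind used above to cope with numerous local extrema is easy to sketch generically. The cost function, step size, and cooling schedule below are illustrative assumptions, not those of the paper:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=4000, seed=3):
    """Generic simulated-annealing minimizer: f maps a parameter list to a
    scalar cost (e.g. a negative log-likelihood stand-in)."""
    rng = random.Random(seed)
    x, fx, t = list(x0), f(x0), t0
    best, fbest = list(x), fx
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = f(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```

For wavefront estimation, x would hold the expansion coefficients and f the negative log-likelihood of the observed irradiance patterns; the slow cooling lets the search escape shallow local extrema before settling.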
Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi
2015-07-01
Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.
iGLASS: An Improvement to the GLASS Method for Estimating Species Trees from Gene Trees
Rosenberg, Noah A.
2012-01-01
Several methods have been designed to infer species trees from gene trees while taking into account gene tree/species tree discordance. Although some of these methods provide consistent species tree topology estimates under a standard model, most either do not estimate branch lengths or are computationally slow. An exception, the GLASS method of Mossel and Roch, is consistent for the species tree topology, estimates branch lengths, and is computationally fast. However, GLASS systematically overestimates divergence times, leading to biased estimates of species tree branch lengths. By assuming a multispecies coalescent model in which multiple lineages are sampled from each of two taxa at L independent loci, we derive the distribution of the waiting time until the first interspecific coalescence occurs between the two taxa, considering all loci and measuring from the divergence time. We then use the mean of this distribution to derive a correction to the GLASS estimator of pairwise divergence times. We show that our improved estimator, which we call iGLASS, consistently estimates the divergence time between a pair of taxa as the number of loci approaches infinity, and that it is an unbiased estimator of divergence times when one lineage is sampled per taxon. We also show that many commonly used clustering methods can be combined with the iGLASS estimator of pairwise divergence times to produce a consistent estimator of the species tree topology. Through simulations, we show that iGLASS can greatly reduce the bias and mean squared error in obtaining estimates of divergence times in a species tree. PMID:22216756
Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.
Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P
2015-10-01
Human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from a deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with spatially distributed elastic property. Simulation is performed on a 3D lung geometry reconstructed from four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data which collectively provide consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
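The Tikhonov regularization (TR) coupling step can be illustrated with a toy pointwise fusion of two displacement fields: minimizing ||x - d_DIR||² + λ||x - d_CFD||² componentwise gives the weighted average below. This is a sketch of the idea, not the paper's full algorithm:

```python
def fuse_displacements(d_dir, d_cfd, lam):
    """Tikhonov-style fusion of two displacement estimates: the pointwise
    minimizer of ||x - d_dir||^2 + lam * ||x - d_cfd||^2 is the weighted
    average x = (d_dir + lam * d_cfd) / (1 + lam)."""
    return [(u + lam * v) / (1.0 + lam) for u, v in zip(d_dir, d_cfd)]
```

The regularization weight λ controls how strongly the physics-based (CFD) deformation pulls the fused field away from the image-registration (DIR) displacement; λ = 1 averages the two.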
Comparison of Past, Present, and Future Volume Estimation Methods for Tennessee
Stanley J. Zarnoch; Alexander Clark; Ray A. Souter
2003-01-01
Forest Inventory and Analysis 1999 survey data for Tennessee were used to compare stem-volume estimates obtained using a previous method, the current method, and newly developed taper models that will be used in the future. Compared to the current method, individual tree volumes were consistently underestimated with the previous method, especially for the hardwoods....
A visual training tool for the Photoload sampling technique
Violet J. Holley; Robert E. Keane
2010-01-01
This visual training aid is designed to provide Photoload users a tool to increase the accuracy of fuel loading estimations when using the Photoload technique. The Photoload Sampling Technique (RMRS-GTR-190) provides fire managers a sampling method for obtaining consistent, accurate, inexpensive, and quick estimates of fuel loading. It is designed to require only one...
Using Internet search engines to estimate word frequency.
Blair, Irene V; Urland, Geoffrey R; Ma, Jennifer E
2002-05-01
The present research investigated Internet search engines as a rapid, cost-effective alternative for estimating word frequencies. Frequency estimates for 382 words were obtained and compared across four methods: (1) Internet search engines, (2) the Kucera and Francis (1967) analysis of a traditional linguistic corpus, (3) the CELEX English linguistic database (Baayen, Piepenbrock, & Gulikers, 1995), and (4) participant ratings of familiarity. The results showed that Internet search engines produced frequency estimates that were highly consistent with those reported by Kucera and Francis and those calculated from CELEX, highly consistent across search engines, and very reliable over a 6-month period of time. Additional results suggested that Internet search engines are an excellent option when traditional word frequency analyses do not contain the necessary data (e.g., estimates for forenames and slang). In contrast, participants' familiarity judgments did not correspond well with the more objective estimates of word frequency. Researchers are advised to use search engines with large databases (e.g., AltaVista) to ensure the greatest representativeness of the frequency estimates.
Wang, Ying-Fang; Tsai, Perng-Jy; Chen, Chun-Wan; Chen, Da-Ren; Dai, Yu-Tung
2011-12-30
The aims of the present study were to measure size distributions and estimate workers' exposure concentrations of oil mist nanoparticles in three selected workplaces (the forming, threading, and heat-treating areas) of a fastener manufacturing plant using a modified electrical aerosol detector (MEAD). The results were further compared with those simultaneously obtained from a nanoparticle surface area monitor (NSAM) and a scanning mobility particle sizer (SMPS) for validation purposes. Results show that oil mist nanoparticles in the three selected process areas were formed mainly through evaporation and condensation processes. The measured size distributions of nanoparticles were consistently unimodal. The estimated fraction of nanoparticles deposited in the alveolar (AV) region was consistently much higher than that in the head airway (HD) and tracheobronchial (TB) regions on both number and surface area concentration bases. However, a significant difference was found in the estimated fraction of nanoparticles deposited in each individual region when different exposure metrics were used. Results obtained from the NSAM and MEAD were comparable. After normalization, no significant difference was found between the results obtained from the SMPS and MEAD. It is concluded that the obtained MEAD results are suitable for assessing oil mist nanoparticle exposures. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Badawy, Bakr; Polavarapu, Saroja; Jones, Dylan B. A.; Deng, Feng; Neish, Michael; Melton, Joe R.; Nassar, Ray; Arora, Vivek K.
2018-02-01
The Canadian Land Surface Scheme and the Canadian Terrestrial Ecosystem Model (CLASS-CTEM) together form the land surface component in the family of Canadian Earth system models (CanESMs). Here, CLASS-CTEM is coupled to Environment and Climate Change Canada (ECCC)'s weather and greenhouse gas forecast model (GEM-MACH-GHG) to consistently model atmosphere-land exchange of CO2. The coupling between the land and the atmospheric transport model ensures consistency between meteorological forcing of CO2 fluxes and CO2 transport. The procedure used to spin up carbon pools for CLASS-CTEM for multi-decadal simulations needed to be significantly altered to deal with the limited availability of consistent meteorological information from a constantly changing operational environment in the GEM-MACH-GHG model. Despite the limitations in the spin-up procedure, the simulated fluxes obtained by driving the CLASS-CTEM model with meteorological forcing from GEM-MACH-GHG were comparable to those obtained from CLASS-CTEM when it is driven with standard meteorological forcing from the Climate Research Unit (CRU) combined with reanalysis fields from the National Centers for Environmental Prediction (NCEP) to form the CRU-NCEP dataset. This is due to the similarity of the two meteorological datasets in terms of temperature and radiation. However, notable discrepancies in the seasonal variation and spatial patterns of precipitation estimates, especially in the tropics, were reflected in the estimated carbon fluxes, as they significantly affected the magnitude of the vegetation productivity and, to a lesser extent, the seasonal variations in carbon fluxes. Nevertheless, the simulated fluxes based on the meteorological forcing from the GEM-MACH-GHG model are consistent to some extent with other estimates from bottom-up or top-down approaches.
Indeed, when simulated fluxes obtained by driving the CLASS-CTEM model with meteorological data from the GEM-MACH-GHG model are used as prior estimates for an atmospheric CO2 inversion analysis using the adjoint of the GEOS-Chem model, the retrieved CO2 flux estimates are comparable to those obtained from other systems in terms of the global budget and the total flux estimates for the northern extratropical regions, which have good observational coverage. In data-poor regions, as expected, differences in the retrieved fluxes due to the prior fluxes become apparent. Coupling CLASS-CTEM into the Environment Canada Carbon Assimilation System (EC-CAS) is considered an important step toward understanding how meteorological uncertainties affect both CO2 flux estimates and modeled atmospheric transport. Ultimately, such an approach will provide more direct feedback to the CLASS-CTEM developers and thus help to improve the performance of CLASS-CTEM by identifying the model limitations based on atmospheric constraints.
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
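The traditional mono-exponential back-extrapolation step the authors compare against can be sketched as follows (an illustrative reconstruction with synthetic numbers, not the authors' physiological model): fit ln C(t) against time over the early post-mixing samples, extrapolate back to the injection time to obtain C0, and take plasma volume = dose / C0.

```python
import numpy as np

def plasma_volume_backextrapolation(t_min, conc_mg_per_l, dose_mg):
    """Classical mono-exponential back-extrapolation:
    fit ln C = ln C0 - k*t, extrapolate to t = 0, then PV = dose / C0."""
    slope, ln_c0 = np.polyfit(t_min, np.log(conc_mg_per_l), 1)
    c0 = np.exp(ln_c0)   # back-extrapolated concentration at injection time
    return dose_mg / c0  # plasma volume in litres

# Synthetic ICG decay: C(t) = 5 mg/L * exp(-0.25/min * t), 25 mg dose
t = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
c = 5.0 * np.exp(-0.25 * t)
pv = plasma_volume_backextrapolation(t, c, 25.0)   # -> 5.0 L
```

The proposed method in the paper replaces the mono-exponential assumption with a physiologically based kinetic model for the first minutes postinjection; the sketch above only shows the baseline technique being improved upon.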
Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R
2017-01-01
Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often have small sample sizes, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUPs) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (RMSEs) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had RMSEs that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than the RMSEs of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
Bhattacharya, Abhishek; Dunson, David B.
2012-01-01
This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295
Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data
Bakun, W.H.; Gomez, Capera A.; Stucchi, M.
2011-01-01
Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. 
The instrumental magnitudes for large and small earthquakes are generally consistent with the confidence intervals inferred from the distribution of bootstrap resampled magnitudes.
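The magnitude part of this bootstrap scheme can be sketched generically (a percentile-bootstrap illustration with made-up magnitude values, not the authors' intensity-based estimators): the preferred magnitude is the median of the bootstrap magnitudes, and the 68% bounds enclose 68% of the bootstrap values.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_magnitude_summary(mags, n_boot=2000):
    """Preferred magnitude = median of bootstrap magnitudes;
    68% uncertainty bounds enclose 68% of the bootstrap values."""
    boot = np.array([
        np.median(rng.choice(mags, size=len(mags), replace=True))
        for _ in range(n_boot)
    ])
    lo, med, hi = np.percentile(boot, [16.0, 50.0, 84.0])
    return lo, med, hi

# Hypothetical magnitude estimates from bootstrap-resampled intensity sets
mags = np.array([5.8, 5.9, 6.0, 6.0, 6.1, 6.1, 6.2, 6.3])
lo, med, hi = bootstrap_magnitude_summary(mags)
```

In the paper the resampled quantities are intensity data sets fed through three location/magnitude techniques via a weighted decision tree; the percentile summary above is only the final step of that pipeline.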
Estimation of dynamic stability parameters from drop model flight tests
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Iliff, K. W.
1981-01-01
The overall remotely piloted drop model operation, descriptions, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods are discussed. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. It is indicated that the variations of the estimates with angle of attack are consistent for most of the static derivatives, and the effects of configuration modifications to the model were apparent in the static derivative estimates.
Estimation and Analysis of Nonlinear Stochastic Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Marcus, S. I.
1975-01-01
The algebraic and geometric structures of certain classes of nonlinear stochastic systems were exploited in order to obtain useful stability and estimation results. The class of bilinear stochastic systems (or linear systems with multiplicative noise) was discussed. The stochastic stability of bilinear systems driven by colored noise was considered. Approximate methods for obtaining sufficient conditions for the stochastic stability of bilinear systems evolving on general Lie groups were discussed. Two classes of estimation problems involving bilinear systems were considered. It was proved that, for systems described by certain types of Volterra series expansions or by certain bilinear equations evolving on nilpotent or solvable Lie groups, the optimal conditional mean estimator consists of a finite dimensional nonlinear set of equations. The theory of harmonic analysis was used to derive suboptimal estimators for bilinear systems driven by white noise which evolve on compact Lie groups or homogeneous spaces.
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
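The profile-likelihood construction can be illustrated on the simplest possible case, a binomial proportion (a generic sketch, not the IRT-specific implementation in the article): the 95% PL CI is the set of parameter values whose likelihood-ratio statistic stays below the chi-squared critical value 3.841.

```python
import math

def binom_loglik(p, k, n):
    """Binomial log-likelihood for k successes out of n."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def profile_ci(k, n, crit=3.841):
    """95% profile-likelihood CI: all p with 2*(logL(p_hat) - logL(p)) <= crit."""
    p_hat = k / n
    lmax = binom_loglik(p_hat, k, n)

    def dev(p):  # likelihood-ratio deviance minus the critical value
        return 2.0 * (lmax - binom_loglik(p, k, n)) - crit

    def bisect(a, b):  # find the root of dev on [a, b] by bisection
        for _ in range(200):
            m = 0.5 * (a + b)
            if dev(a) * dev(m) <= 0.0:
                b = m
            else:
                a = m
        return 0.5 * (a + b)

    return bisect(1e-9, p_hat), bisect(p_hat, 1.0 - 1e-9)

lo, hi = profile_ci(7, 20)   # p_hat = 0.35; CI is asymmetric around it
```

Unlike a Wald interval, this CI is not forced to be symmetric about the estimate, which is the property the simulation results in the abstract favour.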
Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model
Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.
2012-01-01
Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315
Estimation and classification by sigmoids based on mutual information
NASA Technical Reports Server (NTRS)
Baram, Yoram
1994-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.
2013-01-01
Background Administrative databases are widely available and have been extensively used to provide estimates of chronic disease prevalence for the purpose of surveillance of both geographical and temporal trends. There are, however, other sources of data available, such as medical records from primary care and national surveys. In this paper we compare disease prevalence estimates obtained from these three different data sources. Methods Data from general practitioners (GP) and administrative transactions for health services were collected from five Italian regions (Veneto, Emilia Romagna, Tuscany, Marche and Sicily) belonging to all the three macroareas of the country (North, Center, South). Crude prevalence estimates were calculated by data source and region for diabetes, ischaemic heart disease, heart failure and chronic obstructive pulmonary disease (COPD). For diabetes and COPD, prevalence estimates were also obtained from a national health survey. When necessary, estimates were adjusted for completeness of data ascertainment. Results Crude prevalence estimates of diabetes in administrative databases (range: from 4.8% to 7.1%) were lower than corresponding GP (6.2%-8.5%) and survey-based estimates (5.1%-7.5%). Geographical trends were similar in the three sources and estimates based on treatment were the same, while estimates adjusted for completeness of ascertainment (6.1%-8.8%) were slightly higher. For ischaemic heart disease administrative and GP data sources were fairly consistent, with prevalence ranging from 3.7% to 4.7% and from 3.3% to 4.9%, respectively. In the case of heart failure administrative estimates were consistently higher than GPs’ estimates in all five regions, the highest difference being 1.4% vs 1.1%. 
For COPD the estimates from administrative data, ranging from 3.1% to 5.2%, fell into the confidence interval of the Survey estimates in four regions, but failed to detect the higher prevalence in the most Southern region (4.0% in administrative data vs 6.8% in survey data). The prevalence estimates for COPD from GP data were consistently higher than the corresponding estimates from the other two sources. Conclusion This study supports the use of data from Italian administrative databases to estimate geographic differences in population prevalence of ischaemic heart disease, treated diabetes, diabetes mellitus and heart failure. The algorithm for COPD used in this study requires further refinement. PMID:23297821
Samuel, Michael D.; Storm, Daniel J.; Rolley, Robert E.; Beissel, Thomas; Richards, Bryan J.; Van Deelen, Timothy R.
2014-01-01
The age structure of harvested animals provides the basis for many demographic analyses. The ages of harvested white-tailed deer (Odocoileus virginianus) and other ungulates often are estimated by evaluating replacement and wear patterns of teeth, which is subjective and error-prone. Few previous studies, however, examined age- and sex-specific error rates. Counting cementum annuli of incisors is an alternative, more accurate method of estimating age, but factors that influence consistency of cementum annuli counts are poorly known. We estimated the ages of 1,261 adult (≥1.5 yr old) white-tailed deer harvested in Wisconsin and Illinois (USA; 2005–2008) using both wear-and-replacement and cementum annuli. We compared cementum annuli with wear-and-replacement estimates to assess misclassification rates by sex and age. Wear-and-replacement for estimating ages of white-tailed deer resulted in substantial misclassification compared with cementum annuli. Age classes of females were consistently underestimated, while those of males were underestimated for younger age classes but overestimated for older age classes. Misclassification resulted in an impression of a younger age structure than actually was the case. Additionally, we obtained paired age estimates from cementum annuli for 295 deer. Consistency of paired cementum annuli age estimates decreased with age, was lower in females than males, and decreased as age estimates became less certain. Our results indicated that errors in the wear-and-replacement technique are substantial and could impact demographic analyses that use age-structure information.
NASA Astrophysics Data System (ADS)
Hayroyan, H. S.; Hayroyan, S. H.; Karapetyan, K. A.
2018-04-01
In this paper, three types of clayey soils with different consistency and humidity properties and slip-slide resistance indexes are considered under different cyclic shear stresses. The side-surface deformation charts are constructed on the basis of experimental data obtained by testing cylindrical soil samples. It is shown that the fluctuation amplitude depends on time, and that the consistency index depends on the humidity condition at the soil's inner contacts and on the connectivity coefficients. Each experiment is interpreted accordingly. The main result of this research is that corrections are needed in the currently used schemes for estimating the stability of slip-hazardous slopes, a crucial problem requiring an urgent solution.
NASA Technical Reports Server (NTRS)
Sim, Alex G.
1976-01-01
Longitudinal and lateral-directional estimates of the aerodynamic derivatives of the X-24B research aircraft were obtained from flight data by using a modified maximum likelihood estimation method. Data were obtained over a Mach number range from 0.35 to 1.72 and over an angle-of-attack range from 3.5 deg to 15.7 deg. Data are presented for a subsonic and a transonic configuration. The flight derivatives were generally consistent and documented the aircraft well. The correlation between the flight data and wind-tunnel predictions is presented and discussed.
Estimating forest biomass and volume using airborne laser data
NASA Technical Reports Server (NTRS)
Nelson, Ross; Krabill, William; Tonelli, John
1988-01-01
An airborne pulsed laser system was used to obtain canopy height data over a southern pine forest in Georgia in order to predict ground-measured forest biomass and timber volume. Although biomass and volume estimates obtained from the laser data were variable when compared with the corresponding ground measurements site by site, the present models are found to predict mean total tree volume within 2.6 percent of the ground value, and mean biomass within 2.0 percent. The results indicate that species stratification did not consistently improve regression relationships for four southern pine species.
Efficient bootstrap estimates for tail statistics
NASA Astrophysics Data System (ADS)
Breivik, Øyvind; Aarnes, Ole Johan
2017-03-01
Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
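The core idea, that bootstrap statistics of the tail depend only on the highest order statistics, can be sketched for the simplest tail statistic, the sample maximum (an illustrative construction, not the authors' exact procedure): the sorted index M of a resample's maximum satisfies P(M ≤ j) = (j/n)^n, so it can be drawn by inverse transform and only the top few sorted entries are ever touched.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_sample_max(sorted_sample, n_boot):
    """Bootstrap distribution of the sample maximum without resampling the
    full sample: M = ceil(n * U**(1/n)) is the (1-based) sorted index of the
    resample maximum, so only the top order statistics matter."""
    n = len(sorted_sample)
    u = rng.random(n_boot)
    idx = np.ceil(n * u ** (1.0 / n)).astype(int) - 1
    return sorted_sample[np.clip(idx, 0, n - 1)]

x = np.sort(rng.gumbel(size=10_000))
boot_max = bootstrap_sample_max(x, 5_000)
# Every bootstrap maximum comes from the extreme upper tail of the sample.
```

With n = 10,000, the probability that a resample maximum falls below the top 100 entries is (0.99)^10000 ≈ e^-100, which is why bootstrapping from a small top subset reproduces full-sample results at a fraction of the cost.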
The color temperature of (2060) Chiron: A warm and small nucleus
NASA Technical Reports Server (NTRS)
Campins, H.; Telesco, C. M.; Osip, D. J.; Rieke, G. H.; Rieke, M. J.; Schulz, B.
1994-01-01
We present three sets of thermal-infrared observations of (2060) Chiron, obtained in 1991, 1993, and 1994. These observations allow the first estimates of the color temperature of Chiron as well as refined estimates of the radius and albedo of its nucleus. 10/20 micrometer color temperatures of 126 (+11/-6) K and 137 (+14/-9) K are obtained from the 1993 and 1994 observations, respectively. These temperatures are consistent with the Standard Thermal Model (STM; Lebofsky & Spencer, 1989), but significantly higher than those predicted by the Isothermal Latitude Model. Our estimates of Chiron's radius based on the STM are in agreement with each other, with the observations of Lebofsky et al. (1984), and with recent occultation results (Buie et al., 1993). We obtained values for the radius of 74 +/- 11 km in 1991, 88 +/- 10 and 104 +/- 10 km in 1993, and 94 +/- 6 and 91 +/- 13 km in 1994.
Vision-Based SLAM System for Unmanned Aerial Vehicles
Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni
2016-01-01
The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy. PMID:26999131
NASA Astrophysics Data System (ADS)
Naidu, S.; Benner, L.; Brozovic, M.; Ostro, S. J.; Nolan, M. C.; Margot, J. L.; Giorgini, J. D.; Magri, C.; Pravec, P.; Scheirich, P.; Scheeres, D. J.; Hirabayashi, M.
2016-12-01
Binary near-Earth asteroid 65803 Didymos is the target of the proposed Asteroid Impact and Deflection Assessment (AIDA) space mission. The mission consists of two spacecraft, the Demonstration for Autonomous Rendezvous Technology (DART) spacecraft that will impact the asteroid's satellite and the Asteroid Impact Mission (AIM) spacecraft that will observe the impact. We used radar observations obtained at Arecibo and Goldstone in 2003, and lightcurve data from Pravec et al. (2006) to model the shapes, sizes, and spin states of the components. The primary is top shaped and has an equatorial ridge similar to the one seen on 2000 DP107 (Naidu et al. 2015). A 300 m long flat region is also seen along the equator. The primary has an equivalent diameter of 780 m (+/- 10 %) and its extents along the principal axes are 826 m, 813 m, and 786 m (10% uncertainties). It has a spin period of 2.2600 +/- 0.0001 h. A grid search for the spin pole resulted in the best fit at ecliptic (longitude, latitude) = (296, +71) degrees (+/- 15 degrees). This estimate is consistent with the spin pole being aligned to the binary orbit normal at (310, -84) degrees. Dividing the primary mass of 5.24e11 kg (Fang & Margot 2012) by the model volume we estimate a bulk density of 2100 kg m-3 (+/- 30 %). We summed multiple radar runs to estimate the range and Doppler extents of the satellite. We estimated the motion in successive images and used a shift-and-sum technique to mitigate smearing due to translational motion. This boosted the SNRs and allowed us to obtain size and bandwidth estimates of the satellite. The visible range extent of the satellite is roughly 60-75 m at the 15 m resolution of the Arecibo images. Assuming that the true extent is twice the visible extent, we obtain a diameter estimate of 120-150 m. The bandwidth of the satellite suggests a spin period between 9-12 h that is consistent with the orbit period of 11.9 hours and with synchronous rotation.
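The shift-and-sum step can be sketched in one dimension (an illustrative toy version with a synthetic echo, not the authors' radar pipeline): undo the estimated per-frame translational drift, then sum the co-registered frames so the echo adds coherently while noise averages down.

```python
import numpy as np

def shift_and_sum(frames, shifts):
    """Undo each frame's estimated translational drift, then sum the
    co-registered frames to boost the SNR of the aligned echo."""
    total = np.zeros_like(frames[0], dtype=float)
    for frame, s in zip(frames, shifts):
        total += np.roll(frame, -s)   # shift back by the estimated motion
    return total

# Synthetic echo at range bin 10, drifting by a known amount per frame
base = np.zeros(64)
base[10] = 1.0
shifts = [0, 3, 7]
frames = [np.roll(base, s) for s in shifts]
stacked = shift_and_sum(frames, shifts)   # peak of height 3 at bin 10
```

Summing N aligned frames grows the echo amplitude by N while uncorrelated noise grows only by sqrt(N), which is the SNR boost the abstract refers to.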
Diffendorfer, James E.; Richards, Paul M.; Dalrymple, George H.; DeAngelis, Donald L.
2001-01-01
We present the application of Linear Programming for estimating biomass fluxes in ecosystem and food web models. We use the herpetological assemblage of the Everglades as an example. We developed food web structures for three common Everglades freshwater habitat types: marsh, prairie, and upland. We obtained a first estimate of the fluxes using field data, literature estimates, and professional judgment. Linear programming was used to obtain a consistent and better estimate of the set of fluxes, while maintaining mass balance and minimizing deviations from point estimates. The results support the view that the Everglades is a spatially heterogeneous system, with changing patterns of energy flux, species composition, and biomasses across the habitat types. We show that a food web/ecosystem perspective, combined with Linear Programming, is a robust method for describing food webs and ecosystems that requires minimal data, produces useful post-solution analyses, and generates hypotheses regarding the structure of energy flow in the system.
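The balancing step can be sketched with a toy three-flux web (a minimal illustration using scipy, with made-up flux values rather than the Everglades data): minimize the total absolute deviation from the point estimates subject to mass balance, by introducing auxiliary variables t_i >= |f_i - f0_i|.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical point estimates of three fluxes; f1 + f2 should equal f3,
# but the field-based estimates are inconsistent (10 + 5 != 12).
f0 = np.array([10.0, 5.0, 12.0])

# Variables: [f1, f2, f3, t1, t2, t3] with t_i >= |f_i - f0_i|.
c = np.r_[np.zeros(3), np.ones(3)]            # minimise total absolute deviation
A_ub = np.block([[np.eye(3), -np.eye(3)],     #  f_i - t_i <= f0_i
                 [-np.eye(3), -np.eye(3)]])   # -f_i - t_i <= -f0_i
b_ub = np.r_[f0, -f0]
A_eq = np.array([[1.0, 1.0, -1.0, 0.0, 0.0, 0.0]])  # mass balance: f1 + f2 = f3
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0.0], bounds=(0, None))
fluxes = res.x[:3]   # balanced fluxes closest (in L1) to the point estimates
```

The 3-unit imbalance must be absorbed somewhere, so the optimal objective is exactly 3; in a real food web the equality rows encode one balance equation per compartment.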
ERIC Educational Resources Information Center
Michaelides, Michalis P.
2006-01-01
Consistent behavior is a desirable characteristic that common items are expected to have when administered to different groups. Findings from the literature have established that items do not always behave in consistent ways; item indices and IRT item parameter estimates of the same items differ when obtained from different administrations.…
On Consistency Test Method of Expert Opinion in Ecological Security Assessment
Wang, Lihong
2017-01-01
Ecological security assessment is of great value for reflecting the proactive design of human security management and safety early warning. In the comprehensive evaluation of regional ecological security with the participation of experts, an expert's individual judgment level and ability, and the consistency of the experts' overall opinion, have a very important influence on the evaluation result. This paper studies consistency and consensus measures based on the multiplicative and additive consistency properties of fuzzy preference relations (FPRs). We first propose optimization methods to obtain the optimal multiplicatively consistent and additively consistent FPRs of individual and group judgments, respectively. Then, we put forward a consistency measure that computes the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure that computes the distance between the original collective judgment and the optimal collective estimation. Finally, we present a case study on ecological security for five cities. The results show that the optimal FPRs are helpful in measuring the consistency degree of individual judgments and the consensus degree of collective judgments. PMID:28869570
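For the additive case, the distance-based consistency measure has a simple closed form. The sketch below is a simplified stand-in for the paper's optimization models, assuming the standard additive-consistency relation r_ij = 0.5 + (w_i - w_j)/2: it fits least-squares priority weights to an FPR, rebuilds the nearest additively consistent relation, and reports the mean absolute distance.

```python
import numpy as np

def additive_consistency(R):
    """Distance between an FPR and its closest additively consistent FPR.

    R is reciprocal (R + R.T == 1). The least-squares weights for the model
    r_ij = 0.5 + (w_i - w_j)/2, normalized so sum(w) = 0, are
    w_i = (2/n) * (row_sum_i - n/2).
    """
    n = R.shape[0]
    w = 2.0 / n * (R.sum(axis=1) - n / 2.0)
    R_hat = 0.5 + (w[:, None] - w[None, :]) / 2.0
    return np.abs(R - R_hat).mean(), R_hat

# A perfectly additively consistent FPR has distance ~0 ...
w_true = np.array([0.3, 0.0, -0.3])
R_cons = 0.5 + (w_true[:, None] - w_true[None, :]) / 2.0
d_cons, _ = additive_consistency(R_cons)

# ... while perturbing one reciprocal pair yields a positive distance.
R_pert = R_cons.copy()
R_pert[0, 1] += 0.1
R_pert[1, 0] -= 0.1
d_pert, _ = additive_consistency(R_pert)
```

A consensus measure follows the same pattern at the group level: replace the individual FPR with the collective one and compare it against the optimal collective estimate.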
Extension of the simulated drinking game procedure to multiple drinking games.
Cameron, Jennifer M; Leon, Matthew R; Correia, Christopher J
2011-08-01
The present study extended the Simulated Drinking Game Procedure (SDGP) to obtain information about different types of drinking games. Phase I participants (N = 545) completed online screening questionnaires assessing substance use and drinking game participation. Participants who met the selection criteria for Phase II (N = 92) participated in laboratory sessions that consisted of three different periods of drinking game play. Sixty-two percent (N = 57) of the sample was female. Data from these sessions were used to estimate the peak blood alcohol concentration (BAC) a participant would achieve if they consumed alcohol while participating in the SDGP. Total consumption and estimated BAC varied as a function of game type. The total consumption and estimated BAC obtained while playing Beer Pong and Memory varied significantly as a function of group. Total ounces consumed while playing Three Man varied significantly as a function of group; however, the variation in estimated BAC obtained while playing Three Man was not significant. Results indicated that estimated BACs were higher for female participants across game types. Previous experience playing the three drinking games had no impact on total drink consumption or estimated BAC obtained while participating in the SDGP. The present study demonstrated that the SDGP can be used to generate estimates of how much alcohol is consumed and the associated BAC during multiple types of drinking games. To fully examine whether previous experience factors into overall alcohol consumption and BAC, future research should extend the SDGP to incorporate laboratory administration of alcohol during drinking game participation. (c) 2011 APA, all rights reserved.
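The abstract does not state which estimation formula was used; a common choice for this kind of consumption-to-BAC estimate is the Widmark equation, sketched here purely as an assumption (the function name, distribution ratios, and elimination rate are illustrative, not the authors' values):

```python
def estimate_peak_bac(fluid_oz, abv, weight_lb, female, hours_elapsed):
    """Widmark-style peak BAC estimate (g/100 mL); an illustrative assumption.

    fluid_oz: total beverage volume consumed (US fluid ounces).
    abv: alcohol by volume as a fraction (e.g. 0.05 for beer).
    Widmark distribution ratio r: ~0.68 for males, ~0.55 for females.
    Elimination is approximated at 0.015 g/100 mL per hour.
    """
    alcohol_g = fluid_oz * 29.57 * abv * 0.789   # fluid oz -> mL -> g ethanol
    r = 0.55 if female else 0.68
    bac = alcohol_g / (weight_lb * 453.6 * r) * 100.0
    return max(bac - 0.015 * hours_elapsed, 0.0)

# Example: 24 oz of 5% beer over one hour, 160 lb male vs 130 lb female.
bac_male = estimate_peak_bac(24, 0.05, 160, False, 1.0)
bac_female = estimate_peak_bac(24, 0.05, 130, True, 1.0)
# The female estimate is higher, mirroring the sex difference reported above.
```

The lower distribution ratio and typically lower body weight are why equal consumption yields higher estimated BACs for female participants.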
Neonatal Jaundice Detection System.
Aydın, Mustafa; Hardalaç, Fırat; Ural, Berkan; Karap, Serhat
2016-07-01
Neonatal jaundice is a common condition that occurs in newborn infants in the first week of life. Current detection techniques require blood samples and other clinical testing with special equipment. The aim of this study is to create a non-invasive system that monitors and detects jaundice periodically and helps doctors with early diagnosis. In this work, first, a patient group consisting of jaundiced babies and a control group consisting of healthy babies were prepared: 40 jaundiced and 40 healthy newborns, between 24 and 48 h after birth, were chosen. Second, advanced image processing techniques were applied to images taken with a standard smartphone and a color calibration card. Segmentation, pixel similarity, and white balancing were used as the image processing techniques, and RGB values and the relevant pixel information were extracted. Third, during the feature extraction stage, using colormap transformations and feature calculation, comparisons were made in the RGB plane between color change values and a specially designed 8-color calibration card. Finally, in the bilirubin level estimation stage, kNN and SVR machine learning regressions were applied to the dataset obtained from feature extraction. With the control group as the basis for comparison, jaundice was successfully detected in the 40 jaundiced infants with a success rate of 85%. The obtained bilirubin estimates were consistent with the bilirubin results obtained from the standard blood test, with a compliance rate of 85%.
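The kNN regression stage can be sketched in plain numpy, assuming toy RGB-derived features and bilirubin labels (the actual study used features from calibrated smartphone images, which are not reproduced here):

```python
import numpy as np

def knn_regress(X_train, y_train, x_query, k=3):
    """Predict a bilirubin level as the mean label of the k nearest
    training samples in feature space (Euclidean distance)."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

# Toy data: mean skin-patch RGB (0-255) vs. blood bilirubin (mg/dL).
X = np.array([[200.0, 180.0, 120.0],   # yellowish skin, high bilirubin
              [205.0, 185.0, 125.0],
              [190.0, 170.0, 160.0],   # pinker skin, low bilirubin
              [195.0, 175.0, 165.0]])
y = np.array([14.0, 13.0, 4.0, 5.0])
pred = knn_regress(X, y, np.array([202.0, 182.0, 122.0]), k=2)
# The query falls in the yellowish cluster, so its two nearest neighbors
# are the high-bilirubin samples and the prediction is their mean.
```

SVR would replace the neighbor average with a kernelized regression fit, but the feature pipeline feeding either model is the same.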
2014-01-01
Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
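The traditional mono-exponential back-extrapolation step can be sketched as follows, with synthetic concentrations (the paper's contribution is precisely to replace this mono-exponential assumption with a physiological kinetic model, which is not reproduced here):

```python
import numpy as np

def plasma_volume_backextrap(times_min, conc_mg_per_l, dose_mg, weight_kg):
    """Fit log-concentration vs. time, extrapolate to t=0, and divide the
    injected dye dose by the extrapolated concentration: PV = dose / C(0)."""
    slope, intercept = np.polyfit(times_min, np.log(conc_mg_per_l), 1)
    c0 = np.exp(intercept)                 # concentration back-extrapolated to injection
    return dose_mg / c0 * 1000.0 / weight_kg   # plasma volume in mL per kg

# Synthetic mono-exponential kinetics: C(t) = 10 * exp(-0.08 t) mg/L,
# 25 mg ICG dose, 80 kg subject -> PV = 25/10 L = 2.5 L = 31.25 mL/kg.
t = np.array([2.0, 3.0, 4.0, 5.0])
c = 10.0 * np.exp(-0.08 * t)
pv_ml_per_kg = plasma_volume_backextrap(t, c, dose_mg=25.0, weight_kg=80.0)
```

The study's point is that real early-phase ICG kinetics are not mono-exponential, so the straight-line extrapolation in log space lands at too high a C(0) and therefore underestimates plasma volume.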
NASA Astrophysics Data System (ADS)
Song, Dawei; Ponte Castañeda, P.
2018-06-01
We make use of the recently developed iterated second-order homogenization method to obtain finite-strain constitutive models for the macroscopic response of porous polycrystals consisting of large pores randomly distributed in a fine-grained polycrystalline matrix. The porous polycrystal is modeled as a three-scale composite, where the grains are described by single-crystal viscoplasticity and the pores are assumed to be large compared to the grain size. The method makes use of a linear comparison composite (LCC) with the same substructure as the actual nonlinear composite, but whose local properties are chosen optimally via a suitably designed variational statement. In turn, the effective properties of the resulting three-scale LCC are determined by means of a sequential homogenization procedure, utilizing the self-consistent estimates for the effective behavior of the polycrystalline matrix, and the Willis estimates for the effective behavior of the porous composite. The iterated homogenization procedure allows for a more accurate characterization of the properties of the matrix by means of a finer "discretization" of the properties of the LCC to obtain improved estimates, especially at low porosities, high nonlinearities and high triaxialities. In addition, consistent homogenization estimates for the average strain rate and spin fields in the pores and grains are used to develop evolution laws for the substructural variables, including the porosity, pore shape and orientation, as well as the "crystallographic" and "morphological" textures of the underlying matrix. In Part II of this work (Song and Ponte Castañeda, 2018b), the model is used to generate estimates for both the instantaneous effective response and the evolution of the microstructure for porous FCC and HCP polycrystals under various loading conditions.
Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.
Dettmer, Jan; Dosso, Stan E; Osler, John C
2010-12-01
This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
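Annealed importance sampling for evidence computation can be sketched in a toy one-parameter setting where the true evidence is known analytically (the paper applies it to high-dimensional geoacoustic parameterizations; the Gaussian prior/likelihood pair below is purely illustrative):

```python
import numpy as np

def ais_log_evidence(log_prior, log_like, n_chains=2000, n_temps=50,
                     step=0.5, rng=None):
    """Annealed importance sampling estimate of the log normalizing constant
    log Z of prior*likelihood, annealing from the prior (beta=0) to the
    posterior (beta=1) with one Metropolis update per temperature."""
    rng = np.random.default_rng(0) if rng is None else rng
    betas = np.linspace(0.0, 1.0, n_temps)
    x = rng.normal(0.0, 1.0, n_chains)     # exact draws from the N(0,1) prior
    logw = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        logw += (b - b_prev) * log_like(x)  # weight update for new temperature
        # Metropolis step targeting prior * like**b to decorrelate the chains.
        prop = x + step * rng.normal(size=n_chains)
        log_acc = (log_prior(prop) + b * log_like(prop)
                   - log_prior(x) - b * log_like(x))
        accept = np.log(rng.random(n_chains)) < log_acc
        x = np.where(accept, prop, x)
    return np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()

# N(0,1) prior and N(x; 1, 1) likelihood: the true log-evidence is
# log N(1; 0, sqrt(2)) = -0.25 - 0.5*log(4*pi) ~ -1.5155.
lp = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
ll = lambda x: -0.5 * (x - 1.0)**2 - 0.5 * np.log(2 * np.pi)
log_z = ais_log_evidence(lp, ll)
```

Model selection then amounts to comparing the evidences of competing parameterizations, which automatically penalizes models whose extra parameters are not supported by the data.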
The effect of tracking network configuration on GPS baseline estimates for the CASA Uno experiment
NASA Technical Reports Server (NTRS)
Wolf, S. Kornreich; Dixon, T. H.; Freymueller, J. T.
1990-01-01
The effect of the tracking network on long (greater than 100 km) GPS baseline estimates was assessed using various subsets of the global tracking network initiated by the first Central and South America (CASA Uno) experiment. It was found that the best results could be obtained with a global tracking network consisting of three U.S. stations, two sites in the southwestern Pacific, and two sites in Europe. In comparison with smaller subsets, this global network improved the baseline repeatability, the resolution of carrier-phase cycle ambiguities, and the formal errors of the orbit estimates.
2014-01-01
Background As it becomes increasingly possible to obtain DNA sequences of orthologous genes from diverse sets of taxa, species trees are frequently being inferred from multilocus data. However, the behavior of many methods for performing this inference has remained largely unexplored. Some methods have been proven to be consistent given certain evolutionary models, whereas others rely on criteria that, although appropriate for many parameter values, have peculiar zones of the parameter space in which they fail to converge on the correct estimate as data sets increase in size. Results Here, using North American pines, we empirically evaluate the behavior of 24 strategies for species tree inference using three alternative outgroups (72 strategies total). The data consist of 120 individuals sampled in eight ingroup species from subsection Strobus and three outgroup species from subsection Gerardianae, spanning ∼47 kilobases of sequence at 121 loci. Each “strategy” for inferring species trees consists of three features: a species tree construction method, a gene tree inference method, and a choice of outgroup. We use multivariate analysis techniques such as principal components analysis and hierarchical clustering to identify tree characteristics that are robustly observed across strategies, as well as to identify groups of strategies that produce trees with similar features. We find that strategies that construct species trees using only topological information cluster together and that strategies that use additional non-topological information (e.g., branch lengths) also cluster together. Strategies that utilize more than one individual within a species to infer gene trees tend to produce estimates of species trees that contain clades present in trees estimated by other strategies. 
Strategies that use the minimize-deep-coalescences criterion to construct species trees tend to produce species tree estimates that contain clades that are not present in trees estimated by the Concatenation, RTC, SMRT, STAR, and STEAC methods, and that in general are more balanced than those inferred by these other strategies. Conclusions When constructing a species tree from a multilocus set of sequences, our observations provide a basis for interpreting differences in species tree estimates obtained via different approaches that have a two-stage structure in common, one step for gene tree estimation and a second step for species tree estimation. The methods explored here employ a number of distinct features of the data, and our analysis suggests that recovery of the same results from multiple methods that tend to differ in their patterns of inference can be a valuable tool for obtaining reliable estimates. PMID:24678701
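The multivariate comparison of strategies can be sketched with a plain-numpy PCA (the feature matrix below is illustrative; the study's actual tree characteristics, strategies, and clustering pipeline differ):

```python
import numpy as np

def pca_project(features, n_components=2):
    """Center a strategies-by-characteristics matrix and project it onto
    its leading principal components via SVD."""
    X = features - features.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)      # fraction of variance per component
    return X @ Vt[:n_components].T, explained[:n_components]

# Toy matrix: 6 strategies x 3 tree characteristics (e.g. a topology-based
# score, a branch-length-based score, and a tree-balance score), drawn from
# two strategy "families" plus small noise.
rng = np.random.default_rng(1)
base = np.array([[1.0, 0.0, 0.2], [0.0, 1.0, 0.8]])
F = np.vstack([base[i % 2] + 0.05 * rng.normal(size=3) for i in range(6)])
scores, explained = pca_project(F)
# Strategies from the same family land on the same side of the first PC,
# which is the kind of clustering structure the study reads off.
```

Hierarchical clustering on the same (or the full) feature matrix then groups strategies whose species-tree estimates share characteristics, matching the topology-only vs. branch-length split described above.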
DeGiorgio, Michael; Syring, John; Eckert, Andrew J; Liston, Aaron; Cronn, Richard; Neale, David B; Rosenberg, Noah A
2014-03-29
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
NASA Astrophysics Data System (ADS)
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
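The thermodynamic-consistency criterion can be sketched as a penalty comparing the equilibrium constant implied by the fitted rate constants with the one implied by the reaction's standard entropy and enthalpy, folded into the weighted single-criterion objective described above (the numbers are illustrative, not the NO + CO parameter set):

```python
import numpy as np

R_GAS = 8.314  # J mol^-1 K^-1

def thermo_penalty(k_f, k_r, dS, dH, T):
    """Squared mismatch between ln(K_eq) from kinetics (k_f/k_r) and from
    thermodynamics (dS/R - dH/(R*T))."""
    ln_k_kinetic = np.log(k_f / k_r)
    ln_k_thermo = dS / R_GAS - dH / (R_GAS * T)
    return (ln_k_kinetic - ln_k_thermo) ** 2

def total_objective(sse_conversion, pen_thermo, pen_entropy,
                    weights=(1.0, 1.0, 1.0)):
    """Weighted sum reducing the three criteria to a single objective."""
    return (weights[0] * sse_conversion
            + weights[1] * pen_thermo
            + weights[2] * pen_entropy)

# Rate constants chosen to match dS = -50 J/(mol K), dH = -120 kJ/mol at
# 700 K are thermodynamically consistent, so their penalty vanishes and the
# objective reduces to the conversion misfit alone.
T, dS, dH = 700.0, -50.0, -120e3
k_r = 1.0e3
k_f = k_r * np.exp(dS / R_GAS - dH / (R_GAS * T))
pen = thermo_penalty(k_f, k_r, dS, dH, T)
obj = total_objective(0.02, pen, 0.0, weights=(1.0, 10.0, 10.0))
```

An entropy-production criterion would add a third penalty of the same shape, comparing the entropy exchange rate with the steady-state entropy production rate.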
Geodetic and Astrometric Measurements with Very-Long-Baseline Interferometry. Ph.D. Thesis - MIT
NASA Technical Reports Server (NTRS)
Robertson, D. S.
1975-01-01
The use of very-long-baseline interferometry (VLBI) observations for the estimation of geodetic and astrometric parameters is discussed. Analytic models for the dependence of delay and delay rate on these parameters are developed and used for parameter estimation by the method of weighted least squares. Results are presented from approximately 15,000 delay and delay-rate observations, obtained in a series of nineteen VLBI experiments involving a total of five stations on two continents. The closure of baseline triangles is investigated and found to be consistent with the scatter of the various baseline-component results. Estimates are made of the wobble of the earth's pole and of the irregularities in the earth's rotation rate. Estimates are also made of the precession constant and of the vertical Love number, for which a value of 0.55 ± 0.05 was obtained.
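Weighted least squares, the estimation workhorse here, solves the normal equations with a weight matrix built from the observation uncertainties. A minimal numpy sketch on a toy linear model (not the VLBI delay model, whose partials are far more involved):

```python
import numpy as np

def weighted_least_squares(A, y, sigma):
    """Estimate x minimizing sum_i ((y_i - (A x)_i) / sigma_i)^2.

    Solves (A^T W A) x = A^T W y with W = diag(1/sigma^2), and returns the
    estimate together with its formal covariance (A^T W A)^-1.
    """
    W = np.diag(1.0 / np.asarray(sigma) ** 2)
    N = A.T @ W @ A
    x_hat = np.linalg.solve(N, A.T @ W @ y)
    return x_hat, np.linalg.inv(N)

# Toy "delay" model: y = x0 + x1 * t observed at four epochs, exact data.
t = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 0.5 * t
x_hat, cov = weighted_least_squares(A, y, sigma=[1.0, 1.0, 2.0, 2.0])
# Noise-free observations are reproduced exactly: x_hat ~ [2.0, 0.5].
```

The diagonal of the returned covariance is the source of "formal errors" of the kind quoted for the Love number above.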
NASA Astrophysics Data System (ADS)
Wolf, C.; Johnson, A. S.; Bilicki, M.; Blake, C.; Amon, A.; Erben, T.; Glazebrook, K.; Heymans, C.; Hildebrandt, H.; Joudaki, S.; Klaes, D.; Kuijken, K.; Lidman, C.; Marin, F.; Parkinson, D.; Poole, G.
2017-04-01
We present a new training set for estimating empirical photometric redshifts of galaxies, which was created as part of the 2-degree Field Lensing Survey project. This training set is located in a ∼700 deg2 area of the Kilo-Degree-Survey South field and is randomly selected and nearly complete at r < 19.5. We investigate the photometric redshift performance obtained with ugriz photometry from VST-ATLAS and W1/W2 from WISE, based on several empirical and template methods. The best redshift errors are obtained with kernel-density estimation (KDE), as are the lowest biases, which are consistent with zero within statistical noise. The 68th percentiles of the redshift scatter for magnitude-limited samples at r < (15.5, 17.5, 19.5) are (0.014, 0.017, 0.028). In this magnitude range, there are no known ambiguities in the colour-redshift map, consistent with a small rate of redshift outliers. In the fainter regime, the KDE method produces p(z) estimates per galaxy that represent unbiased and accurate redshift frequency expectations. The p(z) sum over any subsample is consistent with the true redshift frequency plus Poisson noise. Further improvements in redshift precision at r < 20 would mostly be expected from filter sets with narrower passbands to increase the sensitivity of colours to small changes in redshift.
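The KDE idea can be sketched as follows: weight the training galaxies by a Gaussian kernel in colour space and read a per-galaxy redshift frequency estimate p(z) off the weighted training redshifts (toy colours and bandwidth, not the survey's pipeline or its tuned kernel):

```python
import numpy as np

def kde_photoz(train_colors, train_z, query_color, bandwidth, z_grid_edges):
    """Per-galaxy p(z): kernel-weight the training set in colour space and
    histogram its redshifts with those weights; also return a point estimate."""
    d2 = np.sum((train_colors - query_color) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth**2)
    pz, _ = np.histogram(train_z, bins=z_grid_edges, weights=w, density=True)
    z_mean = np.sum(w * train_z) / np.sum(w)
    return pz, z_mean

# Toy training set: two colour clumps at distinct redshifts.
rng = np.random.default_rng(2)
colors = np.vstack([rng.normal([0.5, 0.2], 0.02, size=(50, 2)),
                    rng.normal([1.2, 0.8], 0.02, size=(50, 2))])
zs = np.concatenate([rng.normal(0.1, 0.005, 50), rng.normal(0.4, 0.005, 50)])
pz, z_hat = kde_photoz(colors, zs, np.array([0.5, 0.2]),
                       bandwidth=0.05, z_grid_edges=np.linspace(0.0, 0.6, 13))
# The query sits in the first clump, so z_hat lands near 0.1 and p(z)
# concentrates there.
```

Because each galaxy gets a full p(z) rather than a single number, summing p(z) over a subsample yields the redshift frequency expectation tested against the truth in the abstract.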
Baker, David R; Barron, Leon; Kasprzyk-Hordern, Barbara
2014-07-15
This paper presents, for the first time, a community-wide estimation of drug and pharmaceutical consumption in England using wastewater analysis and a large number of compounds. The groups of compounds studied included stimulants, hallucinogens and their metabolites, opioids, morphine derivatives, benzodiazepines, antidepressants and others. The results showed the usefulness of wastewater analysis for providing estimates of local community drug consumption. Where target compounds could be compared to NHS prescription statistics, good agreement was apparent between the two sets of data; these compounds include oxycodone, dihydrocodeine, methadone, tramadol, temazepam and diazepam. In contrast, discrepancies were observed for propoxyphene, codeine, dosulepin and venlafaxine (over-estimations in each case except codeine). Potential reasons for discrepancies include: sales of drugs without prescription and hence not included within NHS data, abuse of a drug trafficked through illegal sources, different consumption patterns in different areas, direct disposal leading to over-estimations when the parent compound is used as the drug target residue, and excretion factors not being representative of the local community. Notably, using a metabolite (rather than the parent drug) as a biomarker leads to higher certainty in the obtained estimates. With regard to illicit drugs, consistent and logical results were reported. Monitoring of these compounds over a one-week period highlighted the expected recreational use of many of these drugs (e.g. cocaine and MDMA) and the more consistent use of others (e.g. methadone). Copyright © 2014 Elsevier B.V. All rights reserved.
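The back-calculation behind such estimates is simple arithmetic: scale the measured biomarker concentration by the daily wastewater flow, correct for the excretion fraction and the parent-to-metabolite molar-mass ratio, and normalize by population. A hedged sketch with illustrative numbers (the correction factors vary by study and compound):

```python
def drug_consumption_mg_per_day_per_1000(conc_ng_per_l, flow_l_per_day,
                                         excretion_fraction, mw_parent,
                                         mw_metabolite, population):
    """Back-calculate community drug consumption from a wastewater
    biomarker concentration (e.g. benzoylecgonine for cocaine)."""
    load_mg = conc_ng_per_l * flow_l_per_day / 1e6          # ng -> mg per day
    parent_mg = load_mg / excretion_fraction * (mw_parent / mw_metabolite)
    return parent_mg / (population / 1000.0)

# Illustrative: 1000 ng/L benzoylecgonine, 1e8 L/day flow, 45% excreted as
# this metabolite, cocaine MW 303.35 vs benzoylecgonine MW 289.33,
# 500,000 inhabitants served by the treatment plant.
estimate = drug_consumption_mg_per_day_per_1000(1000.0, 1e8, 0.45,
                                                303.35, 289.33, 500000)
```

The excretion fraction in the denominator is also why metabolite-based biomarkers are preferred: direct disposal of the parent drug inflates its load but not the metabolite's.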
Planck 2015 results. XVII. Constraints on primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Arroja, F.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hamann, J.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Heavens, A.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huang, Z.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kim, J.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lacasa, F.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Marinucci, D.; Maris, M.; Martin, P. 
G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Münchmeyer, M.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Peiris, H. V.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Racine, B.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Shiraishi, M.; Smith, K.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutter, P.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Troja, A.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
The Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG). Using three classes of optimal bispectrum estimators - separable template-fitting (KSW), binned, and modal - we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result from temperature alone ƒlocalNL = 2.5 ± 5.7, ƒequilNL = -16 ± 70, and ƒorthoNL = -34 ± 32 (68% CL, statistical). Combining temperature and polarization data we obtain ƒlocalNL = 0.8 ± 5.0, ƒequilNL = -4 ± 43, and ƒorthoNL = -26 ± 21 (68% CL, statistical). The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. The effect of time-domain de-glitching systematics on the bispectrum is negligible. In spite of these test outcomes we conservatively label the results including polarization data as preliminary, owing to a known mismatch of the noise model in simulations and the data. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and derive constraints on early universe scenarios that generate primordial NG, including general single-field models of inflation, axion inflation, initial state modifications, models producing parity-violating tensor bispectra, and directionally dependent vector models. We present a wide survey of scale-dependent feature and resonance models, accounting for the "look elsewhere" effect in estimating the statistical significance of features. We also look for isocurvature NG, and find no signal, but we obtain constraints that improve significantly with the inclusion of polarization. 
The primordial trispectrum amplitude in the local model is constrained to be 𝓰localNL = (-0.9 ± 7.7) × 10^4 (68% CL, statistical), and we perform an analysis of trispectrum shapes beyond the local case. The global picture that emerges is one of consistency with the premises of the ΛCDM cosmology, namely that the structure we observe today was sourced by adiabatic, passive, Gaussian, and primordial seed perturbations.
Planck 2015 results: XVII. Constraints on primordial non-Gaussianity
Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...
2016-09-20
We report that the Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG). Using three classes of optimal bispectrum estimators – separable template-fitting (KSW), binned, and modal – we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result from temperature alone f_NL^local = 2.5 ± 5.7, f_NL^equil = -16 ± 70, and f_NL^ortho = -34 ± 32 (68% CL, statistical). Combining temperature and polarization data we obtain f_NL^local = 0.8 ± 5.0, f_NL^equil = -4 ± 43, and f_NL^ortho = -26 ± 21 (68% CL, statistical). The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. The effect of time-domain de-glitching systematics on the bispectrum is negligible. In spite of these test outcomes we conservatively label the results including polarization data as preliminary, owing to a known mismatch of the noise model in simulations and the data. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and derive constraints on early universe scenarios that generate primordial NG, including general single-field models of inflation, axion inflation, initial state modifications, models producing parity-violating tensor bispectra, and directionally dependent vector models. We present a wide survey of scale-dependent feature and resonance models, accounting for the “look elsewhere” effect in estimating the statistical significance of features. We also look for isocurvature NG, and find no signal, but we obtain constraints that improve significantly with the inclusion of polarization.
The primordial trispectrum amplitude in the local model is constrained to be g_NL^local = (-0.9 ± 7.7) × 10^4 (68% CL, statistical), and we perform an analysis of trispectrum shapes beyond the local case. The global picture that emerges is one of consistency with the premises of the ΛCDM cosmology, namely that the structure we observe today was sourced by adiabatic, passive, Gaussian, and primordial seed perturbations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishida, Hideshi, E-mail: ishida@me.es.osaka-u.ac.jp
2014-06-15
In this study, a family of local quantities defined on each partition, and its averaging over a macroscopic small region (site), are defined on a multibaker chain system. For the averaged quantities, a law of order estimation in the bulk system is proved, making it possible to estimate the order of the quantities with respect to the representative partition scale parameter Δ. Moreover, the form of the leading-order terms of the averaged quantities is obtained; this form enables us to take the macroscopic quantity in the continuum limit, as Δ → 0, and to confirm its partitioning independency. These deliverables fully explain the numerical results obtained by Ishida, consistent with irreversible thermodynamics.
Nagata, Tomohisa; Mori, Koji; Aratake, Yutaka; Ide, Hiroshi; Ishida, Hiromi; Nobori, Junichiro; Kojima, Reiko; Odagami, Kiminori; Kato, Anna; Tsutsumi, Akizumi; Matsuda, Shinya
2014-01-01
The aim of the present study was to develop standardized cost estimation tools that provide information to employers about occupational safety and health (OSH) activities for effective and efficient decision making in Japanese companies. We interviewed OSH staff members, including full-time professional occupational physicians, to list all OSH activities. Using activity-based costing, cost data were obtained from retrospective 1-year analyses of occupational safety and health costs in three manufacturing workplaces and of occupational health services costs in four manufacturing workplaces. We additionally verified the tools in four workplaces, including service businesses. We created the OSH and occupational health standardized cost estimation tools. OSH costs consisted of personnel costs, expenses, outsourcing costs and investments for 15 OSH activities. The tools provided accurate, relevant information on OSH activities and occupational health services. The standardized information obtained from our OSH and occupational health cost estimation tools can be used to manage OSH costs, make comparisons of OSH costs between companies and organizations and help occupational health physicians and employers to determine the best course of action.
NASA Technical Reports Server (NTRS)
Dugan, Duane W.
1959-01-01
The possibility of obtaining useful estimates of the static longitudinal stability of aircraft flying at high supersonic Mach numbers at angles of attack between 0 and +/-180 deg is explored. Existing theories, empirical formulas, and graphical procedures are employed to estimate the normal-force and pitching-moment characteristics of an example airplane configuration consisting of an ogive-cylinder body, trapezoidal wing, and cruciform trapezoidal tail. Existing wind-tunnel data for this configuration at a Mach number of 6.86 provide an evaluation of the estimates up to an angle of attack of 35 deg. Evaluation at higher angles of attack is afforded by data obtained from wind-tunnel tests made with the same configuration at angles of attack between 30 and 150 deg at five Mach numbers between 2.5 and 3.55. Over the ranges of Mach numbers and angles of attack investigated, predictions of normal force and center-of-pressure locations for the configuration considered agree well with those obtained experimentally, particularly at the higher Mach numbers.
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, indicating that maximum likelihood estimation is an asymptotically unbiased estimator. Moreover, as the sample size increases, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative effect between rubber price and exchange rate for all selected countries.
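As an aside, the maximum-likelihood fit of a two-component univariate Gaussian mixture is typically computed with the EM algorithm. The sketch below is a minimal generic implementation in Python with NumPy; the initialisation and stopping rule are simplifying assumptions, and it makes no claim about the authors' actual model for rubber prices and exchange rates.

```python
import numpy as np

def norm_pdf(x, mu, s):
    """Univariate normal density."""
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def em_two_component(x, iters=200, tol=1e-8):
    """Fit a two-component Gaussian mixture by maximum likelihood via EM.

    Returns (weight of component 1, (mu1, sigma1), (mu2, sigma2)).
    A minimal sketch: crude initialisation at the data extremes and a
    simple stopping rule on the mixing weight.
    """
    x = np.asarray(x, dtype=float)
    w = 0.5
    mu1, mu2 = x.min(), x.max()
    s1 = s2 = x.std() + 1e-6
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        p1 = w * norm_pdf(x, mu1, s1)
        p2 = (1 - w) * norm_pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: responsibility-weighted ML updates
        w_new = r.mean()
        mu1 = (r * x).sum() / r.sum()
        mu2 = ((1 - r) * x).sum() / (1 - r).sum()
        s1 = np.sqrt((r * (x - mu1) ** 2).sum() / r.sum()) + 1e-9
        s2 = np.sqrt(((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum()) + 1e-9
        if abs(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w, (mu1, s1), (mu2, s2)
```

The E-step computes posterior component memberships; the M-step performs weighted maximum-likelihood updates, so each iteration cannot decrease the likelihood.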
Nonparametric estimation of plant density by the distance method
Patil, S.A.; Burnham, K.P.; Kovner, J.L.
1979-01-01
A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
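The paper's nonparametric order-statistics estimator is not reproduced here; as a point of reference, the classic parametric benchmark it generalises can be sketched. For a completely random (Poisson) pattern of intensity λ, the squared point-to-nearest-plant distance is exponential with rate λπ, giving the unbiased estimator λ̂ = (n - 1) / (π Σ d_i²) from n random sample points.

```python
import numpy as np

def poisson_density_estimate(distances):
    """Plant-density estimate from point-to-nearest-plant distances,
    assuming a homogeneous Poisson (completely random) pattern.
    Uses the unbiased form lambda_hat = (n - 1) / (pi * sum(d_i**2)).
    This is the standard parametric benchmark, not the paper's
    nonparametric estimator.
    """
    d = np.asarray(distances, dtype=float)
    n = d.size
    return (n - 1) / (np.pi * np.sum(d ** 2))
```

For aggregated or regular populations this parametric form is biased, which is precisely the motivation for a distribution-free alternative.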
Multitaper spectral analysis of atmospheric radar signals
NASA Astrophysics Data System (ADS)
Anandan, V.; Pan, C.; Rajalakshmi, T.; Ramachandra Reddy, G.
2004-11-01
Multitaper spectral analysis using sinusoidal tapers has been carried out on the backscattered signals received from the troposphere and lower stratosphere by the Gadanki Mesosphere-Stratosphere-Troposphere (MST) radar under various signal-to-noise ratio conditions. A comparison is made between the third-order sinusoidal multitaper estimate and single-taper estimates using Hanning and rectangular tapers, to understand the relative merits of each processing scheme. Power spectra plots show that echoes are better identified with the multitaper estimate, especially in regions of weak signal-to-noise ratio. Further analysis is carried out to obtain the three lowest-order moments from the three estimation techniques. The results show that multitaper analysis gives a better signal-to-noise ratio, i.e., higher detectability. The spectral estimates from the multitaper and single tapers were also examined for consistency of measurement; the multitaper estimate is more consistent in Doppler measurements than the single-taper estimates. Doppler width measurements with the different approaches show that the multitaper technique is better in terms of temporal resolution and estimation accuracy.
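A multitaper spectrum with sinusoidal tapers can be sketched in a few lines: the K orthonormal sine tapers (in the Riedel-Sidorenko form) are applied to the series and the resulting eigenspectra are averaged. This is a generic illustration of the technique, not the Gadanki radar processing chain.

```python
import numpy as np

def sine_tapers(N, K):
    """K orthonormal sinusoidal tapers of length N:
    v_k(t) = sqrt(2/(N+1)) * sin(pi * k * t / (N+1)), t = 1..N.
    """
    t = np.arange(1, N + 1)
    return np.array([np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * k * t / (N + 1))
                     for k in range(1, K + 1)])

def multitaper_psd(x, K=3):
    """Average K tapered periodograms (eigenspectra) to reduce the
    variance of the spectral estimate relative to a single taper."""
    x = np.asarray(x, dtype=float)
    tapers = sine_tapers(x.size, K)          # shape (K, N)
    eig = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return eig.mean(axis=0)
```

Averaging K roughly independent eigenspectra cuts the estimator variance by about 1/K, which is what improves detectability of weak echoes.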
Houser, Dorian S; Finneran, James J
2006-09-01
Variable stimulus presentation methods are used in auditory evoked potential (AEP) estimates of cetacean hearing sensitivity, each of which might affect stimulus reception and hearing threshold estimates. This study quantifies differences in underwater hearing thresholds obtained by AEP and behavioral means. For AEP estimates, a transducer embedded in a suction cup (jawphone) was coupled to the dolphin's lower jaw for stimulus presentation. Underwater AEP thresholds were obtained for three dolphins in San Diego Bay and for one dolphin in a quiet pool. Thresholds were estimated from the envelope following response at carrier frequencies ranging from 10 to 150 kHz. One animal, with an atypical audiogram, demonstrated significantly greater hearing loss in the right ear than in the left. Across test conditions, the range and average difference between AEP and behavioral threshold estimates were consistent with published comparisons between underwater behavioral and in-air AEP thresholds. AEP thresholds for one animal obtained in-air and in a quiet pool demonstrated a range of differences of -10 to 9 dB (mean = 3 dB). Results suggest that for the frequencies tested, the presentation of sound stimuli through a jawphone, underwater and in-air, results in acceptable differences to AEP threshold estimates.
A new global 1-km dataset of percentage tree cover derived from remote sensing
DeFries, R.S.; Hansen, M.C.; Townshend, J.R.G.; Janetos, A.C.; Loveland, Thomas R.
2000-01-01
Accurate assessment of the spatial extent of forest cover is a crucial requirement for quantifying the sources and sinks of carbon from the terrestrial biosphere. In the more immediate context of the United Nations Framework Convention on Climate Change, implementation of the Kyoto Protocol calls for estimates of carbon stocks for a baseline year as well as for subsequent years. Data sources from country level statistics and other ground-based information are based on varying definitions of 'forest' and are consequently problematic for obtaining spatially and temporally consistent carbon stock estimates. By combining two datasets previously derived from the Advanced Very High Resolution Radiometer (AVHRR) at 1 km spatial resolution, we have generated a prototype global map depicting percentage tree cover and associated proportions of trees with different leaf longevity (evergreen and deciduous) and leaf type (broadleaf and needleleaf). The product is intended for use in terrestrial carbon cycle models, in conjunction with other spatial datasets such as climate and soil type, to obtain more consistent and reliable estimates of carbon stocks. The percentage tree cover dataset is available through the Global Land Cover Facility at the University of Maryland at http://glcf.umiacs.umd.edu.
Influence of sectioning location on age estimates from common carp dorsal spines
Watkins, Carson J.; Klein, Zachary B.; Terrazas, Marc M.; Quist, Michael C.
2015-01-01
Dorsal spines have been shown to provide precise age estimates for Common Carp Cyprinus carpio and are commonly used by management agencies to gain information on Common Carp populations. However, no previous studies have evaluated variation in the precision of age estimates obtained from different sectioning locations along Common Carp dorsal spines. We evaluated the precision, relative readability, and distribution of age estimates obtained from various sectioning locations along Common Carp dorsal spines. Dorsal spines from 192 Common Carp were sectioned at the base (section 1), immediately distal to the basal section (section 2), and at 25% (section 3), 50% (section 4), and 75% (section 5) of the total length of the dorsal spine. The exact agreement and within-1-year agreement among readers was highest and the coefficient of variation lowest for section 2. In general, age estimates derived from sections 2 and 3 had similar age distributions and displayed the highest concordance in age estimates with section 1. Our results indicate that sections taken at ≤ 25% of the total length of the dorsal spine can be easily interpreted and provide precise estimates of Common Carp age. The greater consistency in age estimates obtained from section 2 indicates that by using a standard sectioning location, fisheries scientists can expect age-based estimates of population metrics to be more comparable and thus more useful for understanding Common Carp population dynamics.
PHYSICAL COAL-CLEANING/FLUE GAS DESULFURIZATION COMPUTER MODEL
The model consists of four programs: (1) one, initially developed by Battelle-Columbus Laboratories, obtained from Versar, Inc.; (2) one developed by TVA; and (3,4) two developed by TVA and Bechtel National, Inc. The model produces design performance criteria and estimates of capi...
... status of the civilian noninstitutionalized U.S. population. The survey consists of interviews conducted in participants' homes and standardized physical examinations in mobile examination centers. The sample design includes oversampling to obtain reliable estimates of health ...
NASA Astrophysics Data System (ADS)
Lizcano-Hernández, Edgar G.; Nicolás-López, Rubén; Valdiviezo-Mijangos, Oscar C.; Meléndez-Martínez, Jaime
2018-04-01
The brittleness indices (BI) of gas-shales are computed using their effective mechanical properties obtained from micromechanical self-consistent modeling, with the purpose of assisting in the identification of the more-brittle regions in shale-gas reservoirs, i.e., the so-called ‘pay zone’. The obtained BI are plotted in lambda-rho versus mu-rho (λρ-μρ) and Young’s modulus versus Poisson’s ratio (E-ν) ternary diagrams along with the elastic properties estimated from log data of three productive shale-gas wells where the pay zone is already known. A quantitative comparison between the obtained BI and the well log data allows for the delimitation of regions where BI values could indicate the best reservoir target, i.e., the regions with the highest shale-gas exploitation potential. Therefore, a range of values for elastic properties and brittleness indices is obtained that can be used as a data source to support the well placement procedure.
Student Effort and Performance over the Semester
ERIC Educational Resources Information Center
Krohn, Gregory A.; O'Connor, Catherine M.
2005-01-01
The authors extend the standard education production function and student time allocation analysis to focus on the interactions between student effort and performance over the semester. The purged instrumental variable technique is used to obtain consistent estimators of the structural parameters of the model using data from intermediate…
2010-02-01
vertical component records in a six-second window starting near the Lg detection time. Because our signal measurements are taken from the broadband...from the 2009 test. That is, comparable Love waves may have been generated by the 2006 test, but not at detectable levels. Secondary tectonic...kt., respectively. Relative yield estimates based on Lg observations from the two tests are generally consistent with the yield estimates obtained
Empirical Green's function analysis: Taking the next step
Hough, S.E.
1997-01-01
An extension of the empirical Green's function (EGF) method is presented that involves determination of source parameters using standard EGF deconvolution, followed by inversion for a common attenuation parameter for a set of colocated events. Recordings of three or more colocated events can thus be used to constrain a single path attenuation estimate. I apply this method to recordings from the 1995-1996 Ridgecrest, California, earthquake sequence; I analyze four clusters consisting of 13 total events with magnitudes between 2.6 and 4.9. I first obtain corner frequencies, which are used to infer Brune stress drop estimates. I obtain stress drop values of 0.3-53 MPa (with all but one between 0.3 and 11 MPa), with no resolved increase of stress drop with moment. With the corner frequencies constrained, the inferred attenuation parameters are very consistent; they imply an average shear wave quality factor of approximately 20-25 for alluvial sediments within the Indian Wells Valley. Although the resultant spectral fitting (using corner frequency and κ) is good, the residuals are consistent among the clusters analyzed. Their spectral shape is similar to the theoretical one-dimensional response of a layered low-velocity structure in the valley (an absolute site response cannot be determined by this method, because of an ambiguity between absolute response and source spectral amplitudes). I show that even this subtle site response can significantly bias estimates of corner frequency and κ, if it is ignored in an inversion for only source and path effects. The multiple-EGF method presented in this paper is analogous to a joint inversion for source, path, and site effects; the use of colocated sets of earthquakes appears to offer significant advantages in improving resolution of all three estimates, especially if data are from a single site or sites with similar site response.
Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions
Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.
2012-01-01
In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661
NASA Technical Reports Server (NTRS)
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates, obtained by a recursive stochastic algorithm, of the inverse of the filter input-data covariance matrix. The SIR performance with sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
Copula-based analysis of rhythm
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.; Viola, M. L. Lanfredi
2016-06-01
In this paper we establish stochastic profiles of the rhythm for three languages: English, Japanese and Spanish. We model the increase or decrease of the acoustical energy, collected into three bands coming from the acoustic signal. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain. In this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achievable using standard procedures. The new strategy consists of obtaining a partition of the state space, constructed from a combination of the partitions corresponding to the three marginal processes, one for each energy band, and the partition coming from the multivariate Markov chain. All the partitions are then linked using a copula in order to estimate the transition probabilities.
Comparing geophysical measurements to theoretical estimates for soil mixtures at low pressures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wildenschild, D; Berge, P A; Berryman, K G
1999-01-15
The authors obtained good estimates of measured velocities of sand-peat samples at low pressures by using a theoretical method, the self-consistent theory of Berryman (1980), using sand and porous peat to represent the microstructure of the mixture. They were unable to obtain useful estimates with several other theoretical approaches, because the properties of the quartz, air and peat components of the samples vary over several orders of magnitude. Methods that are useful for consolidated rock cannot be applied directly to unconsolidated materials. Instead, careful consideration of microstructure is necessary to adapt the methods successfully. Future work includes comparison of the measured velocity values to additional theoretical estimates, investigation of Vp/Vs ratios and wave amplitudes, as well as modeling of dry and saturated sand-clay mixtures (e.g., Bonner et al., 1997, 1998). The results suggest that field data can be interpreted by comparing laboratory measurements of soil velocities to theoretical estimates of velocities in order to establish a systematic method for predicting velocities for a full range of sand-organic material mixtures at various pressures. Once the theoretical relationship is obtained, it can be used to estimate the soil composition at various depths from field measurements of seismic velocities. Additional refinement of the method for relating velocities to soil characteristics is useful for developing inversion algorithms.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Multiple imputation for cure rate quantile regression with censored data.
Wu, Yuanshan; Yin, Guosheng
2017-03-01
The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration. © 2016, The International Biometric Society.
Determination of HART I Blade Structural Properties by Laboratory Testing
NASA Technical Reports Server (NTRS)
Jung, Sung N.; Lau, Benton H.
2012-01-01
The structural properties of Higher harmonic Aeroacoustic Rotor Test (HART I) blades were measured using the original set of blades tested in the German-Dutch Wind Tunnel (DNW) in 1994. The measurements include bending and torsion stiffness, geometric offsets, and mass and inertia properties of the blade. The measured properties were compared to the estimated values obtained initially from the blade manufacturer. The previously estimated blade properties showed consistently higher stiffness, up to 30 percent for flap bending in the blade inboard root section.
Estimation of two ordered mean residual lifetime functions.
Ebrahimi, N
1993-06-01
In many statistical studies involving failure data, biometric mortality data, and actuarial data, mean residual lifetime (MRL) function is of prime importance. In this paper we introduce the problem of nonparametric estimation of a MRL function on an interval when this function is bounded from below by another such function (known or unknown) on that interval, and derive the corresponding two functional estimators. The first is to be used when there is a known bound, and the second when the bound is another MRL function to be estimated independently. Both estimators are obtained by truncating the empirical estimator discussed by Yang (1978, Annals of Statistics 6, 112-117). In the first case, it is truncated at a known bound; in the second, at a point somewhere between the two empirical estimates. Consistency of both estimators is proved, and a pointwise large-sample distribution theory of the first estimator is derived.
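For complete (uncensored) data, the empirical MRL estimator of Yang (1978) and its truncation at a known lower bound, the paper's first case, can be sketched as follows. Censoring and the second (two-sample) estimator are omitted.

```python
import numpy as np

def empirical_mrl(samples, t):
    """Empirical mean residual lifetime e(t) = E[X - t | X > t] for
    complete (uncensored) data: the mean of the exceedances over t."""
    x = np.asarray(samples, dtype=float)
    tail = x[x > t]
    if tail.size == 0:
        return 0.0
    return tail.mean() - t

def bounded_mrl(samples, t, lower_bound):
    """Sketch of the paper's first estimator: truncate the empirical
    MRL at a known lower-bound value g(t) at time t."""
    return max(empirical_mrl(samples, t), lower_bound)
```

For example, an exponential distribution with unit rate has constant MRL e(t) = 1, so the empirical estimate at any t should be close to 1 for a large sample, and the bounded version simply clips it from below.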
Analytical study to define a helicopter stability derivative extraction method, volume 1
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1973-01-01
A method is developed for extracting six degree-of-freedom stability and control derivatives from helicopter flight data. Different combinations of filtering and derivative estimate are investigated and used with a Bayesian approach for derivative identification. The combination of filtering and estimate found to yield the most accurate time response match to flight test data is determined and applied to CH-53A and CH-54B flight data. The method found to be most accurate consists of (1) filtering flight test data with a digital filter, followed by an extended Kalman filter (2) identifying a derivative estimate with a least square estimator, and (3) obtaining derivatives with the Bayesian derivative extraction method.
An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers
Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.
2016-01-01
Here we present a new empirical method to estimate the SCL for marine-terminating glaciers using high-resolution observations. We use the empirically-determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.
Estimation of Supercapacitor Energy Storage Based on Fractional Differential Equations.
Kopka, Ryszard
2017-12-22
In this paper, new results are presented on using only voltage measurements at the supercapacitor terminals for estimation of the accumulated energy. For this purpose, a study based on the application of fractional-order models of supercapacitor charging/discharging circuits is undertaken. Parameter estimates of the models are then used to assess the amount of energy accumulated in the supercapacitor. The obtained results are compared with the energy determined experimentally by measuring voltage and current at the supercapacitor terminals. All the tests are repeated for various input signal shapes and parameters. The very high consistency between the estimated and experimental results fully confirms the suitability of the proposed approach and thus the applicability of fractional calculus to the modelling of supercapacitor energy storage.
A new experimental correlation for non-Newtonian behavior of COOH-DWCNTs/antifreeze nanofluid
NASA Astrophysics Data System (ADS)
Izadi, Farhad; Ranjbarzadeh, Ramin; Kalbasi, Rasool; Afrand, Masoud
2018-04-01
In this paper, the rheological behavior of a nano-antifreeze consisting of 50 vol.% water, 50 vol.% ethylene glycol and different quantities of functionalized double-walled carbon nanotubes has been investigated experimentally. Initially, nano-antifreeze samples were prepared with solid volume fractions of 0.05, 0.1, 0.2, 0.4, 0.6, 0.8 and 1% using the two-step method. Then, the dynamic viscosity of the nano-antifreeze samples was measured at different shear rates and temperatures. At this stage, the results showed that the base fluid had Newtonian behavior, while the behavior of all nano-antifreeze samples was non-Newtonian. Since the behavior of the samples was similar to the power-law model, its constants, the consistency index and the power-law index, were determined. Using the measured viscosities and shear rates, the consistency index and power-law index were obtained by curve fitting. The obtained values showed that the consistency index increased with increasing volume fraction and decreased with increasing temperature. Besides, the power-law index was less than 1 for all samples, indicating shear-thinning behavior. Lastly, new correlations were suggested to estimate the consistency index and power-law index from the curve fits.
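The curve-fitting step for the Ostwald-de Waele power-law model, tau = K * gamma_dot**n, reduces to linear least squares in log-log coordinates. The sketch below is a generic illustration of that step, not the authors' exact fitting procedure.

```python
import numpy as np

def fit_power_law(shear_rate, shear_stress):
    """Fit tau = K * gamma_dot**n by linear least squares on
    log(tau) = log(K) + n * log(gamma_dot); returns (K, n).
    n < 1 indicates shear-thinning, n > 1 shear-thickening."""
    g = np.log(np.asarray(shear_rate, dtype=float))
    tau = np.log(np.asarray(shear_stress, dtype=float))
    n, logK = np.polyfit(g, tau, 1)   # slope = n, intercept = log(K)
    return np.exp(logK), n
```

Because the model is linear in log-log space, noiseless synthetic data are recovered exactly, and for measured data the fit gives K (consistency index) and n (power-law index) directly.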
WISDOM project - I. Black hole mass measurement using molecular gas kinematics in NGC 3665
NASA Astrophysics Data System (ADS)
Onishi, Kyoko; Iguchi, Satoru; Davis, Timothy A.; Bureau, Martin; Cappellari, Michele; Sarzi, Marc; Blitz, Leo
2017-07-01
As a part of the mm-Wave Interferometric Survey of Dark Object Masses (WISDOM) project, we present an estimate of the mass of the supermassive black hole (SMBH) in the nearby fast-rotator early-type galaxy NGC 3665. We obtained Combined Array for Research in Millimeter-wave Astronomy (CARMA) B and C array observations of the ^12CO(J = 2-1) emission line with a combined angular resolution of 0.59 arcsec. We analysed and modelled the three-dimensional molecular gas kinematics, obtaining a best-fitting SMBH mass M_BH = 5.75^{+1.49}_{-1.18} × 10^8 M⊙, an H-band mass-to-light ratio (M/L)_H = 1.45 ± 0.04 (M/L)⊙,H, and other parameters describing the geometry of the molecular gas disc (statistical errors, all at 3σ confidence). We estimate the systematic uncertainty on the stellar M/L to be ≈0.2 (M/L)⊙,H, and that on the SMBH mass to be ≈0.4 × 10^8 M⊙. The measured SMBH mass is consistent with that estimated from the latest correlations with galaxy properties. Following our earlier work, we also analysed and modelled the kinematics using only the major-axis position-velocity diagram, and conclude that the two methods are consistent.
Possibilities for Estimating Horizontal Electrical Currents in Active Regions on the Sun
NASA Astrophysics Data System (ADS)
Fursyak, Yu. A.; Abramenko, V. I.
2017-12-01
Part of the "free" magnetic energy associated with electrical current systems in an active region (AR) is released during solar flares. This proposition is widely accepted and has stimulated interest in detecting electrical currents in active regions. The vertical component of an electric current in the photosphere can be found by observing the transverse magnetic field. At present, however, there are no direct methods for calculating transverse electric currents from these observations: such calculations require information on the field vector measured simultaneously at several levels in the photosphere, which has not yet been achieved with solar instrumentation. In this paper we examine an approach to calculating the structure of the square of the density of a transverse electrical current based on a magnetogram of the vertical component of the magnetic field in the AR. Data obtained with the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) for NOAA AR 11283 are used. It is shown that (1) the observed variations in the magnetic field of a sunspot and the proposed estimate of the density of an annular horizontal current around the spot are consistent with Faraday's law and (2) the resulting estimates of the magnitude of the square of the density of the horizontal current, j_⊥^2 = (0.002-0.004) A^2/m^4, are consistent with previously obtained values of the density of a vertical current in the photosphere. Thus, the proposed estimate is physically significant and this method can be used to estimate the density and structure of transverse electrical currents in the photosphere.
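The vertical current component that the abstract contrasts with the transverse one is conventionally obtained from Ampère's law applied to a transverse-field magnetogram, j_z = (∂B_y/∂x - ∂B_x/∂y)/μ0. A minimal finite-difference sketch; the grid spacing and test field are hypothetical, not HMI data:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def vertical_current_density(bx, by, dx, dy):
    """Vertical current density j_z = (dBy/dx - dBx/dy) / mu0 (Ampere's law),
    from transverse-field components sampled on a regular grid."""
    dby_dx = np.gradient(by, dx, axis=1)
    dbx_dy = np.gradient(bx, dy, axis=0)
    return (dby_dx - dbx_dy) / MU0

# Test field: Bx = -c*y, By = c*x gives a uniform j_z = 2c/mu0
dx = dy = 100.0                     # grid spacing, m (hypothetical)
y, x = np.mgrid[0:32, 0:32] * dx
c = 1e-7                            # T/m (hypothetical gradient)
jz = vertical_current_density(-c * y, c * x, dx, dy)
```

Since the test field is linear, the central differences recover the uniform current density exactly.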
New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data
NASA Astrophysics Data System (ADS)
Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.
2009-11-01
A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. Conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, the agreement is excellent when the QML estimates are used as the fiducial power spectrum instead of the best-fitting theoretical power spectrum. From the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C^TT_{l=2} is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity-odd signals TB and EB are found to be consistent with zero.
Single point estimation of phenytoin dosing: a reappraisal.
Koup, J R; Gibaldi, M; Godolphin, W
1981-11-01
A previously proposed method for estimating the phenytoin dosing requirement using a single serum sample obtained 24 hours after an intravenous loading dose (18 mg/kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/kg), simulations indicate that the proposed method will fail to consistently predict dosage requirements. Additional simulations indicate that two samples obtained during the 24 hour interval following the iv loading dose could be used to predict the phenytoin dose requirement more reliably. Because of the nonlinear relationship that exists between the phenytoin dose administration rate (RO) and the mean steady-state serum concentration (CSS), small errors in prediction of the required RO result in much larger errors in CSS.
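The error amplification in the last sentence follows from Michaelis-Menten elimination, for which CSS = Km·RO/(Vmax - RO). A small sketch; the Km and Vmax values are illustrative population figures, not patient-specific parameters from the paper:

```python
def css_at_steady_state(ro, vmax=7.0, km=4.0):
    """Mean steady-state concentration (mg/L) for dosing rate ro (mg/kg/day)
    under Michaelis-Menten elimination: CSS = Km*RO / (Vmax - RO).
    vmax and km here are illustrative values only."""
    if ro >= vmax:
        raise ValueError("dosing rate must be below Vmax")
    return km * ro / (vmax - ro)

ro = 5.0                                    # mg/kg/day
css_low = css_at_steady_state(ro)
css_high = css_at_steady_state(1.1 * ro)    # a 10% overestimate of RO...
rel_error = css_high / css_low - 1.0        # ...inflates CSS much more
```

With these numbers a 10% error in RO produces well over a 40% error in CSS, because CSS diverges as RO approaches Vmax.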
NASA Astrophysics Data System (ADS)
Zarifi, Keyvan; Gershman, Alex B.
2006-12-01
We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.
Honest Importance Sampling with Multiple Markov Chains
Tan, Aixin; Doss, Hani; Hobert, James P.
2017-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. 
The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
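The iid building block of the method above, an importance sampling estimate together with its CLT-based standard error, can be sketched as follows (target, proposal, and sample size are illustrative; the regenerative machinery for the MCMC case is beyond a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal_pdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

# Estimate E_pi[X^2] = 1 for target pi = N(0,1) from an iid sample drawn
# from the proposal pi1 = N(0, 2^2); the weights are pi/pi1.
n = 200_000
x = rng.normal(0.0, 2.0, size=n)
w = np.exp(log_normal_pdf(x, 0.0, 1.0) - log_normal_pdf(x, 0.0, 2.0))
vals = w * x ** 2
estimate = vals.mean()                      # strongly consistent for 1
std_err = vals.std(ddof=1) / np.sqrt(n)     # routine standard error (iid CLT)
```

The heavier-tailed proposal keeps both moment conditions satisfied, which is exactly what the simple standard-error formula relies on.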
Lim, Chun Yi; Law, Mary; Khetani, Mary; Rosenbaum, Peter; Pollock, Nancy
2018-08-01
To estimate the psychometric properties of a culturally adapted version of the Young Children's Participation and Environment Measure (YC-PEM) for use among Singaporean families, a prospective cohort study was conducted. Caregivers of 151 Singaporean children with (n = 83) and without (n = 68) developmental disabilities, aged 0 to 7 years, completed the YC-PEM (Singapore) questionnaire, comprising 3 participation scales (frequency, involvement, and change desired) and 1 environment scale, for three settings: home, childcare/preschool, and community. Setting-specific estimates of internal consistency, test-retest reliability, and construct validity were obtained. Internal consistency estimates varied from .59 to .92 for the participation scales and from .73 to .79 for the environment scale. Test-retest reliability estimates from the YC-PEM administered on two occasions, 2-3 weeks apart, varied from .39 to .89 for the participation scales and from .65 to .80 for the environment scale. Moderate to large differences were found in participation and perceived environmental support between children with and without a disability. The YC-PEM (Singapore) scales have adequate psychometric properties, except for low internal consistency for the childcare/preschool participation frequency scale and low test-retest reliability for the home participation frequency scale. The YC-PEM (Singapore) may be used for population-level studies involving young children with and without developmental disabilities.
USDA-ARS?s Scientific Manuscript database
High frequency in situ measurements of nitrate can greatly reduce the uncertainty in nitrate flux estimates. Water quality databases maintained by various federal and state agencies often consist of pollutant concentration data obtained from periodic grab samples collected from gauged reaches of a s...
Alcázar-Córcoles, Miguel Á; Verdejo-García, Antonio; Bouso-Sáiz, José C
2016-01-01
The objective of the present study was the validation and scaling of Plutchik's Violence Risk Scale (EV) in the adolescent Spanish-speaking population. For this purpose, a sample of adolescents from El Salvador, Mexico and Spain was obtained. The sample consisted of 1035 participants with a mean age of 16.2 years: 450 adolescents from a forensic population (those who had committed a crime) and 585 adolescents from the general population (no crime committed). The internal consistency of the EV was estimated with Cronbach's alpha coefficient, yielding a value of 0.782. As for validity, the factor structures found explain a large proportion of the variance (53.385%); convergent validity was estimated from the correlations between the dimensions found, the EV, and sociodemographic, criminological and personality variables. The developed scales are presented, for the first time in a cross-cultural sample, differentiated by gender and continent. Consequently, the obtained results suggest that the EV is a valid and reliable instrument for the adolescent Spanish-speaking population. Furthermore, it is a quick, easy-to-apply scale, which is valuable in forensic assessment.
NASA Astrophysics Data System (ADS)
Ran, J.; Ditmar, P.; Klees, R.; Farahani, H. H.
2018-03-01
We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted into mass anomalies per mascon. The limited spectral content of the gravity anomalies is properly accounted for by applying a low-pass filter as part of the inversion procedure to make the functional model spectrally consistent with the data. The full error covariance matrices of the monthly GRACE solutions are properly propagated using the law of covariance propagation. Using numerical experiments, we demonstrate the importance of a proper data weighting and of the spectral consistency between functional model and data. The developed methodology is applied to process real GRACE level-2 data (CSR RL05). The obtained mass anomaly estimates are integrated over five drainage systems, as well as over entire Greenland. We find that the statistically optimal data weighting reduces random noise by 35-69%, depending on the drainage system. The obtained mass anomaly time-series are de-trended to eliminate the contribution of ice discharge and are compared with de-trended surface mass balance (SMB) time-series computed with the Regional Atmospheric Climate Model (RACMO 2.3). We show that when using a statistically optimal data weighting in GRACE data processing, the discrepancies between GRACE-based estimates of SMB and modelled SMB are reduced by 24-47%.
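The statistically optimal weighting discussed above is, in essence, generalized least squares with the full error covariance propagated into the normal equations. A minimal sketch; the design matrix, "mascon" amplitudes and covariance are hypothetical toy values, not GRACE quantities:

```python
import numpy as np

def weighted_least_squares(A, d, C):
    """Generalized least squares: weight the data with the inverse of
    their full error covariance C. Returns estimates and their formal
    covariance (A^T C^-1 A)^-1."""
    Ci = np.linalg.inv(C)
    N = A.T @ Ci @ A                       # normal matrix
    x = np.linalg.solve(N, A.T @ Ci @ d)
    return x, np.linalg.inv(N)

# Toy problem: recover two mascon amplitudes from noiseless synthetic data
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0], [2.0, 1.0]])
x_true = np.array([3.0, -1.5])
C = np.diag([1.0, 0.5, 2.0, 1.0])          # hypothetical error covariance
x_hat, cov = weighted_least_squares(A, A @ x_true, C)
```

Replacing C with an identity matrix reproduces ordinary (unweighted) least squares, which is the suboptimal weighting the paper argues against.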
Michailidis, George
2014-01-01
Reconstructing transcriptional regulatory networks is an important task in functional genomics. Data obtained from experiments that perturb genes by knockouts or RNA interference contain useful information for addressing this reconstruction problem. However, such data can be limited in size and/or are expensive to acquire. On the other hand, observational data of the organism in steady state (e.g., wild-type) are more readily available, but their informational content is inadequate for the task at hand. We develop a computational approach to appropriately utilize both data sources for estimating a regulatory network. The proposed approach is based on a three-step algorithm to estimate the underlying directed but cyclic network, that uses as input both perturbation screens and steady state gene expression data. In the first step, the algorithm determines causal orderings of the genes that are consistent with the perturbation data, by combining an exhaustive search method with a fast heuristic that in turn couples a Monte Carlo technique with a fast search algorithm. In the second step, for each obtained causal ordering, a regulatory network is estimated using a penalized likelihood based method, while in the third step a consensus network is constructed from the highest scored ones. Extensive computational experiments show that the algorithm performs well in reconstructing the underlying network and clearly outperforms competing approaches that rely only on a single data source. Further, it is established that the algorithm produces a consistent estimate of the regulatory network. PMID:24586224
NASA Astrophysics Data System (ADS)
Choi, W.; McKay, M.; Weber, R.; Goldstein, A. H.; Baker, B. M.; Faloona, I. C.
2009-12-01
The atmospheric boundary layer (ABL) height (z_i) is an extremely important parameter for interpreting field observations of reactive trace gases and understanding air quality at the local or regional scale. Despite its importance, z_i is often crudely estimated in atmospheric chemistry and air pollution studies due to limited resources and the difficulty of measuring it. In this study, z_i over complex terrain (a coniferous forest in the California Sierra Nevada) is estimated from the power spectra and the integral length scale of horizontal winds obtained from a three-axis sonic anemometer during BEARPEX (Biosphere Effects on Aerosol and Photochemistry Experiment) 2007 and 2009. The estimated z_i shows very good agreement with observations obtained from tethered balloon soundings (2007) and radiosonde measurements (2009) under unstable conditions (z/L < 0). The behavior of z_i under stable conditions (z/L > 0), including the evolution and breakdown of the nocturnal boundary layer over the forest, is also presented. Finally, significant directional wind shear was consistently observed during 2009, with winds backing from the prevailing surface west-southwesterlies (anabatic cross-valley circulation) to consistent southerlies just above the ABL. We show that this is the result of a thermal wind driven by the potential temperature gradient aligned upslope. The resultant wind flow pattern can modify the conventional model of transport along the Sacramento urban plume and has implications for the flushing characteristics of California's Central Valley basin.
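The integral-length-scale ingredient of the estimate above can be sketched as follows: integrate the velocity autocorrelation up to its first zero crossing to get an integral time scale, then convert to a length scale with the mean wind (Taylor's hypothesis). The autocorrelation here is a synthetic exponential, and the mapping from length scale to z_i used in the study is not reproduced:

```python
import numpy as np

def integral_length_scale(autocorr, dt, mean_wind):
    """Integral time scale: integrate the autocorrelation function up to
    its first zero crossing (or the whole record if it never crosses
    zero), then scale by the mean wind to obtain a length scale."""
    zero = np.argmax(autocorr <= 0.0)
    end = zero if zero > 0 else len(autocorr)
    t_int = autocorr[:end].sum() * dt
    return t_int * mean_wind

# Synthetic exponential autocorrelation with a 10 s integral time scale
dt, T = 0.01, 10.0
tau = np.arange(0.0, 200.0, dt)
rho = np.exp(-tau / T)
L = integral_length_scale(rho, dt, mean_wind=5.0)   # about 50 m
```

For the exponential case the integral recovers the decay time T, so the length scale is simply U·T.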
Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy
2007-02-01
Cronbach's α is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's α coefficient provides a lower-bound estimate of the true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's α for a scale with dichotomous items can be improved by using the upper bound of coefficient φ. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's α via this method. The simulation analysis showed that Cronbach's α from the upper-bound φ may be appropriate for estimating the true reliability when standardized Cronbach's α is problematic.
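The baseline the article improves on, the standard raw-score Cronbach's α, can be sketched as follows; the response matrix is toy dichotomous data, and the upper-bound-φ correction itself is not reproduced here:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a persons-by-items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical dichotomous responses (5 persons, 3 items)
scores = [[1, 1, 1], [1, 1, 0], [0, 1, 1], [0, 0, 1], [0, 0, 0]]
alpha = cronbach_alpha(scores)    # low alpha for these weakly related items
```

For these toy data the item variances are each 0.3 and the total-score variance is 1.3, giving α = 1.5·(1 - 0.9/1.3) ≈ 0.46, a concrete instance of the lower-bound behavior the article discusses.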
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.
2004-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
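The search-and-composite step described above can be sketched as likelihood-weighted averaging over a database of simulated profiles. The channel count, brightness temperatures, rain rates, and the Gaussian error model below are toy values, not the algorithm's actual database or error covariance:

```python
import numpy as np

def bayesian_composite(tb_obs, tb_db, rain_db, sigma=1.0):
    """Bayesian retrieval sketch: weight each database profile by the
    Gaussian likelihood of its simulated brightness temperatures given
    the observed ones, then composite (weighted-average) the rain rates."""
    d2 = ((tb_db - tb_obs) ** 2).sum(axis=1) / sigma ** 2
    w = np.exp(-0.5 * (d2 - d2.min()))    # shift exponent for stability
    return (w * rain_db).sum() / w.sum()

# Toy database: simulated brightness-temperature pairs (K) and rain rates
tb_db = np.array([[210.0, 250.0], [230.0, 260.0], [260.0, 275.0]])
rain_db = np.array([12.0, 5.0, 0.5])       # mm/h, hypothetical
rate = bayesian_composite(np.array([231.0, 259.0]), tb_db, rain_db)
```

An observation radiatively close to one database entry is dominated by that entry's weight, so the composite lands near its rain rate.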
ESR dosimetry for atomic bomb survivors and radiologic technologists
NASA Astrophysics Data System (ADS)
Tatsumi-Miyajima, Junko
1987-06-01
An individual absorbed dose for atomic bomb (A-bomb) survivors and radiologic technologists has been estimated using a new personal dosimetry method based on electron spin resonance (ESR) spectroscopy of the CO3^3- radicals produced in their teeth by radiation. Measurements were carried out to study the characteristics of the dosimetry; the ESR signals of the CO3^3- radicals were stable and increased linearly with the radiation dose. In the evaluation of the absorbed dose, the ESR signals were treated as a function of photon energy. The absorbed doses in ten cases of A-bomb victims and eight cases of radiologic technologists were determined. For A-bomb survivors, the absorbed doses estimated using ESR dosimetry were consistent with those obtained from calculations of the A-bomb tissue dose in air, and also with those obtained from chromosome measurements. For radiologic technologists, the absorbed doses estimated using ESR dosimetry agreed with those calculated from information on occupational history and conditions. The advantage of this method is that the absorbed dose can be estimated directly from the ESR signals of the teeth of exposed persons. Therefore, ESR dosimetry is useful for estimating accidental exposures and long-term cumulative doses.
Using the GOCE star trackers for validating the calibration of its accelerometers
NASA Astrophysics Data System (ADS)
Visser, P. N. A. M.
2017-12-01
A method for validating the calibration parameters of the six accelerometers on board the Gravity field and steady-state Ocean Circulation Explorer (GOCE) from star tracker observations that was originally tested by an end-to-end simulation, has been updated and applied to real data from GOCE. It is shown that the method provides estimates of scale factors for all three axes of the six GOCE accelerometers that are consistent at a level significantly better than 0.01 compared to the a priori calibrated value of 1. In addition, relative accelerometer biases and drift terms were estimated consistent with values obtained by precise orbit determination, where the first GOCE accelerometer served as reference. The calibration results clearly reveal the different behavior of the sensitive and less-sensitive accelerometer axes.
The Kormendy relation of galaxies in the Frontier Fields clusters: Abell S1063 and MACS J1149.5+2223
NASA Astrophysics Data System (ADS)
Tortorelli, Luca; Mercurio, Amata; Paolillo, Maurizio; Rosati, Piero; Gargiulo, Adriana; Gobat, Raphael; Balestra, Italo; Caminha, G. B.; Annunziatella, Marianna; Grillo, Claudio; Lombardi, Marco; Nonino, Mario; Rettura, Alessandro; Sartoris, Barbara; Strazzullo, Veronica
2018-06-01
We analyse the Kormendy relations (KRs) of the two Frontier Fields clusters, Abell S1063, at z = 0.348, and MACS J1149.5+2223, at z = 0.542, exploiting very deep Hubble Space Telescope photometry and Very Large Telescope (VLT)/Multi Unit Spectroscopic Explorer (MUSE) integral field spectroscopy. With this novel data set, we are able to investigate how the KR parameters depend on the cluster galaxy sample selection and how this affects studies of galaxy evolution based on the KR. We define and compare four different galaxy samples according to (a) Sérsic indices: early-type (`ETG'), (b) visual inspection: `ellipticals', (c) colours: `red', (d) spectral properties: `passive'. The classification is performed for a complete sample of galaxies with m_F814W ≤ 22.5 AB mag (M* ≳ 10^10.0 M⊙). To derive robust galaxy structural parameters, we use two methods: (1) an iterative estimate of structural parameters using images of increasing size, in order to deal with closely separated galaxies and (2) different background estimations, to deal with the intracluster light contamination. The comparison between the KRs obtained from the different samples suggests that the sample selection could affect the estimate of the best-fitting KR parameters. The KR built with ETGs is fully consistent with the one obtained for ellipticals and passive. On the other hand, the KR slope built on the red sample is only marginally consistent with those obtained with the other samples. We also release the photometric catalogue with structural parameters for the galaxies included in the present analysis.
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
Sampling procedures for inventory of commercial volume tree species in Amazon Forest.
Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R
2017-01-01
The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories; therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. The present study aims to evaluate conventional sampling procedures and to introduce adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that density, spatial distribution and zero-plots affect the consistency of the estimators, and that adaptive cluster sampling allows more accurate volumetric estimation. We use data from a census carried out in the Jamari National Forest, Brazil, where trees with diameters equal to or greater than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling and adaptive cluster sampling, and the accuracy of the volumetric estimation and the presence of zero-plots were evaluated. The sampling procedures were affected by the low density of trees and the large number of zero-plots, whereas the adaptive clusters allowed the sampling effort to be concentrated in plots with trees, thus yielding more representative samples for estimating the commercial volume.
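A minimal sketch of the modified Hansen-Hurwitz estimator often used with adaptive cluster sampling, in which each initially sampled plot contributes the mean volume of its network (connected cluster of non-empty plots). The population grid is a toy example; whether the authors used this exact estimator is not stated in the abstract:

```python
import numpy as np

def network_labels(grid):
    """Label connected components (4-neighbour) of non-zero cells."""
    labels = -np.ones(grid.shape, dtype=int)
    nxt = 0
    for idx in np.argwhere(grid > 0):
        if labels[tuple(idx)] >= 0:
            continue
        stack, labels[tuple(idx)] = [tuple(idx)], nxt
        while stack:
            r, c = stack.pop()
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]
                        and grid[rr, cc] > 0 and labels[rr, cc] < 0):
                    labels[rr, cc] = nxt
                    stack.append((rr, cc))
        nxt += 1
    return labels

def hh_weights(grid):
    """Per-plot values of the modified Hansen-Hurwitz estimator: each plot
    contributes the mean volume of the network it belongs to (its own
    value, i.e. zero, if it holds no trees)."""
    labels = network_labels(grid)
    w = grid.astype(float)
    for k in range(labels.max() + 1):
        w[labels == k] = grid[labels == k].mean()
    return w

# Clustered toy population of commercial volumes (m^3 per plot)
pop = np.array([[0, 0, 5, 7],
                [0, 0, 6, 0],
                [3, 0, 0, 0],
                [4, 2, 0, 0]])
w = hh_weights(pop)
```

Averaging w over every possible initial plot reproduces the true population mean exactly, which is what makes the estimator design-unbiased even for clustered species.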
Risk estimation using probability machines.
Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D
2014-03-01
Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", will share properties from the statistical machine that it is derived from.
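The counterfactual effect-size idea can be illustrated with any consistent nonparametric probability estimator. The sketch below substitutes a simple k-nearest-neighbour probability machine for the paper's random forest, on data simulated from a known logistic model; all constants and the helper names are invented. The odds ratio for the binary predictor is recovered by toggling it while holding the other predictor fixed:

```python
import math
import random

random.seed(1)

# Simulate from a known logistic model: P(Y=1|x) = sigmoid(b0 + b1*x1 + b2*x2)
b0, b1, b2 = -0.5, 1.2, 0.8

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

X, y = [], []
for _ in range(4000):
    x1 = random.gauss(0.0, 1.0)                  # continuous predictor
    x2 = 1.0 if random.random() < 0.5 else 0.0   # binary predictor
    p = sigmoid(b0 + b1 * x1 + b2 * x2)
    X.append((x1, x2))
    y.append(1 if random.random() < p else 0)

def knn_prob(x, k=150):
    """Nonparametric conditional probability estimate P(Y=1|x) via k-NN,
    standing in for a random forest probability machine."""
    idx = sorted(range(len(X)),
                 key=lambda i: (X[i][0] - x[0]) ** 2 + (X[i][1] - x[1]) ** 2)
    return sum(y[i] for i in idx[:k]) / k

# Counterfactual effect size: toggle x2 at a reference point with x1 fixed.
p0 = knn_prob((0.0, 0.0))
p1 = knn_prob((0.0, 1.0))
odds_ratio = (p1 / (1.0 - p1)) / (p0 / (1.0 - p0))
print(round(odds_ratio, 2))  # should land near exp(b2) ~ 2.23
```

Because the data-generating model really is logistic here, the machine's counterfactual odds ratio approximates the logistic coefficient's odds ratio, which is the recovery property the abstract describes.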
NASA Astrophysics Data System (ADS)
Li, Lu; Narayanan, Ramakrishnan; Miller, Steve; Shen, Feimo; Barqawi, Al B.; Crawford, E. David; Suri, Jasjit S.
2008-02-01
Real-time knowledge of the capsule volume of an organ provides a valuable clinical tool for 3D biopsy applications. It is challenging to estimate this capsule volume in real-time due to the presence of speckles, shadow artifacts, partial volume effect and patient motion during image scans, which are all inherent in medical ultrasound imaging. The volumetric ultrasound prostate images are sliced in a rotational manner every three degrees. The automated segmentation method employs a shape model, which is obtained from training data, to delineate the middle slices of volumetric prostate images. Then a "DDC" algorithm is applied to the rest of the images using the initial contour thus obtained. The volume of the prostate is estimated from the segmentation results. Our database consists of 36 prostate volumes acquired on a Philips ultrasound machine with a side-fire transrectal ultrasound (TRUS) probe. We compare our automated method with the semi-automated approach. The mean volumes using the semi-automated and fully automated techniques were 35.16 cc and 34.86 cc, with errors of 7.3% and 7.6%, respectively, relative to the volume obtained from the human-estimated (ideal) boundary. The overall system, which was developed using Microsoft Visual C++, is real-time and accurate.
An eye model for uncalibrated eye gaze estimation under variable head pose
NASA Astrophysics Data System (ADS)
Hnatow, Justin; Savakis, Andreas
2007-04-01
Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
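The gaze-angle calculation from such a spherical eye model can be sketched geometrically: the midpupil's offset from the eye-sphere centre, divided by the anthropometric eye radius, subtends the gaze angle. This is a guess at the simplified geometry, not the paper's exact formulation, and all coordinates and the radius below are made up:

```python
import math

def gaze_angle_deg(corner_left, corner_right, midpupil, eye_radius):
    """Horizontal gaze angle (degrees) from a simplified spherical eye model.

    The eye-sphere centre is approximated by the midpoint of the two eye
    corners; the pupil's horizontal offset from that centre, on a sphere of
    the given radius, gives the gaze angle via asin.
    """
    center_x = (corner_left[0] + corner_right[0]) / 2.0
    offset = midpupil[0] - center_x              # horizontal displacement (px)
    ratio = max(-1.0, min(1.0, offset / eye_radius))
    return math.degrees(math.asin(ratio))

# Eye corners 30 px apart; eye radius guessed from anthropometric scaling.
angle = gaze_angle_deg((0, 0), (30, 0), (19.5, 0), 13.0)
print(angle)  # about 20 degrees for this made-up geometry
```

No per-user calibration appears anywhere: the only inputs are detected feature positions plus a population-level radius, which is the point of the model.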
Perturbation analysis of queueing systems with a time-varying arrival rate
NASA Technical Reports Server (NTRS)
Cassandras, Christos G.; Pan, Jie
1991-01-01
The authors consider an M/G/1 queueing system with a time-varying arrival rate. The objective is to obtain infinitesimal perturbation analysis (IPA) gradient estimates for various performance measures of interest with respect to certain system parameters. In particular, the authors consider the mean system time over n arrivals and an arrival rate alternating between two values. By choosing a convenient sample path representation of this system, they derive an unbiased IPA gradient estimator which, however, is not consistent, and they investigate the nature of this problem.
ERIC Educational Resources Information Center
Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack
2014-01-01
The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…
Code of Federal Regulations, 2011 CFR
2011-01-01
... expenditures shall be based on consistent data for each State to which an allocation is made. (i) Media... estimated audience. This allocation of expenditures, shall be made using industry market data. If industry market data is not available, the committee shall obtain market data from the media carrier transmitting...
76 FR 77217 - Dominion Transmission, Inc.; Notice of Application
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-12
.... The total project cost is estimated to be $16,759,375.00. A more detailed description of the project... Project which consists of 3.56 miles of new 24-inch-diameter pipeline (TL-610) as well as various... this project. First, any person wishing to obtain legal status by becoming a party to the proceedings...
Estimating the value of life and injury for pedestrians using a stated preference framework.
Niroomand, Naghmeh; Jenkins, Glenn P
2017-09-01
The incidence of pedestrian death over the period 2010 to 2014 per 1,000,000 in North Cyprus is about 2.5 times that of the EU, with 10.5 times more pedestrian road injuries than deaths. With the prospect of North Cyprus entering the EU, many investments need to be undertaken to improve road safety in order to reach EU benchmarks. We conducted a stated choice experiment to identify the preferences and tradeoffs of pedestrians in North Cyprus for improved walking times, pedestrian costs, and safety. The choice of route was examined using mixed logit models to obtain the marginal utilities associated with each attribute of the routes that consumers chose. These were used to estimate the individuals' willingness to pay (WTP) to save walking time and to avoid pedestrian fatalities and injuries. We then used the results to obtain community-wide estimates of the value of a statistical life (VSL) saved, the value of an injury (VI) prevented, and the value per hour of walking time saved. The estimate of the VSL was €699,434 and the estimate of VI was €20,077. These values are consistent, after adjusting for differences in incomes, with the median results of similar studies done for EU countries. The estimated value of time to pedestrians is €7.20 per person hour. The ratio of deaths to injuries is much higher for pedestrians than for road accidents, and this is completely consistent with the higher estimated WTP to avoid a pedestrian accident than to avoid a car accident. The value of time of €7.20 is quite high relative to the wages earned. Findings provide a set of information on the value of risk reduction (VRR) for fatalities and injuries and the value of pedestrian time that is critical for conducting ex ante appraisals of investments to improve pedestrian safety. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
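The WTP quantities follow directly from ratios of mixed-logit marginal utilities (each attribute's utility coefficient divided by the cost coefficient). The coefficients below are invented, chosen only so the ratios reproduce the abstract's headline numbers, and the risk attribute's scaling is assumed:

```python
# Illustrative (made-up) mixed-logit marginal utilities for route attributes.
mu_cost = -0.5        # utility per euro of route cost
mu_time = -0.06       # utility per minute of walking time
mu_risk = -3.49717    # utility per (fatality per 100,000 route users)

# WTP ratios: value of an attribute = (dU/d attribute) / (dU/d cost).
value_per_hour = 60.0 * mu_time / mu_cost       # euros per hour walked
vsl = 100_000 * mu_risk / mu_cost               # value of a statistical life

print(round(value_per_hour, 2), round(vsl))     # 7.2 699434
```

The 100,000 factor converts the per-unit-risk utility into a whole statistical life; any consistent rescaling of the risk attribute cancels out of the ratio.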
Estimation of kinetic parameters from list-mode data using an indirect approach
NASA Astrophysics Data System (ADS)
Ortiz, Joseph Christian
This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful in expediting the drug development process, as well as in providing a dosage regimen for patients. Traditionally, the uptake rate of a drug in a system is obtained by sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system is exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect, two-part approach was used: first the compartmental activity was obtained from the data, and then the kinetic parameters were estimated. The novel aspect of the research is using list-mode data to obtain the activity curves from a system, as opposed to a traditional binned approach. Using techniques from information-theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs on each photomultiplier tube, for each event, was generated on the fly and used in a least-squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels and time sample densities was explored. Once an estimate of the activity was obtained, the kinetic parameters were estimated using multiple cost functions and then compared to each other using the mean squared error as the figure of merit.
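The kernel density estimation step mentioned above has a compact textbook form. The sketch below is a generic 1-D Gaussian KDE, not the dissertation's implementation, and the sample values are invented:

```python
import math

def gaussian_kde(samples, h):
    """1-D Gaussian kernel density estimator with bandwidth h.

    Returns a function pdf(x) = (1/(n*h)) * sum_i phi((x - s_i)/h),
    the kind of nonparametric density the abstract builds on the fly
    for the per-event PMT voltage outputs.
    """
    n = len(samples)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)
    return pdf

pdf = gaussian_kde([-1.0, 1.0], h=0.5)

# Coarse Riemann check that the density integrates to ~1 over a wide window.
total = sum(pdf(-5.0 + 0.01 * i) * 0.01 for i in range(1001))
print(round(total, 3))
```

The bandwidth h trades bias against variance; in the list-mode setting it would be tuned to the PMT noise scale rather than chosen by a fixed rule.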
Modeling trends from North American Breeding Bird Survey data: a spatially explicit approach
Bled, Florent; Sauer, John R.; Pardieck, Keith L.; Doherty, Paul; Royle, J. Andy
2013-01-01
Population trends, defined as interval-specific proportional changes in population size, are often used to help identify species of conservation interest. Efficient modeling of such trends depends on the consideration of the correlation of population changes with key spatial and environmental covariates. This can provide insights into causal mechanisms and allow spatially explicit summaries at scales that are of interest to management agencies. We expand the hierarchical modeling framework used in the North American Breeding Bird Survey (BBS) by developing a spatially explicit model of temporal trend using a conditional autoregressive (CAR) model. By adopting a formal spatial model for abundance, we produce spatially explicit abundance and trend estimates. Analyses based on large-scale geographic strata such as Bird Conservation Regions (BCR) can suffer from basic imbalances in spatial sampling. Our approach addresses this issue by providing an explicit weighting based on the fundamental sample allocation unit of the BBS. We applied the spatial model to three species from the BBS. Species have been chosen based upon their well-known population change patterns, which allows us to evaluate the quality of our model and the biological meaning of our estimates. We also compare our results with the ones obtained for BCRs using a nonspatial hierarchical model (Sauer and Link 2011). Globally, estimates for mean trends are consistent between the two approaches but spatial estimates provide much more precise trend estimates in regions on the edges of species ranges that were poorly estimated in non-spatial analyses. Incorporating a spatial component in the analysis not only allows us to obtain relevant and biologically meaningful estimates for population trends, but also enables us to provide a flexible framework in order to obtain trend estimates for any area.
An Indirect Adaptive Control Scheme in the Presence of Actuator and Sensor Failures
NASA Technical Reports Server (NTRS)
Sun, Joy Z.; Joshi, Suresh M.
2009-01-01
The problem of controlling a system in the presence of unknown actuator and sensor faults is addressed. The system is assumed to have groups of actuators, and groups of sensors, with each group consisting of multiple redundant similar actuators or sensors. The types of actuator faults considered consist of unknown actuators stuck in unknown positions, as well as reduced actuator effectiveness. The sensor faults considered include unknown biases and outages. The approach employed for fault detection and estimation consists of a bank of Kalman filters based on multiple models, and subsequent control reconfiguration to mitigate the effect of biases caused by failed components as well as to obtain stability and satisfactory performance using the remaining actuators and sensors. Conditions for fault identifiability are presented, and the adaptive scheme is applied to an aircraft flight control example in the presence of actuator failures. Simulation results demonstrate that the method can rapidly and accurately detect faults and estimate the fault values, thus enabling safe operation and acceptable performance in spite of failures.
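The multiple-model fault detection idea can be sketched with a bank of scalar Kalman filters, one per hypothesized sensor bias: each filter scores the likelihood of its innovation sequence, and the highest-scoring hypothesis flags the fault. The dynamics, noise levels and bias values below are all invented, and this is a bare sketch of the detection step only, not the paper's reconfiguration scheme:

```python
import math
import random

random.seed(0)

def run_bank(measurements, biases, q=1e-5, r=0.04, p0=1e-4):
    """Score each hypothesized sensor bias with its own scalar Kalman filter
    (static state, known initial condition) and return the most likely bias."""
    scores = {}
    for b in biases:
        x, p, ll = 0.0, p0, 0.0
        for z in measurements:
            p = p + q                         # predict (state transition = 1)
            s = p + r                         # innovation variance
            nu = (z - b) - x                  # innovation under hypothesis b
            ll += -0.5 * (math.log(2.0 * math.pi * s) + nu * nu / s)
            k = p / s                         # Kalman gain
            x, p = x + k * nu, (1.0 - k) * p  # update
        scores[b] = ll
    return max(scores, key=scores.get)

# Sensor stuck with a 0.5 bias on a zero-valued state, noise std 0.2.
true_bias = 0.5
zs = [true_bias + random.gauss(0.0, 0.2) for _ in range(200)]
detected = run_bank(zs, biases=[0.0, 0.5, 1.0])
print(detected)
```

A filter with the wrong bias hypothesis accumulates persistently offset innovations, so its log-likelihood falls far behind the correct hypothesis within a few dozen samples.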
Zee, Jarcy; Xie, Sharon X.
2015-01-01
When a true survival endpoint cannot be assessed for some subjects, an alternative endpoint that measures the true endpoint with error may be collected, which often occurs when obtaining the true endpoint is too invasive or costly. We develop an estimated likelihood function for the situation where we have both uncertain endpoints for all participants and true endpoints for only a subset of participants. We propose a nonparametric maximum estimated likelihood estimator of the discrete survival function of time to the true endpoint. We show that the proposed estimator is consistent and asymptotically normal. We demonstrate through extensive simulations that the proposed estimator has little bias compared to the naïve Kaplan-Meier survival function estimator, which uses only uncertain endpoints, and is more efficient under moderate missingness than the complete-case Kaplan-Meier survival function estimator, which uses only available true endpoints. Finally, we apply the proposed method to a dataset from the Alzheimer's Disease Neuroimaging Initiative to estimate the risk of developing Alzheimer's disease. PMID:25916510
Rummer, Jodie L.; Binning, Sandra A.; Roche, Dominique G.; Johansen, Jacob L.
2016-01-01
Respirometry is frequently used to estimate metabolic rates and examine organismal responses to environmental change. Although a range of methodologies exists, it remains unclear whether differences in chamber design and exercise (type and duration) produce comparable results within individuals and whether the most appropriate method differs across taxa. We used a repeated-measures design to compare estimates of maximal and standard metabolic rates (MMR and SMR) in four coral reef fish species using the following three methods: (i) prolonged swimming in a traditional swimming respirometer; (ii) short-duration exhaustive chase with air exposure followed by resting respirometry; and (iii) short-duration exhaustive swimming in a circular chamber. We chose species that are steady/prolonged swimmers, using either a body–caudal fin or a median–paired fin swimming mode during routine swimming. Individual MMR estimates differed significantly depending on the method used. Swimming respirometry consistently provided the best (i.e. highest) estimate of MMR in all four species irrespective of swimming mode. Both short-duration protocols (exhaustive chase and swimming in a circular chamber) produced similar MMR estimates, which were up to 38% lower than those obtained during prolonged swimming. Furthermore, underestimates were not consistent across swimming modes or species, indicating that a general correction factor cannot be used. However, SMR estimates (upon recovery from both of the exhausting swimming methods) were consistent across both short-duration methods. Given the increasing use of metabolic data to assess organismal responses to environmental stressors, we recommend carefully considering respirometry protocols before experimentation. Specifically, results should not readily be compared across methods; discrepancies could result in misinterpretation of MMR and aerobic scope. PMID:27382471
NASA Astrophysics Data System (ADS)
Choi, W.; Faloona, I. C.; McKay, M.; Goldstein, A. H.; Baker, B.
2010-11-01
In this study the atmospheric boundary layer (ABL) height (zi) over complex, forested terrain is estimated based on the power spectra and the integral length scale of horizontal winds obtained from a three-axis sonic anemometer during the BEARPEX (Biosphere Effects on Aerosol and Photochemistry) Experiment. The zi values estimated with this technique showed very good agreement with observations obtained from balloon tether sonde (2007) and rawinsonde (2009) measurements under unstable conditions (z/L < 0) at the coniferous forest in the California Sierra Nevada. The behavior of the nocturnal boundary layer height (h) and power spectra of lateral winds and temperature under stable conditions (z/L > 0) is also presented. The nocturnal boundary layer height is found to be fairly well predicted by a recent interpolation formula proposed by Zilitinkevich et al. (2007), although it was observed to only vary from 60-80 m during the experiment. Finally, significant directional wind shear was observed during both day and night with winds backing from the prevailing west-southwesterlies in the ABL (anabatic cross-valley circulation) to consistent southerlies in a layer ~1 km thick just above the ABL before veering to the prevailing westerlies further aloft. We show that this is consistent with the forcing of a thermal wind driven by the regional temperature gradient directed due east in the lower troposphere.
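A core ingredient of the spectral approach above is the integral length scale of the horizontal wind, obtained by multiplying the mean wind speed by the integral time scale of the velocity autocorrelation. The sketch below computes that time scale from a synthetic AR(1) wind proxy; the integration-to-first-zero-crossing convention and all constants are assumptions, not details from the paper:

```python
import random

def integral_time_scale(u, dt):
    """Integral time scale: the sample autocorrelation integrated up to its
    first zero crossing. Multiplied by the mean wind speed, this gives the
    integral length scale used to diagnose the boundary-layer depth."""
    n = len(u)
    mean = sum(u) / n
    a = [x - mean for x in u]
    var = sum(x * x for x in a) / n
    t, lag = 0.0, 0
    while lag < n - 1:
        r = sum(a[i] * a[i + lag] for i in range(n - lag)) / ((n - lag) * var)
        if r <= 0.0:
            break                     # stop at the first zero crossing
        t += r * dt
        lag += 1
    return t

random.seed(2)
u, prev = [], 0.0
for _ in range(5000):                 # AR(1) wind proxy, lag-1 correlation 0.8
    prev = 0.8 * prev + random.gauss(0.0, 1.0)
    u.append(prev)

T = integral_time_scale(u, dt=0.1)    # theory suggests roughly 0.1/(1-0.8)
print(T)
```

For real sonic-anemometer data the same quantity would be computed per averaging block, then related to zi through an assumed similarity scaling.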
Martial arts striking hand peak acceleration, accuracy and consistency.
Neto, Osmar Pinto; Marzullo, Ana Carolina De Miranda; Bolander, Richard P; Bir, Cynthia A
2013-01-01
The goal of this paper was to investigate the possible trade-off between peak hand acceleration and accuracy and consistency of hand strikes performed by martial artists of different training experiences. Ten male martial artists with training experience ranging from one to nine years volunteered to participate in the experiment. Each participant performed 12 maximum effort goal-directed strikes. Hand acceleration during the strikes was obtained using a tri-axial accelerometer block. A pressure sensor matrix was used to determine the accuracy and consistency of the strikes. Accuracy was estimated by the radial distance between the centroid of each subject's 12 strikes and the target, whereas consistency was estimated by the square root of the 12 strikes' mean squared distance from their centroid. We found that training experience was significantly correlated to hand peak acceleration prior to impact (r(2)=0.456, p=0.032) and accuracy (r(2)=0.621, p=0.012). These correlations suggest that more experienced participants exhibited higher hand peak accelerations and at the same time were more accurate. Training experience, however, was not correlated to consistency (r(2)=0.085, p=0.413). Overall, our results suggest that martial arts training may lead practitioners to achieve higher striking hand accelerations with better accuracy and no change in striking consistency.
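The accuracy and consistency definitions in the abstract translate directly into code: accuracy is the centroid-to-target distance, consistency the RMS spread of strikes about their own centroid. The strike coordinates below are invented:

```python
import math

def accuracy_and_consistency(strikes, target):
    """Accuracy: radial distance from the strikes' centroid to the target.
    Consistency: square root of the mean squared distance of the strikes
    from their centroid (both as defined in the abstract)."""
    n = len(strikes)
    cx = sum(x for x, _ in strikes) / n
    cy = sum(y for _, y in strikes) / n
    accuracy = math.hypot(cx - target[0], cy - target[1])
    consistency = math.sqrt(
        sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in strikes) / n)
    return accuracy, consistency

# Four symmetric strikes centred exactly on the target:
acc, con = accuracy_and_consistency(
    [(1, 0), (-1, 0), (0, 1), (0, -1)], target=(0, 0))
print(acc, con)  # 0.0 1.0
```

Note the two measures are independent: a tight cluster far from the target is consistent but inaccurate, and vice versa, which is exactly the trade-off the study probes.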
Structured pedigree information for distributed fusion systems
NASA Astrophysics Data System (ADS)
Arambel, Pablo O.
2008-04-01
One of the most critical challenges in distributed data fusion is the avoidance of information double counting (also called "data incest" or "rumor propagation"). This occurs when a node in a network incorporates information into an estimate (e.g. the position of an object) and the estimate is injected into the network. Other nodes fuse this estimate with their own estimates and continue to propagate estimates through the network. When the first node receives a fused estimate from the network, it does not know whether it already contains its own contributions. Since the correlation between its own estimate and the estimate received from the network is not known, the node cannot fuse the estimates in an optimal way. If it assumes that both estimates are independent of each other, it unknowingly double counts the information that has already been used to obtain the two estimates. This leads to overoptimistic error covariance matrices. If the double counting is not kept under control, it may lead to serious performance degradation. Double counting can be avoided by propagating uniquely tagged raw measurements; however, that forces each node to process all the measurements and precludes the propagation of derived information. Another approach is to fuse the information using the Covariance Intersection (CI) equations, which maintain consistent estimates irrespective of the cross-correlation among estimates. However, CI does not exploit pedigree information of any kind. In this paper we present an approach that propagates multiple covariance matrices, one for each uncorrelated source in the network. This compresses the pedigree information and avoids the need to propagate raw measurements. The approach uses a generalized version of the Split CI to fuse different estimates with appropriate weights to guarantee the consistency of the estimates.
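The CI equations referenced above have a simple scalar form: a convex combination in information (inverse-covariance) space, which stays consistent no matter what the unknown cross-correlation is. The sketch below shows the standard scalar CI fusion; the numeric example values are invented:

```python
def ci_fuse(xa, pa, xb, pb, w):
    """Covariance Intersection, scalar form:
    1/P = w/Pa + (1-w)/Pb,  x = P * (w*xa/Pa + (1-w)*xb/Pb),
    for any weight w in [0, 1]."""
    p = 1.0 / (w / pa + (1.0 - w) / pb)
    x = p * (w * xa / pa + (1.0 - w) * xb / pb)
    return x, p

# Two estimates of the same quantity with unknown cross-correlation:
x, p = ci_fuse(xa=1.0, pa=2.0, xb=1.4, pb=1.0, w=0.5)
print(x, p)  # 1.2666..., 1.3333...
```

Because the fused covariance is never smaller than the convex combination allows, CI never claims more confidence than either input justifies, at the cost of being conservative, which is exactly the trade-off the paper's Split CI pedigree scheme is designed to soften.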
Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando
2015-07-21
We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making it possible to compute them using truly small bilayers involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force field are consistent with previous studies of these bilayers.
NASA Astrophysics Data System (ADS)
Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando
2015-07-01
We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making it possible to compute them using truly small bilayers involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force field are consistent with previous studies of these bilayers.
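The fluctuation route to the area compressibility modulus is the standard relation K_A = kB·T·⟨A⟩/var(A); the paper's contribution is feeding the CU area, rather than the projected area, into this formula. The sketch below applies the relation to invented area samples purely to show the mechanics:

```python
def area_compressibility(areas_m2, T=300.0):
    """K_A = kB * T * <A> / var(A) from equilibrium area fluctuations.

    areas_m2: sampled membrane areas in m^2 (here these would be CU areas);
    returns K_A in N/m (1 N/m = 1000 mN/m).
    """
    kB = 1.380649e-23                     # Boltzmann constant, J/K
    n = len(areas_m2)
    mean = sum(areas_m2) / n
    var = sum((a - mean) ** 2 for a in areas_m2) / n
    return kB * T * mean / var

# Three invented area samples for a small patch (~hundreds of lipids):
K = area_compressibility([4.00e-17, 4.02e-17, 3.98e-17])
print(K)
```

In practice one would use thousands of decorrelated frames; the point of the CU definition is that the variance entering this formula is free of protrusion noise, so small patches already give converged K_A.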
Precise regional baseline estimation using a priori orbital information
NASA Technical Reports Server (NTRS)
Lindqwister, Ulf J.; Lichten, Stephen M.; Blewitt, Geoffrey
1990-01-01
A solution using GPS measurements acquired during the CASA Uno campaign has resulted in 3-4 mm horizontal daily baseline repeatability and 13 mm vertical repeatability for a 729 km baseline located in North America. The agreement with VLBI is at the level of 10-20 mm for all components. The results were obtained with the GIPSY orbit determination and baseline estimation software and are based on five single-day data arcs spanning January 20, 21, 25, 26, and 27, 1988. The estimation strategy included resolving the carrier phase integer ambiguities, utilizing an optimal set of fixed reference stations, and constraining GPS orbit parameters by applying a priori information. A multiday GPS orbit and baseline solution has yielded similar 2-4 mm horizontal daily repeatabilities for the same baseline, consistent with the constrained single-day arc solutions. The application of weak constraints to the orbital state for single-day data arcs produces solutions which approach the precise orbits obtained with unconstrained multiday arc solutions.
Abdala, Carolina; Guérit, François; Luo, Ping; Shera, Christopher A
2014-04-01
A consistent relationship between reflection-emission delay and cochlear tuning has been demonstrated in a variety of mammalian species, as predicted by filter theory and models of otoacoustic emission (OAE) generation. As a step toward the goal of studying cochlear tuning throughout the human lifespan, this paper exploits the relationship and explores two strategies for estimating delay trends, energy weighting and peak picking, both of which emphasize data at the peaks of the magnitude fine structure. Distortion product otoacoustic emissions (DPOAEs) at 2f1-f2 were recorded, and their reflection components were extracted in 184 subjects ranging in age from prematurely born neonates to elderly adults. DPOAEs were measured from 0.5-4 kHz in all age groups and extended to 8 kHz in young adults. Delay trends were effectively estimated using either energy weighting or peak picking, with the former method yielding slightly shorter delays and the latter somewhat smaller confidence intervals. Delay and tuning estimates from young adults roughly match those obtained from SFOAEs. Although the match is imperfect, reflection-component delays showed the expected bend (apical-basal transition) near 1 kHz, consistent with a break in cochlear scaling. Consistent with other measures of tuning, the term newborn group showed the longest delays and sharpest tuning over much of the frequency range.
The SPLASH Survey: Spectroscopy of 15 M31 Dwarf Spheroidal Satellite Galaxies
NASA Astrophysics Data System (ADS)
Tollerud, Erik J.; Beaton, Rachael L.; Geha, Marla C.; Bullock, James S.; Guhathakurta, Puragra; Kalirai, Jason S.; Majewski, Steve R.; Kirby, Evan N.; Gilbert, Karoline M.; Yniguez, Basilio; Patterson, Richard J.; Ostheimer, James C.; Cooke, Jeff; Dorman, Claire E.; Choudhury, Abrar; Cooper, Michael C.
2012-06-01
We present a resolved star spectroscopic survey of 15 dwarf spheroidal (dSph) satellites of the Andromeda galaxy (M31). We filter foreground contamination from Milky Way (MW) stars, noting that MW substructure is evident in this contaminant sample. We also filter M31 halo field giant stars and identify the remainder as probable dSph members. We then use these members to determine the kinematical properties of the dSphs. For the first time, we confirm that And XVIII, XXI, and XXII show kinematics consistent with bound, dark-matter-dominated galaxies. From the velocity dispersions for the full sample of dSphs we determine masses, which we combine with the size and luminosity of the galaxies to produce mass-size-luminosity scaling relations. With these scalings we determine that the M31 dSphs are fully consistent with the MW dSphs, suggesting that the well-studied MW satellite population provides a fair sample for broader conclusions. We also estimate dark matter halo masses of the satellites and find that there is no sign that the luminosity of these galaxies depends on their dark halo mass, a result consistent with what is seen for MW dwarfs. Two of the M31 dSphs (And XV, XVI) have estimated maximum circular velocities smaller than 12 km s-1 (to 1σ), which likely places them within the lowest-mass dark matter halos known to host stars (along with Boötes I of the MW). Finally, we use the systemic velocities of the M31 satellites to estimate the mass of the M31 halo, obtaining a virial mass consistent with previous results. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
NASA Astrophysics Data System (ADS)
Nazari, Mohammad; Hancock, B. Logan; Anderson, Jonathan; Hobart, Karl D.; Feygelson, Tatyana I.; Tadjer, Marko J.; Pate, Bradford B.; Anderson, Travis J.; Piner, Edwin L.; Holtz, Mark W.
2017-10-01
Studies of diamond material for thermal management are reported for a nominally 1-μm thick layer grown on silicon. The thickness of the diamond is measured using spectroscopic ellipsometry. Spectra are consistently modeled using a diamond layer taking into account surface roughness and requiring an interlayer of nominally silicon carbide. The presence of the interlayer is confirmed by transmission electron microscopy. Thermal conductivity is determined using a microfabricated heater, followed by back etching to produce a supported diamond membrane. Micro-Raman mapping of the diamond phonon is used to estimate the temperature rise under known drive conditions of the resistive heater. Consistent values are obtained for thermal conductivity based on a straightforward analytical calculation using the phonon shift to estimate temperature and on finite element simulations which take both temperature rise and thermal stress into account.
Time-of-flight PET time calibration using data consistency
NASA Astrophysics Data System (ADS)
Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan
2018-05-01
This paper presents new data-driven methods for the time-of-flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which involves only the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but that remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum-likelihood reconstruction algorithms.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates; these are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
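The step-size condition above can be illustrated with a minimal sketch (not the paper's two-sample mixture setting; the function name and data are illustrative): a scaled gradient ascent on a Normal(θ, 1) log-likelihood, whose successive-approximations iteration converges to the maximum-likelihood estimate exactly when the step size ε lies between 0 and 2.

```python
def ml_steepest_ascent(xs, theta0=0.0, eps=1.0, iters=100):
    """Successive-approximations (steepest ascent) iteration for the MLE.

    For a Normal(theta, 1) likelihood the log-likelihood gradient is
    sum(x - theta), so the scaled update
        theta <- theta + (eps / n) * sum(x - theta)
              =  (1 - eps) * theta + eps * mean(x)
    converges to the MLE (the sample mean) iff 0 < eps < 2.
    """
    n = len(xs)
    theta = theta0
    for _ in range(iters):
        theta += (eps / n) * sum(x - theta for x in xs)
    return theta
```

Any ε in (0, 2) contracts the error by a factor |1 − ε| per step, mirroring the local convergence result quoted in the abstract.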
A test and re-estimation of Taylor's empirical capacity-reserve relationship
Long, K.R.
2009-01-01
In 1977, Taylor proposed a constant elasticity model relating capacity choice in mines to reserves. A test of this model using a very large (n = 1,195) dataset confirms its validity but obtains significantly different estimated values for the model coefficients. Capacity is somewhat inelastic with respect to reserves, with an elasticity of 0.65 estimated for open-pit plus block-cave underground mines and 0.56 for all other underground mines. These new estimates should be useful for capacity determinations in scoping studies and as a starting point for feasibility studies. The results are robust over a wide range of deposit types, deposit sizes, and time, consistent with physical constraints on mine capacity that are largely independent of technology. © 2009 International Association for Mathematical Geology.
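A constant elasticity model of the kind described has the form capacity = k · reserves^e, so the coefficients can be re-estimated by ordinary least squares in log-log space. A minimal sketch (the function name and data are illustrative, not the paper's dataset or estimation code):

```python
import math

def fit_taylor(reserves, capacities):
    """Fit capacity = k * reserves**e by OLS on log-transformed data."""
    xs = [math.log(r) for r in reserves]
    ys = [math.log(c) for c in capacities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression is the elasticity e.
    e = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    k = math.exp(my - e * mx)  # intercept back-transformed to k
    return k, e
```

An elasticity below 1, as in the estimates of 0.65 and 0.56 quoted above, means capacity grows less than proportionally with reserves.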
New learning based super-resolution: use of DWT and IGMRF prior.
Gajjar, Prakash P; Joshi, Manjunath V
2010-05-01
In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low spatial resolution test image and a database consisting of low and high spatial resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT) based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an inhomogeneous Gaussian Markov random field (IGMRF), and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function, which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting experiments on grayscale as well as color images. The method is compared with the standard interpolation technique and also with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks and remote surveillance, where memory, transmission bandwidth, and camera cost are the main constraints.
Contact Force Compensated Thermal Stimulators for Holistic Haptic Interfaces.
Sim, Jai Kyoung; Cho, Young-Ho
2016-05-01
We present a contact force compensated thermal stimulator that can provide a consistent temperature sensation on the human skin independent of the contact force between the thermal stimulator and the skin. Previous passive thermal stimulators were not capable of providing a consistent temperature on the human skin even when using identical heat source voltage, due to an inconsistency in heat conduction, which changes with the force-dependent thermal contact resistance. We propose a force-based feedback method that monitors the contact force and controls the heat source voltage according to this contact force, thus providing a consistent temperature on the skin. We composed a heat circuit model equivalent to the skin heat-transfer rate as it is changed by the contact forces, and we obtained the optimal voltage condition for a constant skin heat-transfer rate independent of the contact force using a numerical estimation simulation tool. Then, in the experiment, we heated real human skin at the obtained heat source voltage condition and investigated the skin heat-transfer rate by measuring the skin temperature at various times at different levels of contact force. In the numerical estimation results, the skin heat-transfer rate showed a linear profile in the contact force range of 1-3 N; from this profile we obtained the voltage equation for heat source control. In the experimental study, we adjusted the heat source voltage according to the contact force based on the obtained equation. As a result, without heat source voltage control for the contact forces, the coefficient of variation (CV) of the skin heat-transfer rate in the contact force range of 1-3 N was found to be 11.9%.
On the other hand, with heat source voltage control for the contact forces, the CV of the skin heat-transfer rate in the contact force range of 1-3 N was found to be only 2.0%, which indicates an 83.2% improvement in consistency compared to the skin heat-transfer rate without heat source voltage control. The present technique provides a consistent temperature sensation on the human skin independent of the body movement environment; therefore, it has high potential for use in holistic haptic interfaces that include thermal displays.
Hevesi, Joseph A.; Istok, Jonathan D.; Flint, Alan L.
1992-01-01
Values of average annual precipitation (AAP) are desired for hydrologic studies within a watershed containing Yucca Mountain, Nevada, a potential site for a high-level nuclear-waste repository. Reliable values of AAP are not yet available for most areas within this watershed because of a sparsity of precipitation measurements and the need to obtain measurements over a sufficient length of time. To estimate AAP over the entire watershed, historical precipitation data and station elevations were obtained from a network of 62 stations in southern Nevada and southeastern California. Multivariate geostatistics (cokriging) was selected as an estimation method because of a significant (p = 0.05) correlation of r = 0.75 between the natural log of AAP and station elevation. A sample direct variogram for the transformed variable, TAAP = ln[(AAP) × 1000], was fitted with an isotropic, spherical model defined by a small nugget value of 5000, a range of 190 000 ft, and a sill value equal to the sample variance of 163 151. Elevations for 1531 additional locations were obtained from topographic maps to improve the accuracy of cokriged estimates. A sample direct variogram for elevation was fitted with an isotropic model consisting of a nugget value of 5500 and three nested transition structures: a Gaussian structure with a range of 61 000 ft, a spherical structure with a range of 70 000 ft, and a quasi-stationary, linear structure. The use of an isotropic, stationary model for elevation was considered valid within a sliding-neighborhood radius of 120 000 ft. The problem of fitting a positive-definite, nonlinear model of coregionalization to an inconsistent sample cross variogram for TAAP and elevation was solved by a modified use of the Cauchy-Schwarz inequality. A selected cross-variogram model consisted of two nested structures: a Gaussian structure with a range of 61 000 ft and a spherical structure with a range of 190 000 ft.
Cross validation was used for model selection and for comparing the geostatistical model with six alternate estimation methods. Multivariate geostatistics provided the best cross-validation results.
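The isotropic spherical variogram model fitted to TAAP above has a standard closed form; a minimal sketch, with the abstract's nugget, sill, and range supplied as parameters (the function name is illustrative):

```python
def spherical_variogram(h, nugget, sill, range_):
    """Isotropic spherical variogram model gamma(h).

    gamma(0) = 0; for 0 < h < range_:
        gamma(h) = nugget + (sill - nugget) * (1.5*(h/a) - 0.5*(h/a)**3)
    with a = range_; gamma(h) = sill for h >= range_.
    """
    if h == 0:
        return 0.0
    if h >= range_:
        return sill
    r = h / range_
    return nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3)
```

With the TAAP fit (nugget 5000, range 190 000 ft, sill 163 151), the model rises smoothly from the nugget and levels off at the sill once station separation reaches the range.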
Large Magellanic Cloud Near-infrared Synoptic Survey. IV. Leavitt Laws for Type II Cepheid Variables
NASA Astrophysics Data System (ADS)
Bhardwaj, Anupam; Macri, Lucas M.; Rejkuba, Marina; Kanbur, Shashi M.; Ngeow, Chow-Choong; Singh, Harinder P.
2017-04-01
We present time-series observations of Population II Cepheids in the Large Magellanic Cloud at near-infrared (JHKs) wavelengths. Our sample consists of 81 variables with accurate periods and optical (VI) magnitudes from the OGLE survey, covering various subtypes of pulsators (BL Herculis, W Virginis, and RV Tauri). We generate light-curve templates using high-quality I-band data in the LMC from OGLE and Ks-band data in the Galactic bulge from the VISTA Variables in the Vía Láctea survey, and use them to obtain robust mean magnitudes. We derive period-luminosity (P-L) relations in the near-infrared and period-Wesenheit (P-W) relations by combining optical and near-infrared data. Our P-L and P-W relations are consistent with published work when excluding long-period RV Tauri variables. We find that Pop II Cepheids and RR Lyraes follow the same P-L relations in the LMC. Therefore, we use the trigonometric parallax from Gaia DR1 for VY Pyx and the Hubble Space Telescope parallaxes for κ Pav and 5 RR Lyrae variables to obtain an absolute calibration of the Galactic Ks-band P-L relation, resulting in a distance modulus to the LMC of μ_LMC = 18.54 ± 0.08 mag. We update the mean magnitudes of Pop II Cepheids in Galactic globular clusters using our light-curve templates and obtain distance estimates to those systems, anchored to a precise late-type eclipsing binary distance to the LMC. We find that the distances to these globular clusters based on Pop II Cepheids are consistent (within 2σ) with estimates based on the M_V-[Fe/H] relation for horizontal branch stars.
NASA Astrophysics Data System (ADS)
da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham
2017-06-01
The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of the order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) has been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique have been obtained from three separate laboratories. In addition, a series of high quality, long duration freezing curves have been obtained for each cell, using three different high quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves was then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.
NASA Technical Reports Server (NTRS)
Nigro, N. J.; Elkouh, A. F.
1975-01-01
The attitude of the balloon system can be determined as a function of time if: (a) a method for simulating the motion of the system is available, and (b) the initial state is known. The initial state is obtained by fitting the system motion (as measured by sensors) to the corresponding output predicted by the mathematical model. In the case of the LACATE experiment, the sensors consisted of three orthogonally oriented rate gyros and a magnetometer, all mounted on the research platform. The initial state was obtained by fitting the angular velocity components measured with the gyros to the corresponding values obtained from the solution of the math model. A block diagram illustrating the attitude determination process employed for the LACATE experiment is shown. The process consists of three essential parts: a process for simulating the balloon system, an instrumentation system for measuring the output, and a parameter estimation process for systematically and efficiently solving for the initial state. Results are presented and discussed.
Imputing Risk Tolerance From Survey Responses
Kimball, Miles S.; Sahm, Claudia R.; Shapiro, Matthew D.
2010-01-01
Economic theory assigns a central role to risk preferences. This article develops a measure of relative risk tolerance using responses to hypothetical income gambles in the Health and Retirement Study. In contrast to most survey measures that produce an ordinal metric, this article shows how to construct a cardinal proxy for the risk tolerance of each survey respondent. The article also shows how to account for measurement error in estimating this proxy and how to obtain consistent regression estimates despite the measurement error. The risk tolerance proxy is shown to explain differences in asset allocation across households. PMID:20407599
Income dynamics with a stationary double Pareto distribution.
Toda, Alexis Akira
2011-04-01
Once the trend is controlled for, the distribution of personal income appears to be double Pareto, a distribution that obeys the power law exactly in both the upper and the lower tails. I propose a model of income dynamics with a stationary distribution that is consistent with this fact. Using US male wage data for 1970-1993, I estimate the power law exponent in two ways: (i) from each cross section, assuming that the distribution has converged to the stationary distribution, and (ii) from a panel, directly estimating the parameters of the income dynamics model. Both approaches yield the same value of 8.4.
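A common cross-sectional way to estimate an upper-tail power-law exponent, in the spirit of approach (i), is the Hill estimator computed over the k largest observations. This is a hedged sketch of that generic technique, not the paper's actual estimator:

```python
import math

def hill_estimator(sample, k):
    """Hill estimator of the upper-tail power-law exponent alpha.

    Uses the k largest observations: alpha_hat = k / sum(log(x_i / x_(k+1))),
    where x_(k+1) is the (k+1)-th largest value, acting as the threshold.
    """
    xs = sorted(sample, reverse=True)
    threshold = xs[k]
    return k / sum(math.log(x / threshold) for x in xs[:k])
```

Applied to a sample whose upper tail is exactly Pareto with exponent 8.4 (the value reported above), the estimator recovers the exponent up to small-sample bias.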
Doubly robust matching estimators for high dimensional confounding adjustment.
Antonelli, Joseph; Cefalu, Matthew; Palmer, Nathan; Agniel, Denis
2018-05-11
Valid estimation of treatment effects from observational data requires proper control of confounding. If the number of covariates is large relative to the number of observations, then controlling for all available covariates is infeasible. In cases where a sparsity condition holds, variable selection or penalization can reduce the dimension of the covariate space in a manner that allows for valid estimation of treatment effects. In this article, we propose matching on both the estimated propensity score and the estimated prognostic scores when the number of covariates is large relative to the number of observations. We derive asymptotic results for the matching estimator and show that it is doubly robust in the sense that only one of the two score models need be correct to obtain a consistent estimator. We show via simulation its effectiveness in controlling for confounding and highlight its potential to address nonlinear confounding. Finally, we apply the proposed procedure to analyze the effect of gender on prescription opioid use using insurance claims data. © 2018, The International Biometric Society.
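The proposed matching can be sketched as nearest-neighbor matching on the (propensity, prognostic) score pair. This toy version (illustrative names and data, and it assumes both scores have already been estimated, e.g. by penalized regression) matches each treated unit to its closest control and averages the outcome differences:

```python
def match_on_scores(treated, controls):
    """Average treatment effect on the treated via 1-NN matching.

    Each unit is a tuple (propensity_score, prognostic_score, outcome).
    Controls may be reused (matching with replacement).
    """
    def dist2(a, b):
        # Squared Euclidean distance in the 2-D score space.
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    effects = []
    for t in treated:
        best = min(controls, key=lambda c: dist2(t, c))
        effects.append(t[2] - best[2])  # treated outcome minus matched control
    return sum(effects) / len(effects)
```

The double robustness discussed above enters through the scores themselves: a good match on either score can suffice for consistency, which this sketch does not attempt to demonstrate.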
Estimating maneuvers for precise relative orbit determination using GPS
NASA Astrophysics Data System (ADS)
Allende-Alba, Gerardo; Montenbruck, Oliver; Ardaens, Jean-Sébastien; Wermuth, Martin; Hugentobler, Urs
2017-01-01
Precise relative orbit determination is an essential element for the generation of science products from distributed instrumentation of formation flying satellites in low Earth orbit. According to the mission profile, the required formation is typically maintained and/or controlled by executing maneuvers. In order to generate consistent and precise orbit products, a strategy for maneuver handling is mandatory to avoid discontinuities or precision degradation before, during and after maneuver execution. Precise orbit determination offers the possibility of maneuver estimation in an adjustment of single-satellite trajectories using GPS measurements. However, a consistent formulation of a precise relative orbit determination scheme requires the implementation of a maneuver estimation strategy, which can be used, in addition, to improve the precision of maneuver estimates by drawing upon differential GPS measurements. The present study introduces a method for precise relative orbit determination based on a reduced-dynamic batch processing of differential GPS pseudorange and carrier phase measurements, which includes maneuver estimation as part of the relative orbit adjustment. The proposed method has been validated using flight data from space missions with different rates of maneuvering activity, including the GRACE, TanDEM-X and PRISMA missions. The results show the feasibility of obtaining precise relative orbits without degradation in the vicinity of maneuvers, as well as improved maneuver estimates that can be used for better maneuver planning in flight dynamics operations.
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
The relative impact of baryons and cluster shape on weak lensing mass estimates of galaxy clusters
NASA Astrophysics Data System (ADS)
Lee, B. E.; Le Brun, A. M. C.; Haq, M. E.; Deering, N. J.; King, L. J.; Applegate, D.; McCarthy, I. G.
2018-05-01
Weak gravitational lensing depends on the integrated mass along the line of sight. Baryons contribute to the mass distribution of galaxy clusters and hence to the resulting mass estimates from lensing analysis. We use the cosmo-OWLS suite of hydrodynamic simulations to investigate the impact of baryonic processes on the bias and scatter of weak lensing mass estimates of clusters. These estimates are obtained by fitting NFW profiles to mock data using MCMC techniques. In particular, we examine the difference in estimates between dark matter-only runs and those including various prescriptions for baryonic physics. We find no significant difference in the mass bias when baryonic physics is included, though the overall mass estimates are suppressed when feedback from AGN is included. For the lowest-mass systems for which a reliable mass can be obtained (M200 ≈ 2 × 10^14 M⊙), we find a bias of ≈ -10 per cent. The magnitude of the bias tends to decrease for higher mass clusters, consistent with no bias for the most massive clusters, which have masses comparable to those found in the CLASH and HFF samples. For the lowest mass clusters, the mass bias is particularly sensitive to the fit radii and the limits placed on the concentration prior, rendering reliable mass estimates difficult. The scatter in mass estimates between the dark matter-only and the various baryonic runs is less than that between different projections of individual clusters, highlighting the importance of triaxiality.
Nana, Roger; Hu, Xiaoping
2010-01-01
k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced and demonstrated as a goodness measure of k-space-based reconstruction in parallel imaging. The data consistency error (DCE) is calculated as the sum of squared differences between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction, and for selecting the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters, and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
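The DCE definition above is straightforward to compute once the missing k-space data have been interpolated and the acquired samples re-estimated from them; a minimal sketch (names are illustrative, and producing the `estimated` values from the GRAPPA kernel is assumed to happen elsewhere):

```python
def data_consistency_error(acquired, estimated):
    """Sum of squared differences between acquired (complex) k-space
    samples and their re-estimated values from kernel interpolation."""
    return sum(abs(a - e) ** 2 for a, e in zip(acquired, estimated))
```

Because it needs only the acquired samples and their kernel-based re-predictions, the metric can rank candidate kernel supports without a full image reconstruction.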
A Mathematical View of Water Table Fluctuations in a Shallow Aquifer in Brazil.
Neto, Dagmar C; Chang, Hung K; van Genuchten, Martinus Th
2016-01-01
Detailed monitoring of the groundwater table can provide important data about both short- and long-term aquifer processes, including information useful for estimating recharge and facilitating groundwater modeling and remediation efforts. In this paper, we present results of 4 years (2002 to 2005) of monitoring groundwater levels in the Rio Claro Aquifer using observation wells drilled at the Rio Claro campus of São Paulo State University in Brazil. The data were used to follow natural periodic fluctuations in the water table, specifically those resulting from earth tides and seasonal recharge cycles. Statistical analyses included methods of time-series analysis using Fourier analysis, cross-correlation, and R/S analysis. Relationships could be established between rainfall and well recovery, as well as the persistence and degree of autocorrelation of the water table variations. We further used numerical solutions of the Richards equation to obtain estimates of the recharge rate and seasonal groundwater fluctuations. Seasonal soil moisture transit times through the vadose zone obtained with the numerical solution were very close to those obtained with the cross-correlation analysis. We also employed a little-used deep drainage boundary condition to obtain estimates of seasonal water table fluctuations, which were found to be consistent with observed transient groundwater levels during the period of study. © 2015, National Ground Water Association.
Zhang, Yang; Tang, Liguo; Tian, Hua; Wang, Jiyang; Cao, Wenwu; Zhang, Zhongwu
2017-08-15
Resonant ultrasound spectroscopy (RUS) was used to determine the temperature dependence of the full matrix material constants of PZT-8 piezoceramics from room temperature to 100 °C. Property variations from sample to sample were eliminated by using only one sample, so that data self-consistency could be guaranteed. The RUS measurement system error was estimated to be lower than 2.35%. The obtained full matrix material constants at different temperatures all have excellent self-consistency, which can help accurately predict device performance at high temperatures using finite element simulations.
A Comparison of Surface Infrared with Rectal Thermometry in Dogs.
Omóbòwálé, T O; Ogunro, B N; Odigie, E A; Otuh, P I; Olugasa, B O
2017-12-30
Accurate determination of temperature is crucial in the diagnosis of febrile conditions. Although few techniques have proven as useful and reliable a predictor of core body temperature as rectal thermometry, the process of obtaining the rectal temperature can be stressful in dogs. Infrared thermometry is a noncontact method of measuring body temperature, with advantages that include speed, convenience, reduced stress to the animals and reduced occupational risk to the animal handler. Therefore, there is a need to assess the consistency and agreement between non-contact infrared thermometry and traditional rectal thermometry in body temperature estimation. This study compared and assessed the sensitivity of a non-contact infrared thermometer used on the forehead and nasal regions, respectively, with that of a rectal thermometer in dogs for body temperature estimation. One hundred and thirty (130) dogs presented for veterinary attention at the Veterinary Teaching Hospital (VTH), University of Ibadan, Nigeria, were enrolled in this study during August to September 2014, irrespective of sex, age, breed or health status. Temperatures of dogs presented at the clinic were obtained using multiple non-contact infrared thermometric measurements in the nasal and forehead regions, and by rectal thermometry. A multivariate cross-matrix analysis was used to assess the difference in measurements between rectal thermometry and non-contact infrared thermometry. Descriptive statistics were used to compare the variation and trend regularity of the nasal and forehead infrared thermometry. A logistic regression of the difference in measurements was computed at a 95% confidence interval and P < 0.05. The mean difference revealed that the rectal temperature was 5.33 °C higher than the non-contact infrared forehead-based temperature and 7.57 °C higher than the nasal-based temperature measurements, respectively.
The Bland-Altman (B-A) plot showed the 95% limits of agreement between the forehead- and nasal-based infrared thermometry methods. Temperature measurements obtained using non-contact infrared thermometry (forehead and nasal regions of the head) were poor in consistency and agreement compared to rectal thermometry. The usefulness of non-contact forehead infrared thermometry in routine clinical practice as a close estimate of core body temperature depends on accurate calibration; it is therefore not recommended.
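The Bland-Altman limits of agreement referred to above are the mean of the paired differences (the bias) plus or minus 1.96 standard deviations of those differences; a minimal sketch (illustrative function name and data, not the study's code):

```python
def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement between two paired methods.

    Returns (bias, lower_limit, upper_limit) where the limits are
    bias -/+ 1.96 * SD of the paired differences.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Wide limits of agreement relative to clinically meaningful temperature differences are what lead to the "poor consistency and agreement" conclusion drawn in the abstract.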
Ragagnin, Marilia Nagata; Gorman, Daniel; McCarthy, Ian Donald; Sant'Anna, Bruno Sampaio; de Castro, Cláudio Campi; Turra, Alexander
2018-01-11
Obtaining accurate and reproducible estimates of internal shell volume is a vital requirement for studies into the ecology of a range of shell-occupying organisms, including hermit crabs. Shell internal volume is usually estimated by filling the shell cavity with water or sand; however, there has been no systematic assessment of the reliability of these methods, and moreover no comparison with modern alternatives, e.g., computed tomography (CT). This study undertakes the first assessment of the measurement reproducibility of three contrasting approaches across a spectrum of shell architectures and sizes. While our results suggested a certain level of variability inherent in all methods, we conclude that a single measure using sand/water is likely to be sufficient for the majority of studies. However, care must be taken, as precision may decline with increasing shell size and structural complexity. CT provided less variation between repeat measures, but its volume estimates were consistently lower compared to sand/water, and the method will need improvements before it can be used as an alternative. CT also indicated that volume may be underestimated using sand/water, due to the presence of air spaces visible in filled shells scanned by CT. Lastly, we encourage authors to clearly describe how volume estimates were obtained.
NASA Astrophysics Data System (ADS)
Latypov, A. F.
2008-12-01
Fuel economy along the boost trajectory of an aerospace plane was estimated for the case of energy supply to the free stream. Initial and final flight velocities were specified. A model of gliding flight above cold air in an infinite isobaric thermal wake was used. Fuel consumption rates were compared along optimal trajectories. The calculations were carried out for a combined power plant consisting of a ramjet and a liquid-propellant engine. In the first part of the paper, an exergy model was built to estimate the ramjet thrust and specific impulse. A quadratic dependence on aerodynamic lift was used to estimate the aerodynamic drag of the aircraft. The energy for flow heating was obtained at the expense of an equivalent reduction in the exergy of the combustion products. Dependencies were obtained for the increase in the range coefficient of cruise flight at different Mach numbers. The second part of the paper presents a mathematical model for the boost interval of the aircraft flight trajectory and computational results for the reduction of fuel consumption along the boost trajectory for a given value of the energy supplied in front of the aircraft.
NASA Astrophysics Data System (ADS)
Latypov, A. F.
2009-03-01
The fuel economy along the boost trajectory of an aerospace plane was estimated during energy supply to the free stream. Initial and final flight velocities were given. A model of gliding flight above cold air in an infinite isobaric thermal wake was used. Fuel consumption was compared along optimal trajectories. The calculations were carried out for a combined power plant consisting of a ramjet and a liquid-propellant engine. An exergy model was constructed in the first part of the paper for estimating the ramjet thrust and specific impulse. To estimate the aerodynamic drag of the aircraft, a quadratic dependence on aerodynamic lift is used. The energy for flow heating is obtained at the expense of an equivalent decrease in the exergy of the combustion products. Dependencies are obtained for increasing the range coefficient of cruise flight at different Mach numbers. In the second part of the paper, a mathematical model is presented for the boost part of the flight trajectory of the flying vehicle, together with computational results for reducing the fuel expenses along the boost trajectory for a given value of the energy supplied in front of the aircraft.
Borgen, Nicolai T
2014-11-01
This paper addresses the recent discussion on confounding in the returns-to-college-quality literature using the Norwegian case. The main advantage of studying Norway is the quality of the data. Norwegian administrative data provide information on college applications, family relations and a rich set of control variables for all Norwegian citizens applying to college between 1997 and 2004 (N = 141,319) and their succeeding wages between 2003 and 2010 (676,079 person-year observations). With these data, this paper uses a subset of the models that have rendered mixed findings in the literature in order to investigate to what extent confounding biases the returns to college quality. I compare estimates obtained using standard regression models to estimates obtained using the self-revelation model of Dale and Krueger (2002), a sibling fixed effects model and the instrumental variable model used by Long (2008). Using these methods, I consistently find that the returns to college quality increase over the course of students' work careers, with positive returns emerging only later on. I conclude that the standard regression estimate provides a reasonable estimate of the returns to college quality. Copyright © 2014 Elsevier Inc. All rights reserved.
Usher syndrome: definition and estimate of prevalence from two high-risk populations.
Boughman, J A; Vernon, M; Shaver, K A
1983-01-01
The Usher Syndrome (US) refers to the combined neurosensory deficits of profound hearing impairment and retinitis pigmentosa. We have obtained information on 600 cases of deaf-blindness from the registry of the Helen Keller National Center for Deaf-Blind Youths and Adults (HKNC). Of these, 54% met the diagnostic criteria of US, although only 23.8% were so diagnosed. More extensive analysis of 189 Usher clients from HKNC showed an excess of males, some variability in audiograms, and wide ophthalmologic variation. Genetic analysis of 113 sibships showed a segregation ratio consistent with recessive inheritance. The Acadian population of Louisiana has a high frequency of US which contributes significantly to the deaf population of the state. Among 48 cases from the Louisiana School for the Deaf, there was an excess of males, more variability in audiograms than expected, and an increased segregation ratio in the 26 informative sibships. Estimates of prevalence obtained using registry data and statistics from Louisiana clearly suggest that the previous estimate of 2.4 per 100,000 is too low for the United States. Recognizing problems with ascertainment, our prevalence estimate of 4.4 per 100,000 is still considered quite conservative.
NASA Astrophysics Data System (ADS)
Kristyán, Sándor
1997-11-01
In the author's previous work (Chem. Phys. Lett. 247 (1995) 101 and Chem. Phys. Lett. 256 (1996) 229) a simple quasi-linear relationship was introduced between the number of electrons, N, participating in any molecular system and the correlation energy: -0.035(N - 1) > Ecorr[hartree] > -0.045(N - 1). This relationship was developed to estimate the correlation energy more accurately and immediately in ab initio calculations by using the partial charges of atoms in the molecule, which are easily obtained after Hartree-Fock self-consistent field (HF-SCF) calculations. The method is compared to the well-known B3LYP, MP2, CCSD and G2M methods. Correlation energy estimates for negatively (-1) charged atomic ions are also reported.
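As a quick numerical illustration of the quoted bounds, the estimate is a one-liner; the function name and the 10-electron example below are illustrative, not from the paper:

```python
# Quasi-linear bounds on the correlation energy quoted above:
# -0.035*(N-1) > E_corr[hartree] > -0.045*(N-1) for N electrons.

def ecorr_bounds(n_electrons: int) -> tuple[float, float]:
    """Return (upper, lower) bounds on the correlation energy in hartree."""
    upper = -0.035 * (n_electrons - 1)  # least negative bound
    lower = -0.045 * (n_electrons - 1)  # most negative bound
    return upper, lower

# Example: a 10-electron system (e.g. the water molecule).
hi, lo = ecorr_bounds(10)
```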
Online sequential Monte Carlo smoother for partially observed diffusion processes
NASA Astrophysics Data System (ADS)
Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain
2018-12-01
This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method that allows such approximations to be computed online, i.e., as the observations are received, and with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations, since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. This estimator is proved to be consistent, and its performance is illustrated using data from two models.
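A bootstrap particle filter illustrates the basic difficulty the abstract addresses: for a discretely observed diffusion the transition density is unknown, but one can still *sample* transitions (here with a crude Euler-Maruyama step) and never evaluate that density. This is a minimal filtering sketch, not the authors' smoother; the Ornstein-Uhlenbeck dynamics, noise levels and observations are illustrative assumptions.

```python
# Minimal bootstrap particle filter for a partially observed diffusion.
# State transitions are sampled by Euler-Maruyama, so the (unknown)
# transition density is never evaluated.
import math, random

random.seed(0)
THETA, SIGMA, DT, OBS_SD, N = 1.0, 0.5, 0.1, 0.3, 500

def propagate(x):
    """One Euler-Maruyama step of dX = -THETA*X dt + SIGMA dW."""
    return x - THETA * x * DT + SIGMA * math.sqrt(DT) * random.gauss(0, 1)

def filter_means(observations):
    particles = [random.gauss(0, 1) for _ in range(N)]
    means = []
    for y in observations:
        particles = [propagate(x) for x in particles]
        # Gaussian observation weights; normalize, take the mean, resample.
        w = [math.exp(-0.5 * ((y - x) / OBS_SD) ** 2) for x in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        means.append(sum(wi * xi for wi, xi in zip(w, particles)))
        particles = random.choices(particles, weights=w, k=N)
    return means

obs = [0.8, 0.6, 0.7, 0.5, 0.6]   # toy observations
est = filter_means(obs)
```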
ERIC Educational Resources Information Center
Kadane, Joseph B.; And Others
This paper offers a preliminary analysis of the effects of a semi-segregated school system on the IQ's of its students. The basic data consist of IQ scores for fourth, sixth, and eighth grades and associated environmental data obtained from their school records. A statistical model is developed to analyze longitudinal data when both process error…
Effective pore size and radius of capture for K(+) ions in K-channels.
Moldenhauer, Hans; Díaz-Franulic, Ignacio; González-Nilo, Fernando; Naranjo, David
2016-02-02
Reconciling protein functional data with crystal structure is arduous because rare conformations or crystallization artifacts occur. Here we present a tool to validate the dimensions of open pore structures of potassium-selective ion channels. We used freely available algorithms to calculate the molecular contour of the pore and determine the effective internal pore radius (r(E)) in several K-channel crystal structures. r(E) was operationally defined as the radius of the biggest sphere able to enter the pore from the cytosolic side. We obtained consistent r(E) estimates for MthK and Kv1.2/2.1 structures, with r(E) = 5.3-5.9 Å and r(E) = 4.5-5.2 Å, respectively. We compared these structural estimates with functional assessments of the internal mouth radii of capture (r(C)) for the two electrophysiological counterparts of these structures, the large-conductance calcium-activated K-channel (r(C) = 2.2 Å) and the Shaker Kv-channel (r(C) = 0.8 Å), respectively. Calculating the difference between r(E) and r(C) produced consistent size radii of 3.1-3.7 Å and 3.6-4.4 Å for hydrated K(+) ions. These hydrated K(+) estimates harmonize with others obtained with diverse experimental and theoretical methods. Thus, these findings validate the MthK and Kv1.2/2.1 structures as templates for open BK and Kv-channels, respectively.
Novel multireceiver communication systems configurations based on optimal estimation theory
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1992-01-01
A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates the various phase processes received at the different receivers with coupled phase-locked loops, wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximization of the energy per bit to noise power spectral density ratio is achieved. A novel adaptive algorithm for the estimation of the signal model parameters, when these are not known a priori, is also presented.
NASA Astrophysics Data System (ADS)
Sun, Xiaolong; Xiang, Yang; Shi, Zheming
2018-05-01
Groundwater flow models implemented to manage regional water resources require aquifer hydraulic parameters. Traditional methods for obtaining these parameters include laboratory experiments, field tests and model inversions, each of which is potentially hindered by its own limitations. Here, we propose a methodology for estimating hydraulic conductivity and storage coefficients using the spectral characteristics of coseismic groundwater-level oscillations and seismic Rayleigh waves. The results from Well X10 are consistent with the variations and spectral characteristics of the water-level oscillations and seismic waves, and yield an estimated hydraulic conductivity of approximately 1 × 10⁻³ m s⁻¹ and a storativity of 15 × 10⁻⁶. The proposed methodology for estimating hydraulic parameters in confined aquifers is a practical and novel approach for groundwater management and seismic precursor anomaly analyses.
NASA Astrophysics Data System (ADS)
Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo
2018-04-01
The simultaneous estimation of the asthenosphere's viscosity and of coseismic slip/afterslip is expected to greatly improve the consistency of the estimation results with crustal deformation data collected at widely distributed observation points, compared to estimation of slips only. Such an estimate can be formulated as a non-linear inverse problem for the material property of viscosity and an input force equivalent to fault slips, based on large-scale finite-element (FE) modeling of crustal deformation, in which the number of degrees of freedom is on the order of 10⁹. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enables the targeted estimation to be completed with a moderate amount of computational resources.
Multitaper scan-free spectrum estimation using a rotational shear interferometer.
Lepage, Kyle; Thomson, David J; Kraut, Shawn; Brady, David J
2006-05-01
Multitaper methods for a scan-free spectrum estimation that uses a rotational shear interferometer are investigated. Before source spectra can be estimated, the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated, with additive, white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9 degrees from a source with a SNR of 70.1, with a significance level of 10⁻⁴, approximately 4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectra estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with a SNR of 1.6 near a large spectral feature. The SNR 1.6 feature is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.
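The out-of-band leakage mechanism can be demonstrated with a single taper; here a Hann window stands in for the Slepian tapers of the multitaper method, and the tone frequency, bins and record length are illustrative choices, not values from the paper.

```python
# Leakage demo: an off-bin sinusoid leaks energy into distant DFT bins
# when untapered; a Hann taper suppresses that leakage by orders of
# magnitude at the cost of a slightly wider mainlobe.
import math

N = 256
tone = [math.cos(2 * math.pi * 10.25 * n / N) for n in range(N)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def dft_mag(x, k):
    """Magnitude of the length-N DFT of x at integer bin k."""
    re = sum(xi * math.cos(2 * math.pi * k * n / N) for n, xi in enumerate(x))
    im = sum(-xi * math.sin(2 * math.pi * k * n / N) for n, xi in enumerate(x))
    return math.hypot(re, im)

tapered = [xi * wi for xi, wi in zip(tone, hann)]
# Leakage at bin 40 (far from the tone near bin 10), relative to the peak.
leak_rect = dft_mag(tone, 40) / dft_mag(tone, 10)
leak_hann = dft_mag(tapered, 40) / dft_mag(tapered, 10)
```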
NASA Astrophysics Data System (ADS)
Ghafaryasl, Babak; Baart, Robert; de Boer, Johannes F.; Vermeer, Koenraad A.; van Vliet, Lucas J.
2017-02-01
Optical coherence tomography (OCT) yields high-resolution, three-dimensional images of the retina. A better understanding of retinal nerve fiber bundle (RNFB) trajectories, in combination with visual field data, may be used for future diagnosis and monitoring of glaucoma. However, manual tracing of these bundles is a tedious task. In this work, we present an automatic technique to estimate the orientation of RNFBs from volumetric OCT scans. Our method consists of several steps, starting with automatic segmentation of the RNFL. Then, a stack of en face images around the posterior nerve fiber layer interface was extracted. The image showing the best visibility of RNFB trajectories was selected for further processing. After denoising the selected en face image, a semblance structure-oriented filter was applied to probe the strength of local linear structure in a discrete set of orientations, creating an orientation space. Gaussian filtering along the orientation axis in this space was used to find the dominant orientation. Next, a confidence map was created to supplement the estimated orientation. This confidence map was used as a pixel weight in normalized convolution to regularize the semblance filter response, after which a new orientation estimate could be obtained. Finally, after several iterations, an orientation field corresponding to the strongest local orientation was obtained. The RNFB orientations of six macular scans from three subjects were estimated. For all scans, visual inspection shows good agreement between the estimated orientation fields and the RNFB trajectories in the en face images. Additionally, good correlation between the orientation fields of two scans of the same subject was observed. Our method was also applied to a larger field of view around the macula. Manual tracing of the RNFB trajectories shows good agreement with the streamlines obtained automatically by fiber tracking.
New segregation analysis of panic disorder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vieland, V.J.; Fyer, A.J.; Chapman, T.
1996-04-09
We performed simple segregation analyses of panic disorder using 126 families of probands with DSM-III-R panic disorder who were ascertained for a family study of anxiety disorders at an anxiety disorders research clinic. We present parameter estimates for dominant, recessive, and arbitrary single major locus models without sex effects, as well as for a nongenetic transmission model, and compare these models to each other and to models obtained by other investigators. We rejected the nongenetic transmission model when comparing it to the recessive model. Consistent with some previous reports, we find comparable support for dominant and recessive models, and in both cases estimate nonzero phenocopy rates. The effect of restricting the analysis to families of probands without any lifetime history of comorbid major depression (MDD) was also examined. No notable differences in parameter estimates were found in that subsample, although the power of that analysis was low. Consistency between the findings in our sample and in another independently collected sample suggests the possibility of pooling such samples in the future in order to achieve the necessary power for more complex analyses. 32 refs., 4 tabs.
Lebensohn, Ricardo A.; Zecevic, Miroslav; Knezevic, Marko; ...
2015-12-15
Here, this work presents estimations of average intragranular fluctuations of lattice rotation rates in polycrystalline materials, obtained by means of the viscoplastic self-consistent (VPSC) model. These fluctuations give a tensorial measure of the trend of misorientation developing inside each single crystal grain representing a polycrystalline aggregate. We first report details of the algorithm implemented in the VPSC code to estimate these fluctuations, which are then validated by comparison with corresponding full-field calculations. Next, we present predictions of average intragranular fluctuations of lattice rotation rates for cubic aggregates, which are rationalized by comparison with experimental evidence on annealing textures of fcc and bcc polycrystals deformed in tension and compression, respectively, as well as with measured intragranular misorientation distributions in a Cu polycrystal deformed in tension. The orientation-dependent and micromechanically-based estimations of intragranular misorientations that can be derived from the present implementation are necessary to formulate sound sub-models for the prediction of quantitatively accurate deformation textures, grain fragmentation, and recrystallization textures using the VPSC approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacón, Enrique, E-mail: echacon@icmm.csic.es; Tarazona, Pedro, E-mail: pedro.tarazona@uam.es; Bresme, Fernando, E-mail: f.bresme@imperial.ac.uk
We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making possible their computation using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force-field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force-field are consistent with previous studies of these bilayers.
NASA Astrophysics Data System (ADS)
Martinez, M.; Rocha, B.; Li, M.; Shi, G.; Beltempo, A.; Rutledge, R.; Yanishevsky, M.
2012-11-01
The National Research Council Canada (NRC) has worked on the development of structural health monitoring (SHM) test platforms for assessing the performance of sensor systems for load monitoring applications. The first SHM platform consists of a 5.5 m cantilever aluminum beam that provides an optimal scenario for evaluating the ability of a load monitoring system to measure bending, torsion and shear loads. The second SHM platform adds a level of structural complexity, consisting of aluminum skins with bonded/riveted stringers, typical of an aircraft lower wing structure. These two load monitoring platforms are well characterized and documented, providing loading conditions similar to those encountered during service. In this study, a micro-electro-mechanical system (MEMS) for acquiring data from triads of gyroscopes, accelerometers and magnetometers is described. The system was used to compute changes in angles at discrete stations along the platforms. The angles obtained from the MEMS were used to compute a second-, third- or fourth-order polynomial surface from which displacements at every point could be computed. The use of a new Kalman filter was evaluated for angle estimation, from which displacements in the structure were computed. The outputs of the newly developed algorithms were then compared to the displacements obtained from the linear variable displacement transducers connected to the platforms. The displacement curves were subsequently post-processed, either analytically or with the help of a finite element model of the structure, to estimate strains and loads. The estimated strains were compared with baseline strain gauge instrumentation installed on the platforms. This new approach to load monitoring was able to provide accurate estimates of applied strains and shear loads.
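A scalar Kalman filter conveys the angle-estimation idea: gyroscope rates drive the prediction step and accelerometer-derived angles the correction step. This is a hedged sketch, not the filter developed by the NRC; the noise settings and the simulated constant-tilt motion are assumptions for illustration.

```python
# One-state Kalman filter fusing a gyro rate (predict) with an
# accelerometer-derived angle (update). Units are radians.
import math, random

random.seed(1)
DT, Q, R = 0.01, 1e-4, 0.05  # time step, process and measurement variance

def kalman_angle(gyro_rates, accel_angles):
    angle, p = 0.0, 1.0
    out = []
    for rate, meas in zip(gyro_rates, accel_angles):
        angle += rate * DT            # predict with the gyro
        p += Q
        k = p / (p + R)               # Kalman gain
        angle += k * (meas - angle)   # correct with the accelerometer
        p *= (1 - k)
        out.append(angle)
    return out

# Simulated constant 0.5 rad tilt: near-zero true rate, noisy accelerometer.
n = 200
gyro = [random.gauss(0.0, 0.02) for _ in range(n)]
acc = [0.5 + random.gauss(0.0, math.sqrt(R)) for _ in range(n)]
est = kalman_angle(gyro, acc)
```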
Gains in accuracy from averaging ratings of abnormality
NASA Astrophysics Data System (ADS)
Swensson, Richard G.; King, Jill L.; Gur, David; Good, Walter F.
1999-05-01
Six radiologists used continuous scales to rate 529 chest-film cases for likelihood of five separate types of abnormalities (interstitial disease, nodules, pneumothorax, alveolar infiltrates and rib fractures) in each of six replicated readings, yielding 36 separate ratings of each case for the five abnormalities. Analyses for each type of abnormality estimated the relative gains in accuracy (area below the ROC curve) obtained by averaging the case-ratings across: (1) six independent replications by each reader (30% gain), (2) six different readers within each replication (39% gain) or (3) all 36 readings (58% gain). Although accuracy differed among both readers and abnormalities, ROC curves for the median ratings showed similar relative gains in accuracy. From a latent-variable model for these gains, we estimate that about 51% of a reader's total decision variance consisted of random (within-reader) errors that were uncorrelated between replications, another 14% came from that reader's consistent (but idiosyncratic) responses to different cases, and only about 35% could be attributed to systematic variations among the sampled cases that were consistent across different readers.
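The quoted variance decomposition implies the reported ordering of gains: averaging removes only the noise components that are independent across the readings being averaged. The arithmetic below is an illustrative check of that ordering, not the authors' latent-variable fit.

```python
# Decision-variance components quoted above (fractions of total variance):
# 35% consistent case signal, 14% reader idiosyncrasy, 51% within-reader
# random error. Averaging shrinks only the independent components.
case, reader, random_err = 0.35, 0.14, 0.51

single = reader + random_err           # noise variance of one rating
avg_reps = reader + random_err / 6     # 6 replications of one reader
avg_all = reader / 6 + random_err / 36 # 6 readers x 6 replications
```

The ordering avg_all < avg_reps < single mirrors the reported 58% > 39% > 30% accuracy gains.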
Simulation of fMRI signals to validate dynamic causal modeling estimation
NASA Astrophysics Data System (ADS)
Anandwala, Mobin; Siadat, Mohamad-Reza; Hadi, Shamil M.
2012-03-01
During cognitive tasks, certain brain areas are activated and receive increased blood flow. This is modeled through a state system consisting of two separate parts: one that deals with neural node stimulation and the other with the blood response during that stimulation. The rationale behind using this state system is to validate existing analysis methods such as DCM and determine what levels of noise they can handle. Using the forward Euler method, this system was approximated by a series of difference equations. What was obtained was the hemodynamic response for each brain area, and this was used to test an analysis tool that estimates functional connectivity between brain areas under a given amount of noise. The importance of modeling this system is not only to have a model of the neural response but also to enable comparison with actual data obtained through functional imaging scans.
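The forward Euler discretization described above turns each differential equation into a difference equation of the form x[n+1] = x[n] + dt·f(x[n]). The toy two-part system below (a fast neural activity variable driving a slower blood-response variable) is an illustrative stand-in, not the model used by the authors.

```python
# Forward-Euler simulation of a two-part state system: neural activity x
# driven by a stimulus u(t), and a lagging blood response h driven by x.

def simulate(stimulus, dt=0.01, tau_x=0.8, tau_h=2.0):
    """x' = -x/tau_x + u(t);  h' = (x - h)/tau_h, stepped with forward Euler."""
    x, h, trace = 0.0, 0.0, []
    for u in stimulus:
        x += dt * (-x / tau_x + u)
        h += dt * ((x - h) / tau_h)
        trace.append(h)
    return trace

# 10 s of a boxcar stimulus: on for the first 2 s, then off.
steps = 1000
u = [1.0 if i < 200 else 0.0 for i in range(steps)]
h = simulate(u)
```

The response h rises during stimulation, peaks shortly after the stimulus ends, and then decays, the qualitative shape a connectivity estimator is tested against.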
NASA Astrophysics Data System (ADS)
Lugo, J. M.; Oliva, A. I.
2017-02-01
The thermal effusivity of gold, aluminum, and copper thin films of nanometric thickness (20 nm to 200 nm) was investigated as a function of film thickness. The metallic thin films were deposited onto glass substrates by thermal evaporation, and the thermal effusivity was estimated from experimental parameters, namely the specific heat, thermal conductivity, and thermal diffusivity values obtained at room conditions. These values were determined with a methodology based on the behavior of the thermal profiles of the films when electrical pulses of a few microseconds are applied. For all the investigated materials, the thermal effusivity decreases with decreasing thickness. The thermal effusivity values estimated by the presented methodology are consistent with values reported elsewhere that were obtained under vacuum conditions with more elaborate methodologies.
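The effusivity estimate combines exactly the measured quantities named above: e = √(k·ρ·c_p), or equivalently e = k/√α with diffusivity α = k/(ρ·c_p). The bulk-gold numbers below are round textbook values used purely for illustration; thin-film values depend on thickness, as the paper shows.

```python
# Thermal effusivity of bulk gold by two equivalent routes.
import math

k = 318.0      # W m^-1 K^-1, thermal conductivity (approx. bulk gold)
rho = 19300.0  # kg m^-3, density
cp = 129.0     # J kg^-1 K^-1, specific heat

alpha = k / (rho * cp)              # thermal diffusivity, m^2 s^-1
e_direct = math.sqrt(k * rho * cp)  # effusivity, W s^0.5 m^-2 K^-1
e_via_alpha = k / math.sqrt(alpha)  # same number by the diffusivity route
```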
Schlund, M W
2000-10-01
Bedside hearing screenings are routinely conducted by speech and language pathologists for brain injury survivors during rehabilitation. Cognitive deficits resulting from brain injury, however, may interfere with obtaining estimates of auditory thresholds. Poor comprehension or attention deficits often compromise patients' ability to follow procedural instructions. This article describes the effects of jointly applying behavioral and psychophysical methods to improve two severely brain-injured survivors' attending to, and reporting on, auditory test stimuli. Treatment consisted of stimulus control training that involved differentially reinforcing responding in the presence and absence of an auditory test tone. Subsequent hearing screenings were conducted with novel auditory test tones and a common titration procedure. Results showed that prior stimulus control training improved attending and reporting such that hearing screenings could be conducted and estimates of auditory thresholds obtained.
NASA Astrophysics Data System (ADS)
Petersen, Ø. W.; Øiseth, O.; Nord, T. S.; Lourens, E.
2018-07-01
Numerical predictions of the dynamic response of complex structures are often uncertain due to uncertainties in the assumed load effects. Inverse methods can estimate the true dynamic response of a structure through system inversion, combining measured acceleration data with a system model. This article presents a case study of full-field dynamic response estimation of a long-span floating bridge: the Bergøysund Bridge in Norway. This bridge is instrumented with a network of 14 triaxial accelerometers. The system model consists of 27 vibration modes with natural frequencies below 2 Hz, obtained from a tuned finite element model that takes the fluid-structure interaction with the surrounding water into account. Two methods, a joint input-state estimation algorithm and a dual Kalman filter, are applied to estimate the full-field response of the bridge. The results demonstrate that the displacements and the accelerations can be estimated at unmeasured locations with reasonable accuracy when the wave loads are the dominant source of excitation.
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
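A minimal iteratively reweighted least-squares (IRLS) sketch shows how Huber-type weights yield a robust fit: observations with large scaled residuals are progressively downweighted. This is a single-level, one-predictor toy, not the two-level moderation model of the article; the tuning constant 1.345 and the MAD scale estimate are the conventional choices.

```python
# IRLS for a Huber M-estimator of a simple regression y = b0 + b1*x.
def huber_fit(x, y, c=1.345, iters=50):
    n = len(x)
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        r = [yi - b0 - b1 * xi for xi, yi in zip(x, y)]
        # Robust scale: median absolute residual / 0.6745 (MAD estimate).
        s = (sorted(abs(ri) for ri in r)[n // 2] / 0.6745) or 1.0
        w = [1.0 if abs(ri) <= c * s else c * s / abs(ri) for ri in r]
        sw = sum(w)
        xw = sum(wi * xi for wi, xi in zip(w, x)) / sw
        yw = sum(wi * yi for wi, yi in zip(w, y)) / sw
        num = sum(wi * (xi - xw) * (yi - yw) for wi, xi, yi in zip(w, x, y))
        den = sum(wi * (xi - xw) ** 2 for wi, xi in zip(w, x))
        b1 = num / den
        b0 = yw - b1 * xw
    return b0, b1

# Line y = 2x + 1 with one gross outlier; the Huber fit shrugs it off,
# while ordinary least squares (slope ~4.9 here) does not.
x = [0, 1, 2, 3, 4, 5, 6]
y = [1, 3, 5, 7, 9, 11, 40]   # last point is an outlier (should be 13)
b0, b1 = huber_fit(x, y)
```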
MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks
NASA Astrophysics Data System (ADS)
Vahidi, Vahid; Saberinia, Ebrahim
2018-01-01
A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the sparse multiple-input multiple-output channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method that considers the interblock and block mutual coherence simultaneously is proposed. The real-time phase contains three steps. In the first step, an a priori estimate of the channel is obtained by block orthogonal matching pursuit; afterward, this estimated channel is used to calculate the linear minimum mean square error of the received pilots. Finally, block compressive sampling matching pursuit uses the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and assessing its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique considerably enhances the performance of car detection in a traffic image.
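A bare-bones orthogonal matching pursuit (OMP) routine illustrates the greedy sparse-recovery step underlying the block-OMP and block compressive sampling matching pursuit stages described above; the dictionary size, sparsity level and random seed are toy choices, not values from the paper.

```python
# Orthogonal matching pursuit: greedily pick the dictionary column most
# correlated with the residual, then re-solve least squares on the
# selected support.
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[support] = 0            # never pick a column twice
        support.append(int(np.argmax(corr)))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

m, n, k = 40, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
x_hat = omp(A, A @ x_true, k)     # noiseless recovery of a 3-sparse channel
```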
Measurement of Scattering and Absorption Cross Sections of Dyed Microspheres
Gaigalas, Adolfas K; Choquette, Steven; Zhang, Yu-Zhong
2013-01-01
Measurements of absorbance and fluorescence emission were carried out on aqueous suspensions of polystyrene (PS) microspheres with a diameter of 2.5 µm using a spectrophotometer with an integrating sphere detector. The apparatus and the principles of measurements were described in our earlier publications. Microspheres with and without green BODIPY® dye were measured. Placing the suspension inside an integrating sphere (IS) detector of the spectrophotometer yielded (after a correction for fluorescence emission) the absorbance (called A in the text) due to absorption by BODIPY® dye inside the microsphere. An estimate of the absorbance due to scattering alone was obtained by subtracting the corrected BODIPY® dye absorbance (A) from the measured absorbance of a suspension placed outside the IS detector (called A1 in the text). The absorption of the BODIPY® dye inside the microsphere was analyzed using an imaginary index of refraction parameterized with three Gaussian-Lorentz functions. The Kramers-Kronig relation was used to estimate the contribution of the BODIPY® dye to the real part of the microsphere index of refraction. The complex index of refraction, obtained from the analysis of A, was used to analyze the absorbance due to scattering (A1 - A in the text). In practice, the analysis of the scattering absorbance, A1 - A, and the absorbance, A, was carried out in an iterative manner. It was assumed that A depended primarily on the imaginary part of the microsphere index of refraction, with the other parameters playing a secondary role. Therefore, A was first analyzed using values of the other parameters obtained from a fit to the absorbance due to scattering, A1 - A, with the imaginary part neglected. The imaginary part obtained from the analysis of A was then used to reanalyze A1 - A and obtain better estimates of the other parameters. After a few iterations, consistent estimates of the scattering and absorption cross sections were obtained in the wavelength region 300 nm to 800 nm. PMID:26401422
Results From F-18B Stability and Control Parameter Estimation Flight Tests at High Dynamic Pressures
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Noffz, Gregory K.; Iliff, Kenneth W.
2000-01-01
A maximum-likelihood output-error parameter estimation technique has been used to obtain stability and control derivatives for the NASA F-18B Systems Research Aircraft. This work has been performed to support flight testing of the active aeroelastic wing (AAW) F-18A project. The goal of this research is to obtain baseline F-18 stability and control derivatives that will form the foundation of the aerodynamic model for the AAW aircraft configuration. Flight data have been obtained at Mach numbers between 0.85 and 1.30 and at dynamic pressures ranging between 600 and 1500 lbf/sq ft. At each test condition, longitudinal and lateral-directional doublets have been performed using an automated onboard excitation system. The doublet maneuver consists of a series of single-surface inputs so that individual control-surface motions cannot be correlated with other control-surface motions. Flight test results have shown that several stability and control derivatives are significantly different than prescribed by the F-18B aerodynamic model. This report defines the parameter estimation technique used, presents stability and control derivative results, compares the results with predictions based on the current F-18B aerodynamic model, and shows improvements to the nonlinear simulation using updated derivatives from this research.
NASA Astrophysics Data System (ADS)
Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya
2011-06-01
A recently developed unified model for partially-penetrating slug tests in unconfined aquifers (Malama et al., in press) provides a semi-analytical solution for aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether parameters of this model can be well-identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov Chain Monte Carlo methods for gauging parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained, if possible, a priori.
Second, through analysis of field data consisting of over 2500 records from partially-penetrating slug tests in a heterogeneous, highly conductive aquifer, we present some general findings that have applicability to slug testing. In particular, we find that aquifer hydraulic conductivity estimates obtained from larger slug heights tend to be lower on average (presumably due to non-linear wellbore losses) and tend to be less variable (presumably due to averaging over larger support volumes), supporting the notion that using the smallest slug heights that still produce measurable water level changes is an important strategy when mapping aquifer heterogeneity. Finally, we present results specific to characterization of the aquifer at the Boise Hydrogeophysical Research Site. Specifically, we note that (1) K estimates obtained using a range of different slug heights give similar results, generally within ±20%; (2) correlations between estimated K profiles with depth at closely-spaced wells suggest that K values obtained from slug tests are representative of actual aquifer heterogeneity and not overly affected by near-well media disturbance (i.e., "skin"); (3) geostatistical analysis of the K values obtained indicates reasonable correlation lengths for sediments of this type; and (4) overall, the K values obtained do not appear to correlate well with porosity data from previous studies.
Evaluation of Rotor Structural and Aerodynamic Loads using Measured Blade Properties
NASA Technical Reports Server (NTRS)
Jung, Sung N.; You, Young-Hyun; Lau, Benton H.; Johnson, Wayne; Lim, Joon W.
2012-01-01
The structural properties of Higher-harmonic Aeroacoustic Rotor Test (HART I) blades have been measured using the original set of blades tested in the wind tunnel in 1994. A comprehensive rotor dynamics analysis is performed to address the effect of the measured blade properties on the airloads, blade motions, and structural loads of the rotor. The measurements include bending and torsion stiffness, geometric offsets, and mass and inertia properties of the blade. The measured properties are compared against the values initially estimated by the manufacturer of the blades. The previously estimated blade properties showed consistently higher stiffnesses, up to 30% for flap bending in the inboard blade root section. The measured offset between the center of gravity and the elastic axis is larger by about 5% chord length than the estimated value. The comprehensive rotor dynamics analysis was carried out using the measured blade property set for the HART I rotor with and without HHC (Higher Harmonic Control) pitch inputs. A significant improvement in the prediction of blade motions and structural loads is obtained with the measured blade properties.
Thomas, H L; Andrews, N; Green, H K; Boddington, N L; Zhao, H; Reynolds, A; McMenamin, J; Pebody, R G
2014-01-01
Methods for estimating vaccine effectiveness (VE) against severe influenza are not well established. We used the screening method to estimate VE against influenza resulting in intensive care unit (ICU) admission in England and Scotland in 2011/2012. We extracted data on confirmed influenza ICU cases from severe influenza surveillance systems, and obtained their 2011/2012 trivalent influenza vaccine (TIV) status from primary care. We compared case vaccine uptake with population vaccine uptake obtained from routine monitoring systems, adjusting for age group, specific risk group, region and week. Of 60 influenza ICU cases reported, vaccination status was available for 56 (93%). Adjusted VE against ICU admission for those aged ≥ 65 years was -10% [95% confidence interval (CI) -207 to 60], consistent with evidence of poor protection from the 2011/2012 TIV. Adjusted VE for those aged <65 years in risk groups was -296% (95% CI -930 to -52), suggesting significant residual confounding using the screening method in those subject to selective vaccination.
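The screening method compares the odds of vaccination among cases with the odds in the source population. A minimal sketch of the point estimate (the study's adjustment for age group, risk group, region and week is omitted; the coverage numbers below are hypothetical):

```python
def screening_ve(pcv, ppv):
    """Screening-method VE from the proportion of cases vaccinated (pcv)
    and the proportion of the population vaccinated (ppv):
    VE = 1 - odds(cases vaccinated) / odds(population vaccinated)."""
    odds_cases = pcv / (1.0 - pcv)
    odds_pop = ppv / (1.0 - ppv)
    return 1.0 - odds_cases / odds_pop

# Hypothetical numbers: 40% of cases vaccinated vs 30% of the population.
# Case uptake exceeding population uptake yields a negative VE,
# as seen in the elderly group above.
print(round(screening_ve(0.40, 0.30), 3))
```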
ZASPE: A Code to Measure Stellar Atmospheric Parameters and their Covariance from Spectra
NASA Astrophysics Data System (ADS)
Brahm, Rafael; Jordán, Andrés; Hartman, Joel; Bakos, Gáspár
2017-05-01
We describe the Zonal Atmospheric Stellar Parameters Estimator (zaspe), a new algorithm, and its associated code, for determining precise stellar atmospheric parameters and their uncertainties from high-resolution echelle spectra of FGK-type stars. zaspe estimates stellar atmospheric parameters by comparing the observed spectrum against a grid of synthetic spectra in only the spectral zones most sensitive to changes in the atmospheric parameters. Realistic uncertainties in the parameters are computed from the data themselves, by taking into account the systematic mismatches between the observed spectrum and the best-fitting synthetic one. The covariances between the parameters are also estimated in the process. zaspe can in principle use any pre-calculated grid of synthetic spectra, but unbiased grids are required to obtain accurate parameters. We tested the performance of two existing libraries, and we concluded that neither is suitable for computing precise atmospheric parameters. We describe a process to synthesize a new library of synthetic spectra that was found to generate consistent results when compared with parameters obtained with different methods (interferometry, asteroseismology, equivalent widths).
Chu, Khim Hoong
2017-11-09
Surface diffusion coefficients may be estimated by fitting solutions of a diffusion model to batch kinetic data. For non-linear systems, a numerical solution of the diffusion model's governing equations is generally required. We report here the application of the classic Langmuir kinetics model to extract surface diffusion coefficients from batch kinetic data. The use of the Langmuir kinetics model in lieu of the conventional surface diffusion model allows derivation of an analytical expression. The parameter estimation procedure requires determining the Langmuir rate coefficient, from which the pertinent surface diffusion coefficient is calculated. Surface diffusion coefficients within the 10⁻⁹ to 10⁻⁶ cm²/s range obtained by fitting the Langmuir kinetics model to experimental kinetic data taken from the literature are found to be consistent with the corresponding values obtained from the traditional surface diffusion model. The virtue of this simplified parameter estimation method is that it reduces computational complexity, as the analytical expression is a closed-form algebraic equation easily evaluated by spreadsheet computation.
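The paper's exact analytical expression is not reproduced here, but for reversible Langmuir kinetics at roughly constant bulk concentration C, dq/dt = ka·C·(qm − q) − kd·q integrates to an exponential approach to equilibrium with overall rate k = ka·C + kd. A sketch of fitting that rate coefficient to invented batch kinetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir_uptake(t, q_eq, k):
    # q(t) for dq/dt = ka*C*(qm - q) - kd*q at constant C:
    # exponential approach to equilibrium with rate k = ka*C + kd.
    return q_eq * (1.0 - np.exp(-k * t))

# Hypothetical batch kinetic data (time in min, uptake in mg/g).
t = np.array([0, 5, 10, 20, 40, 80, 160], float)
q = np.array([0.0, 18.0, 30.5, 44.0, 52.5, 55.5, 56.0])

(q_eq, k), _ = curve_fit(langmuir_uptake, t, q, p0=(50.0, 0.05))
print(q_eq, k)  # equilibrium uptake and Langmuir rate coefficient
```

The fitted rate coefficient would then be converted to a surface diffusion coefficient via the model's analytical relation, which depends on particle geometry and is not reproduced here.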
Huang, Jian; Zhang, Cun-Hui
2013-01-01
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
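A minimal sketch of a weighted ℓ1-penalized estimator with one adaptive-Lasso reweighting stage, using plain proximal gradient descent (ISTA) on synthetic data; the penalty level and the reweighting rule below are illustrative choices, not the paper's theory:

```python
import numpy as np

def weighted_lasso_ista(X, y, lam, w, n_iter=2000):
    """Weighted l1-penalized least squares via proximal gradient (ISTA)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        thresh = lam * w / L                 # per-coordinate soft threshold
        beta = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return beta

rng = np.random.default_rng(1)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Stage 1: ordinary Lasso (unit weights); stage 2: adaptive reweighting.
b1 = weighted_lasso_ista(X, y, lam=0.1, w=np.ones(p))
w2 = 1.0 / (np.abs(b1) + 1e-4)               # small coefficients get heavy penalties
b2 = weighted_lasso_ista(X, y, lam=0.1, w=w2)
print(np.flatnonzero(np.abs(b2) > 1e-6))     # indices of the recovered support
```

The second stage illustrates the multistage idea: reweighting by the inverse of first-stage coefficients approximates concave regularization and reduces the bias on large coefficients.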
Measurement properties of the WOMAC LK 3.1 pain scale.
Stratford, P W; Kennedy, D M; Woodhouse, L J; Spadoni, G F
2007-03-01
The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) is applied extensively to patients with osteoarthritis of the hip or knee. Previous work has challenged the validity of its physical function scale; however, an extensive evaluation of its pain scale has not been reported. Our purpose was to estimate the internal consistency, factorial validity, test-retest reliability, and standard error of measurement (SEM) of the WOMAC LK 3.1 pain scale. Four hundred and seventy-four patients with osteoarthritis of the hip or knee awaiting arthroplasty were administered the WOMAC. Estimates of internal consistency (coefficient alpha), factorial validity (confirmatory factor analysis), and the SEM based on internal consistency (SEM(IC)) were obtained. Test-retest reliability [Type 2,1 intraclass correlation coefficient (ICC)] and a corresponding SEM(TRT) were estimated on a subsample of 36 patients. Our estimates were: internal consistency alpha=0.84; SEM(IC)=1.48; Type 2,1 ICC=0.77; SEM(TRT)=1.69. Confirmatory factor analysis failed to support a single-factor structure of the pain scale with uncorrelated error terms. Two comparable models provided excellent fit: (1) a model with correlated error terms between the walking and stairs items, and between the night and sit items (χ²=0.18, P=0.98); (2) a two-factor model with the walking and stairs items loading on one factor, the night and sit items loading on a second factor, and the standing item loading on both factors (χ²=0.18, P=0.98). Our examination of the factorial structure of the WOMAC pain scale failed to support a single factor, and the internal consistency analysis yielded a coefficient less than optimal for individual patient use. An alternative strategy to summing the five item responses for individual patient application would be to interpret item responses separately or to sum only those items which display homogeneity.
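Coefficient alpha and the internal-consistency SEM reported above follow standard formulas (alpha from the item and total-score variances; SEM(IC) = SD·√(1 − alpha)). A sketch on simulated five-item data (the latent-trait and error scales are invented):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(2)
n = 200
trait = rng.normal(2.0, 1.0, n)                      # latent pain level
items = trait[:, None] + rng.normal(0, 0.8, (n, 5))  # five noisy item scores

alpha = cronbach_alpha(items)
total_sd = items.sum(axis=1).std(ddof=1)
sem_ic = total_sd * np.sqrt(1.0 - alpha)             # SEM from internal consistency
print(round(alpha, 2), round(sem_ic, 2))
```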
NASA Astrophysics Data System (ADS)
Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim
2016-08-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKF-OSA. Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKF-OSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimations than the joint and dual approaches.
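A toy illustration of the joint (augmented state-parameter) EnKF that the paper compares against, not the proposed EnKF-OSA scheme and not a groundwater model: a scalar state with one unknown parameter, a perturbed-observation update, and illustrative noise levels throughout.

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true, x_true = 0.8, 0.0
N = 500                                       # ensemble size
ens = np.vstack([rng.normal(0.0, 1.0, N),     # state x
                 rng.normal(0.5, 0.3, N)])    # parameter theta
R = 0.1 ** 2                                  # observation-error variance

for _ in range(50):
    # Forecast: each member propagates with its own parameter value.
    x_true = theta_true * x_true + 1.0
    ens[0] = ens[1] * ens[0] + 1.0 + rng.normal(0, 0.01, N)
    y = x_true + rng.normal(0, np.sqrt(R))
    # Joint EnKF update: Kalman gain from ensemble covariances, H = [1, 0].
    A = ens - ens.mean(axis=1, keepdims=True)
    P_xy = (A @ A[0]) / (N - 1)               # cov of [x, theta] with observed x
    K = P_xy / (P_xy[0] + R)
    ens += np.outer(K, y + rng.normal(0, np.sqrt(R), N) - ens[0])

print(ens[1].mean())                          # should approach theta_true = 0.8
```

The parameter is identified purely through its ensemble correlation with the observed state, which is the mechanism both the joint and dual variants exploit.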
NASA Technical Reports Server (NTRS)
Holden, Peter N.; Gaffey, Michael J.; Sundararaman, P.
1991-01-01
An interpretive model for estimating porphyrin concentration in bitumen and kerogen from spectral reflectance data in the visible and near-ultraviolet region of the spectrum is derived and calibrated. Preliminary results obtained using the model are consistent with concentrations determined from the bitumen extract and suggest that 40 to 60 percent of the total porphyrin concentration remains in the kerogen after extraction of bitumen from thermally immature samples. The reflectance technique will contribute to porphyrin and kerogen studies and can be applied at its present level of development to several areas of geologic and paleo-oceanographic research.
Measurements of surface-pressure fluctuations on the XB-70 airplane at local Mach numbers up to 2.45
NASA Technical Reports Server (NTRS)
Lewis, T. L.; Dods, J. B., Jr.; Hanly, R. D.
1973-01-01
Measurements of surface-pressure fluctuations were made at two locations on the XB-70 airplane for nine flight-test conditions encompassing a local Mach number range from 0.35 to 2.45. These measurements are presented in the form of estimated power spectral densities, coherence functions, and narrow-band convection velocities. The estimated power spectral densities compared favorably with wind-tunnel data obtained by other experimenters. The coherence function and convection velocity data supported conclusions by other experimenters that low-frequency surface-pressure fluctuations consist of small-scale turbulence components with low convection velocity.
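Power spectral densities and coherence functions like those reported here are commonly estimated with Welch's averaged-periodogram method; a sketch on two synthetic sensor signals sharing a delayed common component (all signal parameters are invented, not flight data):

```python
import numpy as np
from scipy.signal import welch, coherence

fs = 2000.0                       # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)
# Two sensor signals sharing a convecting pressure fluctuation plus local noise.
common = np.sin(2 * np.pi * 120 * t)
x = common + 0.5 * rng.standard_normal(len(t))
delay = int(0.002 * fs)           # 2 ms convection delay between sensors
y = np.roll(common, delay) + 0.5 * rng.standard_normal(len(t))

f, Pxx = welch(x, fs=fs, nperseg=1024)      # power spectral density estimate
f2, Cxy = coherence(x, y, fs=fs, nperseg=1024)
print(f[np.argmax(Pxx)])                    # PSD peak, near the 120 Hz tone
```

High coherence at the shared frequency, combined with the cross-spectral phase slope, is what allows narrow-band convection velocities to be inferred from sensor spacing.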
Vatan, Sevginar; Lester, David
2008-12-01
The aim of this study was to estimate the concurrent validity of the Hopelessness, Helplessness, and Haplessness Scale developed by Lester (1998). Data were obtained from 75 psychiatric patients. Cronbach alphas ranged from .67 to .90. Scores on the scales were associated with Beck, Weissman, Lester, and Trexler's measure of hopelessness, with the correlation strongest for the new hopelessness scale.
Antares alignment gimbal positioner linear bearing tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Day, R.D.; McKay, M.D.; Pierce, D.D.
1981-01-01
The data indicate that of the six configurations tested, the solid circular rails with either the wet or dry lubricant are superior to the other configurations. Therefore, these two will undergo additional tests. These tests will consist of (1) modifying the testing procedure to obtain a better estimation of the limits of precision; and (2) subjecting the bearings to moments more closely approximating the actual conditions they will undergo on the AGP.
NASA Astrophysics Data System (ADS)
Ziegler, Yann; Lambert, Sébastien; Rosat, Séverine; Nurul Huda, Ibnu; Bizouard, Christian
2017-04-01
Nutation time series derived from very long baseline interferometry (VLBI) and time-varying surface gravity data recorded by superconducting gravimeters (SG) have long been used separately to assess the Earth's interior via the estimation of the free core and inner core resonance effects on nutation or tidal gravity. The results obtained from these two techniques have recently been shown to be consistent, making relevant the combination of VLBI and SG observables and the estimation of Earth's interior parameters in a single inversion. We present here intermediate results of the ongoing project of combining nutation and surface gravity time series to improve estimates of the Earth's core and inner core resonant frequencies. We use VLBI nutation time series spanning 1984-2016 derived by the International VLBI Service for geodesy and astrometry (IVS) as the result of a combination of inputs from various IVS analysis centers, and surface gravity data from about 15 SG stations. We address here the resonance model used to describe the Earth's interior response to tidal excitation; the data preparation, consisting of error recalibration and amplitude fitting for the nutation data, and processing of the SG time-varying gravity to remove gaps, spikes, steps and other disturbances, followed by tidal analysis with the ETERNA 3.4 software package; the preliminary estimates of the resonant periods; and the correlations between parameters.
Measurement of cochlear length using the 'A' value for cochlea basal diameter: A feasibility study.
Deep, Nicholas L; Howard, Brittany E; Holbert, Sarah O; Hoxworth, Joseph M; Barrs, David M
2017-07-01
To determine whether the cochlea basal diameter (A value) measurement can be consistently and precisely obtained from high-resolution temporal bone imaging for use in cochlear length estimation. A feasibility study at a tertiary referral center was performed using the temporal bone CTs of 40 consecutive patients. The distance from the round window to the lateral wall was measured for each cochlea by two independent reviewers, a neuroradiologist and an otolaryngologist. The interrater reliability was calculated using the intraclass correlation coefficient (ICC) and the Bland-Altman plot. Forty patients (19 males, 21 females), for a total of 80 cochleae, were included. Interrater reliability on the same ear had a high level of agreement by both the ICC and the Bland-Altman plot. ICCs were 0.90 (95% CI: 0.82, 0.94) for the left ear and 0.96 (95% CI: 0.92, 0.98) for the right ear. The Bland-Altman plot confirmed interrater reliability, with 96% of measurements falling within the 95% limits of agreement. Measurement between the round window and lateral cochlear wall can be consistently and reliably obtained from high-resolution temporal bone CT scans. Thus, it is feasible to utilize this method to estimate the cochlear length of patients undergoing cochlear implantation.
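The Bland-Altman check used above computes the inter-rater bias and the 95% limits of agreement from the paired differences; a sketch with simulated measurements (all numbers are hypothetical, not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two raters' measurements."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

rng = np.random.default_rng(5)
true_A = rng.normal(9.0, 0.4, 40)             # hypothetical 'A' values, mm
rater1 = true_A + rng.normal(0, 0.15, 40)
rater2 = true_A + rng.normal(0, 0.15, 40)

bias, lo, hi = bland_altman(rater1, rater2)
within = np.mean((rater1 - rater2 >= lo) & (rater1 - rater2 <= hi))
print(round(bias, 3), round(within, 2))       # fraction inside the 95% limits
```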
Analysis of long term trends of precipitation estimates acquired using radar network in Turkey
NASA Astrophysics Data System (ADS)
Tugrul Yilmaz, M.; Yucel, Ismail; Kamil Yilmaz, Koray
2016-04-01
Precipitation estimates, a vital input in many hydrological and agricultural studies, can be obtained from many different platforms (ground station-, radar-, model-, satellite-based). Satellite- and model-based estimates are spatially continuous datasets; however, they lack the high-resolution information that many applications require. Station-based values are actual precipitation observations, but by nature they are point data. These datasets may be interpolated, but such end-products may have large errors over remote locations whose climate, topography, etc. differ from the areas where stations are installed. Radars have the particular advantage of providing high-spatial-resolution information over land, even though the accuracy of radar-based precipitation estimates depends on the Z-R relationship, mountain blockage, target distance from the radar, spurious echoes resulting from anomalous propagation of the radar beam, bright band contamination and ground clutter. A viable method for obtaining spatially and temporally consistent high-resolution precipitation information is merging radar and station data to take advantage of each retrieval platform. An optimally merged product is particularly important in Turkey, where complex topography exerts strong controls on the precipitation regime and in turn hampers observation efforts. There are currently 10 weather radars over Turkey (an additional 7 are planned) obtaining precipitation information since 2007. This study aims to optimally merge radar precipitation data with station-based observations to introduce a station-radar blended precipitation product. This study was supported by TUBITAK fund # 114Y676.
NASA Astrophysics Data System (ADS)
Yang, Yi; Wang, Tianheng; Biswal, Nrusingh C.; Wang, Xiaohong; Sanders, Melinda; Brewer, Molly; Zhu, Quing
2011-09-01
Optical scattering coefficients of ex vivo unfixed normal and malignant ovarian tissue were quantitatively extracted by fitting optical coherence tomography (OCT) A-line signals to a single-scattering model. 1097 average A-line measurements at a wavelength of 1310 nm were performed at 108 sites obtained from 18 ovaries. The average scattering coefficient obtained from the normal tissue group, consisting of 833 measurements from 88 sites, was 2.41 mm⁻¹ (+/-0.59), while the average coefficient obtained from the malignant tissue group, consisting of 264 measurements from 20 sites, was 1.55 mm⁻¹ (+/-0.46). The malignant ovarian tissue showed significantly lower scattering than the normal group (p < 0.001). The amount of collagen within the OCT imaging depth was analyzed from tissue histological sections stained with Sirius Red. The average collagen area fraction (CAF) obtained from the normal tissue group was 48.4% (+/-12.3%), while the average CAF obtained from the malignant tissue group was 11.4% (+/-4.7%). A statistically significant difference in collagen content was found between the two groups (p < 0.001). These results demonstrate that quantitative measurement of the optical scattering coefficient from OCT images could be a potentially powerful method for ovarian cancer detection.
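Under the single-scattering model, the detected OCT A-line intensity decays approximately as exp(−2·μs·z), so μs can be recovered from the slope of the log signal; a sketch on synthetic data (the depth range and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
mu_s = 2.4                                  # scattering coefficient, mm^-1
z = np.linspace(0.05, 0.6, 120)             # depth into tissue, mm
# Single-scattering model: detected intensity decays as exp(-2 * mu_s * z).
signal = np.exp(-2 * mu_s * z) * (1 + 0.05 * rng.standard_normal(len(z)))

slope, intercept = np.polyfit(z, np.log(signal), 1)
print(round(-slope / 2, 2))                 # recovered mu_s, near 2.4 mm^-1
```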
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem, given by a Fredholm integral of the first kind, to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function of stellar rotational velocities. We propose a simple and straightforward procedure for determining the Tikhonov parameter. We use Monte Carlo simulations to show that the Tikhonov method is a consistent and asymptotically unbiased estimator. Results: The method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly, without the need for any convergence criteria.
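After discretization, Tikhonov regularization of a Fredholm integral of the first kind reduces to solving the linear system (AᵀA + λI)x = Aᵀb; a sketch on a synthetic smoothing kernel (the kernel, noise level, and λ below are illustrative, not the projection kernel of the v sin i problem):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 80
s = np.linspace(0, 1, n)
# Smoothing kernel of a Fredholm integral of the first kind (ill-posed).
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)

x_true = np.exp(-((s - 0.4) ** 2) / 0.01)        # "true" distribution
b = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy observations

lam = 1e-3                                       # Tikhonov parameter
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print(round(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true), 3))
```

The regularization parameter trades noise amplification against over-smoothing; the paper proposes its own procedure for choosing it.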
A demonstration of centimeter-level monitoring of polar motion with the Global Positioning System
NASA Technical Reports Server (NTRS)
Lindqwister, U. J.; Freedman, A. P.; Blewitt, G.
1992-01-01
Daily estimates of the Earth's pole position were obtained with the Global Positioning System (GPS) by using measurements obtained during the GPS IERS (International Earth Rotation Service) and Geodynamics (GIG'91) experiment from 22 Jan. to 13 Feb. 1991. Data from a globally distributed network consisting of 21 Rogue GPS receivers were chosen for the analysis. A comparison of the GPS polar motion series with nine 24-hour very long baseline interferometry (VLBI) estimates yielded agreement in the day-to-day pole position of about 1.5 cm for both X and Y polar motion. A similar comparison of GPS and satellite laser ranging (SLR) data showed agreement to about 1.0 cm. These preliminary results indicate that polar motion can be determined by GPS independent of, and at a level comparable to, that which is obtained from either VLBI or SLR. Furthermore, GPS can provide these data with a daily frequency that neither alternative technique can readily achieve. Thus, GPS promises to be a powerful tool for determining high-frequency platform parameter variation, essential for the ultraprecise spacecraft-tracking requirements of the coming years.
Jin, Dongliang; Coasne, Benoit
2017-10-24
Different molecular simulation strategies are used to assess the stability of methane hydrate under various temperature and pressure conditions. First, using two water molecular models, free energy calculations consisting of the Einstein molecule approach in combination with semigrand Monte Carlo simulations are used to determine the pressure-temperature phase diagram of methane hydrate. With these calculations, we also estimate the chemical potentials of water and methane and methane occupancy at coexistence. Second, we also consider two other advanced molecular simulation techniques that allow probing the phase diagram of methane hydrate: the direct coexistence method in the Grand Canonical ensemble and the hyperparallel tempering Monte Carlo method. These two direct techniques are found to provide stability conditions that are consistent with the pressure-temperature phase diagram obtained using rigorous free energy calculations. The phase diagram obtained in this work, which is found to be consistent with previous simulation studies, is close to its experimental counterpart provided the TIP4P/Ice model is used to describe the water molecule.
Benchmarking passive seismic methods of estimating the depth of velocity interfaces down to ~300 m
NASA Astrophysics Data System (ADS)
Czarnota, Karol; Gorbatov, Alexei
2016-04-01
In shallow passive seismology it is generally accepted that the spatial autocorrelation (SPAC) method is more robust than the horizontal-to-vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in western Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Depths of this interface intersected in drill holes are between ~27 m and ~300 m. Seismometers were deployed in a three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium Compact 120 s broadband instruments. Data were acquired at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighbourhood algorithm, implemented in the geopsy modelling package (Wathelet, 2005, GRL v35). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within ±20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments which in turn overlie the target basement. Half of the SPAC estimates showed significantly higher errors than were obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record caused by low levels of anthropogenic noise at the benchmark sites. At a few sites SPAC waveform curves showed clear overtones, suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion.
Nevertheless, our study indicates that reliable basin thickness estimates under the Australian conditions tested can be obtained utilizing HVSR data from a single seismometer, without a priori knowledge of the surface-wave velocity of the basin material, thereby negating the need to deploy cumbersome arrays.
Estimation of dynamic stability parameters from drop model flight tests
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Iliff, K. W.
1981-01-01
A recent NASA application of a remotely-piloted drop model to studies of the high angle-of-attack and spinning characteristics of a fighter configuration has provided an opportunity to evaluate and develop parameter estimation methods for the complex aerodynamic environment associated with high angles of attack. The paper discusses the overall drop model operation including descriptions of the model, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods used. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The results of the study indicated that the variations of the estimates with angle of attack were consistent for most of the static derivatives, and the effects of configuration modifications to the model (such as nose strakes) were apparent in the static derivative estimates. The dynamic derivatives exhibited greater uncertainty levels than the static derivatives, possibly due to nonlinear aerodynamics, model response characteristics, or additional derivatives.
Gingerich, W.H.; Pityer, R.A.; Rach, J.J.
1987-01-01
1. Total blood volume and relative blood volumes in selected tissues were determined in non-anesthetized, confined rainbow trout by using ⁵¹Cr-labelled trout erythrocytes as a vascular space marker. 2. Mean total blood volume was estimated to be 4.09 ± 0.55 ml/100 g, or about 75% of that estimated with the commonly used plasma space marker Evans blue dye. 3. Relative tissue blood volumes were greatest in highly perfused tissues such as kidney, gills, brain and liver, and least in mosaic muscle. 4. Estimates of tissue vascular spaces, made using radiolabelled erythrocytes, were only 25–50% of those based on plasma space markers. 5. The consistently smaller vascular volumes obtained with labelled erythrocytes could be explained by assuming that commonly used plasma space markers diffuse from the vascular compartment.
Highway traffic estimation of improved precision using the derivative-free nonlinear Kalman Filter
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Zervos, Nikolaos; Melkikh, Alexey
2015-12-01
The paper proves that the PDE dynamic model of highway traffic is differentially flat, and by applying spatial discretization it shows that the model can be transformed into an equivalent linear canonical state-space form. For the latter representation of the traffic dynamics, state estimation is performed with the use of the derivative-free nonlinear Kalman Filter. The proposed filter consists of the Kalman Filter recursion applied to the transformed state-space model of the highway traffic. Moreover, it makes use of an inverse transformation, based again on differential flatness theory, which enables estimates of the state variables of the initial nonlinear PDE model to be obtained. By avoiding approximate linearizations and the truncation of nonlinear terms from the PDE model of the traffic dynamics, the proposed filtering method outperforms, in terms of accuracy, other nonlinear estimators such as the Extended Kalman Filter. The article's theoretical findings are confirmed through simulation experiments.
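The Kalman Filter recursion applied to a linear canonical state-space model, as described above, has the standard predict/correct form; a sketch on an illustrative two-state system (the matrices below are invented for demonstration, not the discretized traffic model):

```python
import numpy as np

# Linear canonical state-space x' = A x + w, observed through H, standing in
# for the model obtained after the flatness-based transformation.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)                   # process-noise covariance
R = np.array([[0.25]])                 # measurement-noise covariance

rng = np.random.default_rng(8)
x = np.array([0.0, 1.0])
x_est, P = np.zeros(2), np.eye(2)
errs = []
for _ in range(200):
    x = A @ x + rng.multivariate_normal([0, 0], Q)
    y = H @ x + rng.normal(0, 0.5, 1)
    # Kalman recursion: predict, then correct with the new measurement.
    x_est = A @ x_est
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (y - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
    errs.append(np.linalg.norm(x - x_est))

print(np.mean(errs[50:]))              # steady-state estimation error
```

In the paper's scheme, the estimates would then be mapped back through the inverse flatness transformation to the state variables of the original nonlinear PDE model.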
Constitutive Modeling of Porcine Liver in Indentation Using 3D Ultrasound Imaging
Jordan, P.; Socrate, S.; Zickler, T.E.; Howe, R.D.
2009-01-01
In this work we present an inverse finite-element modeling framework for constitutive modeling and parameter estimation of soft tissues using full-field volumetric deformation data obtained from 3D ultrasound. The finite-element model is coupled to full-field visual measurements by regularization springs attached at nodal locations. The free ends of the springs are displaced according to the locally estimated tissue motion and the normalized potential energy stored in all springs serves as a measure of model-experiment agreement for material parameter optimization. We demonstrate good accuracy of estimated parameters and consistent convergence properties on synthetically generated data. We present constitutive model selection and parameter estimation for perfused porcine liver in indentation and demonstrate that a quasilinear viscoelastic model with shear modulus relaxation offers good model-experiment agreement in terms of indenter displacement (0.19 mm RMS error) and tissue displacement field (0.97 mm RMS error). PMID:19627823
Multi-scale modeling of irradiation effects in spallation neutron source materials
NASA Astrophysics Data System (ADS)
Yoshiie, T.; Ito, T.; Iwase, H.; Kaneko, Y.; Kawai, M.; Kishida, I.; Kunieda, S.; Sato, K.; Shimakawa, S.; Shimizu, F.; Hashimoto, S.; Hashimoto, N.; Fukahori, T.; Watanabe, Y.; Xu, Q.; Ishino, S.
2011-07-01
Changes in the mechanical properties of Ni under irradiation by 3 GeV protons were estimated by multi-scale modeling. The code consisted of four parts. The first part was based on the Particle and Heavy-Ion Transport code System (PHITS) for nuclear reactions, and modeled the interactions between high-energy protons and nuclei in the target. The second part covered atomic collisions by particles without nuclear reactions. Because the energy of the particles was high, subcascade analysis was employed. The direct formation of clusters and the number of mobile defects were estimated using molecular dynamics (MD) and kinetic Monte Carlo (kMC) methods in each subcascade. The third part considered damage structure evolution, estimated by reaction kinetics analysis. The fourth part estimated the change in mechanical properties using three-dimensional discrete dislocation dynamics (DDD). Using this four-part code, stress-strain curves for high-energy proton-irradiated Ni were obtained.
Subtitle-Based Word Frequencies as the Best Estimate of Reading Behavior: The Case of Greek
Dimitropoulou, Maria; Duñabeitia, Jon Andoni; Avilés, Alberto; Corral, José; Carreiras, Manuel
2010-01-01
Previous evidence has shown that word frequencies calculated from corpora based on film and television subtitles can readily account for reading performance, since the language used in subtitles greatly approximates everyday language. The present study examines this issue in a society with increased exposure to subtitle reading. We compiled SUBTLEX-GR, a subtitle-based corpus consisting of more than 27 million Modern Greek words, and tested to what extent subtitle-based frequency estimates and those taken from a written corpus of Modern Greek account for the lexical decision performance of young Greek adults who are exposed to subtitle reading on a daily basis. Results showed that SUBTLEX-GR frequency estimates effectively accounted for participants’ reading performance in two different visual word recognition experiments. More importantly, different analyses showed that frequencies estimated from a subtitle corpus explained the obtained results significantly better than traditional frequencies derived from written corpora. PMID:21833273
NASA Technical Reports Server (NTRS)
Conel, James E.
1990-01-01
Ground-reflectance data on selected targets for calibration of a Landsat TM image of the Wind River Basin, Wyoming, acquired November 21, 1982, were examined. Field-derived calibration relationships, together with Landsat radiometric calibration data, are used to convert scanner DN values to spectral radiance for the TM bands and, together with a simplified homogeneous atmospheric model, to obtain estimates of single-scattering albedo and optical depth consistent with the derived path radiance and transmission properties of the atmosphere. These estimates are used to study the problems of evaluating the magnitude of adjacency effects for reference targets, the assumption of isotropic properties, and the aggregate magnitude of multiple reflections between sky and ground. The radiance calibration equations are also used, together with preflight-measured signal/noise properties of the TM-4 system, to estimate the noise-equivalent reflectance recoverable in practice from the system.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
Boutin, Claude; Geindreau, Christian
2010-09-01
This paper presents a study of transport parameters (diffusion, dynamic permeability, thermal permeability, trapping constant) of porous media by combining the homogenization of periodic media (HPM) and the self-consistent scheme (SCM) based on a bicomposite spherical pattern. The link between the HPM and SCM approaches is first established by using a systematic argument independent of the problem under consideration. It is shown that the periodicity condition can be replaced by zero flux and energy through the whole surface of the representative elementary volume. Consequently the SCM solution can be considered as a geometrical approximation of the local problem derived through HPM for materials such that the morphology of the period is "close" to the SCM pattern. These results are then applied to derive the estimates of the effective diffusion, the dynamic permeability, the thermal permeability and the trapping constant of porous media. These SCM estimates are compared with numerical HPM results obtained on periodic arrays of spheres and polyhedrons. It is shown that SCM estimates provide good analytical approximations of the effective parameters for periodic packings of spheres at porosities larger than 0.6, while the agreement is excellent for periodic packings of polyhedrons in the whole range of porosity.
Komatsu, Misako; Namikawa, Jun; Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka; Nakamura, Kiyohiko; Tani, Jun
2014-01-01
Many previous studies have proposed methods for quantifying neuronal interactions. However, these methods evaluated the interactions between recorded signals in an isolated network. In this study, we present a novel approach for estimating interactions between observed neuronal signals by theorizing that those signals are observed from only a part of the network that also includes unobserved structures. We propose a variant of the recurrent network model that consists of both observable and unobservable units. The observable units represent recorded neuronal activity, and the unobservable units are introduced to represent activity from unobserved structures in the network. The network structures are characterized by connective weights, i.e., the interaction intensities between individual units, which are estimated from recorded signals. We applied this model to multi-channel brain signals recorded from monkeys, and obtained robust network structures with physiological relevance. Furthermore, the network exhibited common features that portrayed cortical dynamics as inversely correlated interactions between excitatory and inhibitory populations of neurons, which are consistent with the previous view of cortical local circuits. Our results suggest that the novel concept of incorporating an unobserved structure into network estimations has theoretical advantages and could provide insights into brain dynamics beyond what can be directly observed. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes
NASA Astrophysics Data System (ADS)
Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac
2012-11-01
In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. Rather than adopting the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We describe various alternatives for implementing the algorithm and benchmark the theoretical findings with concrete applications to random polytopes. The results obtained with our approach are in very good agreement with the estimates produced by the Hit-and-Run algorithm, which is known to produce uniform sampling.
Phase diagram and universality of the Lennard-Jones gas-liquid system.
Watanabe, Hiroshi; Ito, Nobuyasu; Hu, Chin-Kun
2012-05-28
The gas-liquid phase transition of the three-dimensional Lennard-Jones particles system is studied by molecular dynamics simulations. The gas and liquid densities in the coexisting state are determined with high accuracy. The critical point is determined by the block density analysis of the Binder parameter with the aid of the law of rectilinear diameter. From the critical behavior of the gas-liquid coexisting density, the critical exponent of the order parameter is estimated to be β = 0.3285(7). Surface tension is estimated from interface broadening behavior due to capillary waves. From the critical behavior of the surface tension, the critical exponent of the correlation length is estimated to be ν = 0.63(4). The obtained values of β and ν are consistent with those of the Ising universality class.
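The order-parameter exponent quoted above comes from the scaling of the coexisting-density gap, rho_l - rho_g proportional to (1 - T/Tc)^beta. A minimal log-log fit on synthetic, noiseless data illustrates the procedure; the temperature values and Tc below are invented for illustration, not the paper's Lennard-Jones data:

```python
import math

def fit_beta(ts, gaps, tc):
    """Least-squares slope of log(rho_l - rho_g) versus log(1 - T/Tc),
    which estimates the order-parameter critical exponent beta."""
    xs = [math.log(1.0 - t / tc) for t in ts]
    ys = [math.log(g) for g in gaps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

tc, beta_true = 1.30, 0.3285      # illustrative values only
ts = [1.00, 1.05, 1.10, 1.15, 1.20, 1.25]
gaps = [(1.0 - t / tc) ** beta_true for t in ts]
print(round(fit_beta(ts, gaps, tc), 4))  # recovers beta_true = 0.3285
```

On real simulation data the gap carries statistical noise, so the fit window below Tc must be chosen inside the scaling regime.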
Reduced rank regression via adaptive nuclear norm penalization
Chen, Kun; Dong, Hongbo; Chan, Kung-Sik
2014-01-01
We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced-rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. Rank consistency and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
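The computational core described above, an adaptively soft-thresholded SVD in which larger singular values receive smaller shrinkage weights, can be sketched as follows. The specific weight form w_i = d_i**(-gamma) and all parameter values are illustrative assumptions, not necessarily the authors' choices:

```python
import numpy as np

def adaptive_soft_threshold_svd(Y, lam, gamma=2.0):
    """Adaptive soft-thresholding of singular values: each singular value d_i
    is shrunk by lam * w_i with weight w_i = d_i**(-gamma), so larger
    singular values are penalized less (weights decrease with the value)."""
    U, d, Vt = np.linalg.svd(Y, full_matrices=False)
    w = d ** (-gamma)                          # adaptive weights
    d_shrunk = np.maximum(d - lam * w, 0.0)    # soft-threshold each d_i
    return (U * d_shrunk) @ Vt

rng = np.random.default_rng(0)
# Rank-3 signal plus small noise
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
Y = A + 0.01 * rng.standard_normal((20, 15))
X = adaptive_soft_threshold_svd(Y, lam=0.05)
print(np.linalg.matrix_rank(X, tol=1e-6))
```

Because the weight is large for small singular values, noise directions are zeroed out while the dominant signal directions are left nearly untouched, which is what makes the solution path continuous in the penalty level.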
Axial-vector form factors of the nucleon from lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Rajan; Jang, Yong-Chull; Lin, Huey-Wen
In this paper, we present results for the form factors of the isovector axial vector current in the nucleon state using large-scale simulations of lattice QCD. The calculations were done using eight ensembles of gauge configurations generated by the MILC collaboration using the HISQ action with 2+1+1 dynamical flavors. These ensembles span three lattice spacings a ≈ 0.06, 0.09, and 0.12 fm and light-quark masses corresponding to the pion masses M_π ≈ 135, 225, and 310 MeV. High-statistics estimates allow us to quantify systematic uncertainties in the extraction of the axial form factor G_A(Q^2) and the induced pseudoscalar form factor G_P(Q^2). We perform a simultaneous extrapolation in the lattice spacing, lattice volume and light-quark masses of the axial charge radius r_A data to obtain physical estimates. Using the dipole ansatz to fit the Q^2 behavior we obtain r_A|dipole = 0.49(3) fm, which corresponds to M_A = 1.39(9) GeV and is consistent with M_A = 1.35(17) GeV obtained by the MiniBooNE collaboration. The estimate obtained using the z-expansion is r_A|z-expansion = 0.46(6) fm, and the combined result is r_A|combined = 0.48(4) fm. Analysis of the induced pseudoscalar form factor G_P(Q^2) yields low estimates for g*_P and g_πNN compared to their phenomenological values. To understand these, we analyze the partially conserved axial current (PCAC) relation by also calculating the pseudoscalar form factor. Lastly, we find that these low values are due to large deviations in the PCAC relation between the three form factors, and in the pion-pole dominance hypothesis.
Axial-vector form factors of the nucleon from lattice QCD
Gupta, Rajan; Jang, Yong-Chull; Lin, Huey-Wen; ...
2017-12-04
NASA Astrophysics Data System (ADS)
Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng
2017-12-01
Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstration of these parameters presents a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs subject to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, a performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with that under normal life testing. Subsequently, the ADT experimental program was designed for the SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, respectively, and the estimated lifetimes of the SPTRs under normal conditions were obtained through an acceleration model (the Arrhenius model). The results show good agreement between the degradation model and the experimental data. Finally, we obtained the reliability estimate of the SPTRs using the Weibull distribution. The proposed methodology makes it possible to estimate, in less than one year, the reliability of SPTRs designed to operate for more than 10 years.
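The Arrhenius acceleration model used to extrapolate from elevated-temperature ADT to the normal-condition lifetime can be sketched as below. The two accelerated lifetimes and the use temperature are invented for illustration and are not the paper's data:

```python
import math

KB = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_extrapolate(t1, temp1, t2, temp2, temp_use):
    """Fit ln(t) = ln(A) + Ea/(KB*T) through two accelerated-test lifetimes
    (hours, kelvin) and extrapolate the lifetime at the use temperature.
    Returns (lifetime_at_use, activation_energy_eV)."""
    ea = KB * math.log(t1 / t2) / (1.0 / temp1 - 1.0 / temp2)
    ln_a = math.log(t1) - ea / (KB * temp1)
    return math.exp(ln_a + ea / (KB * temp_use)), ea

# Hypothetical lifetimes: 2000 h at 60 C (333.15 K), 4000 h at 50 C (323.15 K),
# extrapolated to a 20 C (293.15 K) use condition.
life_use, ea = arrhenius_extrapolate(2000.0, 333.15, 4000.0, 323.15, 293.15)
print(round(ea, 3), round(life_use))
```

With more than two stress levels (as in the three-temperature ADT above), the same relation would be fitted by least squares rather than solved exactly.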
Grytsenko, Konstantin; Lozovski, Valeri; Strilchuk, Galyna; Schrader, Sigurd
2012-11-07
Nanocomposite films consisting of gold inclusions in a polytetrafluoroethylene (PTFE) matrix were obtained by thermal vacuum deposition. Annealing the obtained films at different temperatures was used to vary the film morphology. The dependence of the optical properties of the films on their morphology was studied. It was established that the absorption and profile of a nanocomposite film obtained by thermal vacuum deposition can be changed by annealing, because different annealing temperatures lead to different average particle sizes. A method to calculate the optical properties of nanocomposite thin films with inclusions of different sizes was proposed. Thus, comparison of experimental optical spectra with the spectra obtained in the simulation enables the average sizes of the inclusions to be estimated. The calculations make it possible to understand the morphological changes in the structures.
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. 
As a particular special case, we also demonstrate the required targeting of the propensity score for the inverse probability of treatment weighted estimator using super-learning to fit the propensity score.
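The inverse-probability-of-treatment-weighted (IPTW) estimator of the treatment-specific mean discussed above can be sketched as follows. For simplicity, the known (true) propensity score is used instead of a super-learned fit, and the data-generating model is invented for illustration:

```python
import math, random

random.seed(1)

def true_propensity(w):
    """Known treatment-assignment probability g(W) = P(A=1 | W)."""
    return 1.0 / (1.0 + math.exp(-w))

def simulate(n=20000):
    """Baseline covariate W, treatment A ~ Bernoulli(g(W)), and outcome Y,
    with true treatment-specific mean E[Y(1)] = 1.0."""
    data = []
    for _ in range(n):
        w = random.gauss(0.0, 1.0)
        a = 1 if random.random() < true_propensity(w) else 0
        noise = random.gauss(0.0, 1.0)
        y = (1.0 + 0.5 * w + noise) if a == 1 else noise
        data.append((w, a, y))
    return data

def iptw_mean(data):
    """IPTW estimator of E[Y(1)]: the average of A*Y / g(W)."""
    return sum(a * y / true_propensity(w) for w, a, y in data) / len(data)

est = iptw_mean(simulate())
print(round(est, 2))
```

The paper's point is precisely that when g(W) must be estimated data-adaptively, plugging the fit into this formula is not enough for valid inference; the nuisance estimate itself must be targeted.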
Boersen, Mark R.; Clark, Joseph D.; King, Tim L.
2003-01-01
The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE=29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.
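Heterogeneity-robust estimates of the kind Program CAPTURE produces under model Mh can be illustrated with the bias-corrected Chao estimator, which needs only the counts of individuals captured exactly once and exactly twice. The capture-frequency data below are hypothetical, not the study's actual capture histories:

```python
def chao_mh(capture_counts):
    """Bias-corrected Chao estimator for model Mh (heterogeneous capture
    probabilities): N = S + f1*(f1 - 1) / (2*(f2 + 1)), where S is the number
    of distinct individuals detected and f1, f2 are the numbers of
    individuals captured exactly once and exactly twice."""
    s = len(capture_counts)
    f1 = sum(1 for c in capture_counts if c == 1)
    f2 = sum(1 for c in capture_counts if c == 2)
    return s + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Hypothetical frequencies for 58 genotyped bears: 30 caught once,
# 15 twice, 8 three times, 5 four times.
counts = [1] * 30 + [2] * 15 + [3] * 8 + [4] * 5
print(round(chao_mh(counts)))  # -> 85
```

The estimator inflates the observed count by the estimated number of individuals never captured, inferred from how many were seen only once.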
NASA Technical Reports Server (NTRS)
DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.
2013-01-01
Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square differences in long-term Tb averages and standard deviations are found to be consistent with the respective estimated total simulation and observation error standard deviations of 3.1 K and 2.4 K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
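At its core, the MCMC machinery above is a random-walk Metropolis sampler over the parameter space. A minimal sketch on a toy one-parameter posterior (a Gaussian likelihood with flat prior, not the RTM misfit used in the paper) is:

```python
import math, random

random.seed(2)

def log_posterior(theta, data, sigma=1.0):
    """Gaussian likelihood with a flat prior: log p(theta | data) up to a constant."""
    return -sum((x - theta) ** 2 for x in data) / (2.0 * sigma ** 2)

def metropolis(data, n_steps=5000, step=0.5, theta0=0.0):
    """Random-walk Metropolis: propose theta' ~ N(theta, step) and accept
    with probability min(1, p(theta') / p(theta))."""
    theta, lp = theta0, log_posterior(theta0, data)
    chain = []
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

data = [random.gauss(3.0, 1.0) for _ in range(50)]
chain = metropolis(data)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
print(round(posterior_mean, 1))
```

The retained chain approximates the posterior, so MAP values and relative uncertainties (as quoted above) can be read off its histogram.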
NASA Astrophysics Data System (ADS)
Moret-Fernández, David; Angulo, Marta; Latorre, Borja; González-Cebollada, César; López, María Victoria
2017-04-01
Determination of the saturated hydraulic conductivity, Ks, and of the α and n parameters of the van Genuchten (1980) water retention curve, θ(h), is fundamental to fully understand and predict soil water distribution. This work presents a new procedure to estimate the soil hydraulic properties from the inverse analysis of a single cumulative upward infiltration curve followed by an overpressure step at the end of the wetting process. First, Ks is calculated from the overpressure step using Darcy's law. The soil sorptivity (S) is then estimated using the Haverkamp et al. (1994) equation. Next, a relationship between α and n, f(α,n), is calculated from the estimated S and Ks. The α and n values are finally obtained by inverse analysis of the experimental data after applying the f(α,n) relationship to the HYDRUS-1D model. The method was validated on theoretical synthetic curves for three different soils (sand, loam and clay), and subsequently tested on experimental sieved soils (sand, loam, clay loam and clay) of known hydraulic properties. A robust relationship was observed between the theoretical α and n values (R2 > 0.99) of the different synthetic soils and those estimated from inverse analysis of the upward infiltration curve. Consistent results were also obtained for the experimental soils (R2 > 0.85). These results demonstrate that the technique allows accurate estimates of the soil hydraulic properties for a wide range of textures, including clay soils.
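The first step, computing Ks from the overpressure step via Darcy's law, reduces to Ks = q * L / dH, where q is the measured steady Darcian flux, L the sample length, and dH the imposed head difference. A sketch with invented column dimensions (not the paper's experimental values):

```python
def ks_from_overpressure(volume_cm3, minutes, area_cm2, head_cm, length_cm):
    """Darcy's law for steady one-dimensional flow: the flux q = V / (A * t)
    equals Ks * (dH / L), hence Ks = q * L / dH (cm/min)."""
    q = volume_cm3 / (area_cm2 * minutes)   # Darcian flux, cm/min
    return q * length_cm / head_cm

# Hypothetical overpressure step: 30 cm3 infiltrated in 5 min through a
# 50 cm2, 5 cm long column under a 20 cm imposed head difference.
ks = ks_from_overpressure(30.0, 5.0, 50.0, 20.0, 5.0)
print(round(ks, 6))  # prints 0.03
```

With Ks fixed this way, only α and n remain free in the subsequent HYDRUS-1D inverse analysis, which is what makes the single-curve inversion well posed.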
New geometric design consistency model based on operating speed profiles for road safety evaluation.
Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo
2013-12-01
To assist in the on-going effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, a value that serves as a surrogate measure of the safety level of a two-lane rural road segment. The consistency model presented in this paper is based on continuous operating speed profiles. The models used for their construction were obtained with an innovative GPS data-collection method based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a closer approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also indexes that consider both local speed decelerations and speeds over posted speeds. For the development of the consistency model, the crash frequency at each study site was considered, which allowed the number of crashes on a road segment to be estimated from its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.
A contact-force regulated photoplethysmography (PPG) platform
NASA Astrophysics Data System (ADS)
Sim, Jai Kyoung; Ahn, Bongyoung; Doh, Il
2018-04-01
A photoplethysmography (PPG) platform integrated with a miniaturized force regulator is proposed in this study. Because a thermo-pneumatic regulator maintains a consistent contact force between the PPG probe and the measuring site, a consistent and stable PPG signal can be obtained. We designed and fabricated a watch-type PPG platform with an overall size of 35 mm × 19 mm. In PPG measurements on the radial artery of the wrist while the wrist posture was changed to extension, neutral, or flexion, regulation of the contact force provided consistent PPG measurements for which the variation in PPG amplitude (PPGA) was 7.2%. The proposed PPG platform can be applied to biosignal measurements in various fields, such as PPG-based ANS monitoring to estimate nociception, sleep apnea syndrome, and psychological stress.
Effective pore size and radius of capture for K+ ions in K-channels
Moldenhauer, Hans; Díaz-Franulic, Ignacio; González-Nilo, Fernando; Naranjo, David
2016-01-01
Reconciling protein functional data with crystal structure is arduous because rare conformations or crystallization artifacts occur. Here we present a tool to validate the dimensions of open pore structures of potassium-selective ion channels. We used freely available algorithms to calculate the molecular contour of the pore and determine the effective internal pore radius (rE) in several K-channel crystal structures. rE was operationally defined as the radius of the biggest sphere able to enter the pore from the cytosolic side. We obtained consistent rE estimates for MthK and Kv1.2/2.1 structures, with rE = 5.3–5.9 Å and rE = 4.5–5.2 Å, respectively. We compared these structural estimates with functional assessments of the internal mouth radii of capture (rC) for two electrophysiological counterparts: the large-conductance calcium-activated K-channel (rC = 2.2 Å) for MthK, and the Shaker Kv-channel (rC = 0.8 Å) for Kv1.2/2.1. Calculating the difference between rE and rC produced consistent radii of 3.1–3.7 Å and 3.6–4.4 Å for hydrated K+ ions. These hydrated K+ estimates harmonize with others obtained with diverse experimental and theoretical methods. Thus, these findings validate the MthK and Kv1.2/2.1 structures as templates for open BK and Kv-channels, respectively. PMID:26831782
NASA Astrophysics Data System (ADS)
Garcin, Matthieu
2017-10-01
Hurst exponents depict the long memory of a time series. For human-dependent phenomena, as in finance, this feature may vary in time. This justifies modelling the dynamics by multifractional Brownian motions, which are consistent with time-dependent Hurst exponents. We improve on the existing literature on estimating time-dependent Hurst exponents by proposing a smooth estimate obtained by variational calculus. The method is very general and not restricted to the Hurst framework alone. It is globally more accurate and simpler than other existing non-parametric estimation techniques. Moreover, in the field of Hurst exponents, it makes it possible to produce forecasts based on the estimated multifractional Brownian motion. The application to high-frequency foreign exchange markets (GBP, CHF, SEK, USD, CAD, AUD, JPY, CNY and SGD, all against EUR) shows significantly good forecasts. When the Hurst exponent is higher than 0.5, which indicates long memory, the accuracy is higher.
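A basic (global, not time-dependent) Hurst estimate can be obtained from the scaling of increment dispersion: for a fractional Brownian motion, std(X[t+lag] - X[t]) grows like lag**H. This simple sketch is not the variational-calculus estimator proposed in the paper, only the underlying scaling idea:

```python
import math, random

random.seed(3)

def hurst_exponent(xs, lags=range(2, 20)):
    """Log-log slope of increment standard deviation versus lag,
    which estimates the Hurst exponent H."""
    log_lag, log_std = [], []
    for lag in lags:
        diffs = [xs[i + lag] - xs[i] for i in range(len(xs) - lag)]
        m = sum(diffs) / len(diffs)
        sd = math.sqrt(sum((d - m) ** 2 for d in diffs) / len(diffs))
        log_lag.append(math.log(lag))
        log_std.append(math.log(sd))
    n = len(log_lag)
    mx, my = sum(log_lag) / n, sum(log_std) / n
    return sum((x - mx) * (y - my) for x, y in zip(log_lag, log_std)) / \
           sum((x - mx) ** 2 for x in log_lag)

# Ordinary Brownian motion (cumulative sum of white noise) has H = 0.5
bm, s = [], 0.0
for _ in range(5000):
    s += random.gauss(0.0, 1.0)
    bm.append(s)
h = hurst_exponent(bm)
print(round(h, 2))
```

A time-dependent estimate, as in the paper, would apply such a local estimator in a sliding window and then smooth the resulting H(t) curve.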
A Height Estimation Approach for Terrain Following Flights from Monocular Vision.
Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz
2016-12-06
In this paper, we present a monocular vision-based height estimation algorithm for terrain-following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain-following problem, as it is still unresolved for consumer-available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information needed to perform terrain-following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, together with its motion information, to estimate the flying height. To determine whether the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy.
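The geometric core of flow-based height estimation is the pinhole-camera relation: for a downward-looking camera translating at ground speed v, a ground feature's image moves at u = f * v / h pixels per second, so h = f * v / u. The numbers below are hypothetical, not the paper's flight data:

```python
def height_from_flow(ground_speed_mps, focal_px, flow_px_per_s):
    """Pinhole relation for a downward-looking translating camera:
    image flow u = f * v / h, hence h = f * v / u (metres)."""
    return focal_px * ground_speed_mps / flow_px_per_s

# Hypothetical UAV: 10 m/s ground speed, 800 px focal length, and a
# 160 px/s average optical flow over tracked ground features.
print(height_from_flow(10.0, 800.0, 160.0))  # prints 50.0
```

Real flows are noisy, which is why the paper adds a trained classifier to flag unreliable height outputs rather than trusting every frame.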
Isonymy structure of four Venezuelan states.
Rodríguez-Larralde, A; Barrai, I; Alfonzo, J C
1993-01-01
The isonymy structure of four Venezuelan States (Falcón, Mérida, Nueva Esparta and Yaracuy) was studied using the surnames of the Venezuelan register of electors updated in 1984. The surname distributions of 155 counties were obtained and, for each county, estimates of consanguinity due to random isonymy and of Fisher's alpha were calculated. It was shown that for large sample sizes the inverse of Fisher's alpha is identical to the unbiased estimate of within-population random isonymy. A three-dimensional isometric surface plot was produced for each State, based on the counties' random isonymy estimates. The highest estimates of random consanguinity were found in the States of Nueva Esparta and Mérida, while the lowest were found in Yaracuy. Other microdifferentiation indicators from the same data gave similar results, and an interpretation was attempted based on the particular economic and geographic characteristics of each State. Four different genetic distances between all possible pairs of counties were calculated within States; geographic distance shows the highest correlations with random isonymy and Euclidean distance, with the exception of the State of Nueva Esparta, where there is no correlation between geographic distance and random isonymy. It was possible to group counties into clusters using dendrograms based on Euclidean distance. The isonymy clustering was also consistent with the socioeconomic and geographic characteristics of the counties.
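The unbiased within-population random isonymy mentioned above is the probability that two people drawn without replacement share a surname, and its inverse gives the large-sample Fisher's alpha. A sketch on an invented toy surname register:

```python
from collections import Counter

def random_isonymy(surnames):
    """Unbiased within-population random isonymy:
    I = sum over surnames of n_i*(n_i - 1) / (N*(N - 1))."""
    counts = Counter(surnames)
    n = len(surnames)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Toy register (hypothetical): 4 Garcia, 3 Perez, 2 Rodriguez, 1 Marcano
names = ["Garcia"] * 4 + ["Perez"] * 3 + ["Rodriguez"] * 2 + ["Marcano"]
iso = random_isonymy(names)
alpha = 1.0 / iso   # large-sample Fisher's alpha, per the equivalence above
print(round(iso, 3), round(alpha, 2))  # prints 0.222 4.5
```

Higher isonymy (lower alpha) indicates fewer effective surname lineages, which is read as higher random consanguinity in a county.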
GPS deformation rates in the Bajo Segura Basin (NE of the Eastern Betic Shear Zone, SE Spain)
NASA Astrophysics Data System (ADS)
Jesús Borque, María; Sánchez-Alzola, Alberto; Estévez, Antonio; García-Tortosa, Francisco J.; Martín-Rojas, Iván; Molina, Sergio; Alfaro, Pedro; Rodríguez-Caderot, Gracia; de Lacy, Clara; García-Armenteros, Juan Antonio; Avilés, Manuel; Herrera, Antonio; Rosa-Cintas, Sergio; Gil, Antonio J.
2014-05-01
The Bajo Segura Basin, located at the NE end of the Eastern Betic Shear Zone, is one of the areas with the highest seismic activity in the Iberian Peninsula. It is bounded by the Crevillente Fault to the north and the Bajo Segura Fault to the south, and it is characterized by a Late Miocene to Quaternary folded cover. We estimate the present-day deformation of the study area from a GPS network with 11 sites. Observation campaigns were carried out four times (June 1999, September 2001, September 2002 and September 2013). We used version 6.2 of the GIPSY-OASIS software to process the GPS data in Precise Point Positioning (PPP) mode. In order to obtain the position time series over the whole period of these episodic campaigns, all the GPS observations from the 1999 to 2013 campaigns were processed with an identical standard procedure. We compared our estimated velocity field with the GEODVEL tectonic model to obtain the residual velocity field of the Bajo Segura Basin. We estimated ~N-S shortening with deformation rates varying between 0.2 and 0.6 mm/yr. These results are consistent with, although slightly higher than, local geological deformation rates. They also fit well with regional geodetic estimates for the Western Mediterranean.
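Site velocities from episodic campaigns like these are commonly obtained as least-squares slopes of the position time series; a minimal per-component sketch (this illustrates only the slope fit, not the GIPSY-OASIS PPP processing itself):

```python
def campaign_velocity(times_yr, positions_mm):
    """Least-squares linear velocity (mm/yr) from campaign
    positions, fitted one coordinate component at a time."""
    n = len(times_yr)
    tbar = sum(times_yr) / n
    pbar = sum(positions_mm) / n
    num = sum((t - tbar) * (p - pbar) for t, p in zip(times_yr, positions_mm))
    den = sum((t - tbar) ** 2 for t in times_yr)
    return num / den
```

Subtracting the plate-model (e.g. GEODVEL) velocity component-wise from the fitted velocity then yields the residual velocity field discussed in the abstract.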
Two-loop hard-thermal-loop thermodynamics with quarks
NASA Astrophysics Data System (ADS)
Andersen, Jens O.; Petitgirard, Emmanuel; Strickland, Michael
2004-08-01
We calculate the quark contribution to the free energy of a hot quark-gluon plasma to two-loop order using hard-thermal-loop (HTL) perturbation theory. All ultraviolet divergences can be absorbed into renormalizations of the vacuum energy and the HTL quark and gluon mass parameters. The quark and gluon HTL mass parameters are determined self-consistently by a variational prescription. Combining the quark contribution with the two-loop HTL perturbation theory free energy for pure glue we obtain the total two-loop QCD free energy. Comparisons are made with lattice estimates of the free energy for Nf=2 and with exact numerical results obtained in the large-Nf limit.
On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle
Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos
2015-01-01
For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
Antarctic meteor observations using the Davis MST and meteor radars
NASA Astrophysics Data System (ADS)
Holdsworth, David A.; Murphy, Damian J.; Reid, Iain M.; Morris, Ray J.
2008-07-01
This paper presents the meteor observations obtained using two radars installed at Davis (68.6°S, 78.0°E), Antarctica. The Davis MST radar was installed primarily for observation of polar mesosphere summer echoes, with additional transmit and receive antennas installed to allow all-sky interferometric meteor radar observations. The Davis meteor radar performs dedicated all-sky interferometric meteor radar observations. The annual count rate variation for both radars peaks in mid-summer and reaches a minimum in early spring. The height distribution shows significant annual variation, with minimum (maximum) peak heights and maximum (minimum) height widths in early spring (mid-summer). Although the meteor radar count rate and height distribution variations are consistent with a similar-frequency meteor radar operating at Andenes (69.3°N), the peak heights show a much larger variation than at Andenes, while the count rate maximum-to-minimum ratios show a much smaller variation. Investigation of the effects of the temporal sampling parameters suggests that these differences are consistent with the different temporal sampling strategies used by the Davis and Andenes meteor radars. The new radiant mapping procedure of Jones and Jones (Mon. Not. R. Astron. Soc., 367(3), 1050-1056, doi:10.1111/j.1365-2966.2006.10025.x, 2006) is investigated. The technique is used to detect the Southern delta-Aquarid meteor shower, and a previously unknown weak shower. Meteoroid speeds obtained using the Fresnel transform are presented. The diurnal, annual, and height variations of meteoroid speeds are presented, with the results found to be consistent with those obtained using specular meteor radars.
Meteoroid speed estimates for echoes identified as Southern delta-Aquarid and Sextantid meteor candidates show good agreement with the theoretical pre-atmospheric speeds of these showers (41 km/s and 32 km/s, respectively). The meteoroid speeds estimated for these showers show decreasing speed with decreasing height, consistent with the effects of meteoroid deceleration. Finally, we illustrate how the new radiant mapping and meteoroid speed techniques can be combined for unambiguous meteor shower detection, and use these techniques to detect a previously unknown weak shower.
Erman, A; Sathya, A; Nam, A; Bielecki, J M; Feld, J J; Thein, H-H; Wong, W W L; Grootendorst, P; Krahn, M D
2018-05-01
Chronic hepatitis C (CHC) is a leading cause of hepatic fibrosis and cirrhosis. The level of fibrosis is traditionally established by histology, and prognosis is estimated using fibrosis progression rates (FPRs; annual probability of progressing across histological stages). However, newer noninvasive alternatives are quickly replacing biopsy. One alternative, transient elastography (TE), quantifies fibrosis by measuring liver stiffness (LSM). Given these developments, the purpose of this study was (i) to estimate prognosis in treatment-naïve CHC patients using TE-based liver stiffness progression rates (LSPR) as an alternative to FPRs and (ii) to compare consistency between LSPRs and FPRs. A systematic literature search was performed using multiple databases (January 1990 to February 2016). LSPRs were calculated using either a direct method (given the difference in serial LSMs and time elapsed) or an indirect method (given a single LSM and the estimated duration of infection) and pooled using random-effects meta-analyses. For validation purposes, FPRs were also estimated. Heterogeneity was explored by random-effects meta-regression. Twenty-seven studies reporting on 39 groups of patients (N = 5874) were identified, with 35 groups allowing for indirect and 8 for direct estimation of LSPR. The majority (~58%) of patients were HIV/HCV-coinfected. The estimated time-to-cirrhosis based on TE vs biopsy was 39 and 38 years, respectively. In univariate meta-regressions, male sex and HIV coinfection were positively associated with LSPRs, whereas age at assessment was negatively associated. Noninvasive prognosis of HCV is consistent with FPRs in predicting time-to-cirrhosis, but more longitudinal studies of liver stiffness are needed to obtain refined estimates. © 2017 John Wiley & Sons Ltd.
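The direct and indirect LSPR calculations, and the resulting time-to-threshold projection, reduce to simple arithmetic; a hedged sketch (the stiffness baseline and clinical cutoff are left as caller-supplied parameters, since the study's exact values are not reproduced here):

```python
def lspr_direct(lsm_first, lsm_last, years_between):
    """Liver stiffness progression rate (kPa/yr) from two serial
    measurements: the direct method."""
    return (lsm_last - lsm_first) / years_between

def lspr_indirect(lsm, lsm_healthy, years_infected):
    """Progression rate from a single measurement and the estimated
    infection duration, relative to an assumed healthy baseline:
    the indirect method."""
    return (lsm - lsm_healthy) / years_infected

def years_to_threshold(lsm_now, lsm_threshold, rate):
    """Projected time until stiffness reaches a clinical cutoff,
    assuming a constant progression rate."""
    return (lsm_threshold - lsm_now) / rate
```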
Design of Low-Cost Vehicle Roll Angle Estimator Based on Kalman Filters and an IoT Architecture.
Garcia Guzman, Javier; Prieto Gonzalez, Lisardo; Pajares Redondo, Jonatan; Sanz Sanchez, Susana; Boada, Beatriz L
2018-06-03
In recent years, there have been many advances in vehicle technologies based on the efficient use of real-time data provided by embedded sensors. Some of these technologies, such as Roll Stability Control (RSC) systems for commercial vehicles, can help avoid a crash or reduce its severity. In RSC, several critical variables, such as sideslip or roll angle, can only be measured directly using expensive equipment, and such devices would increase the price of commercial vehicles. Nevertheless, sideslip and roll angle values can be estimated using MEMS sensors in combination with data fusion algorithms. The objectives stated for this research work consist of integrating roll angle estimators based on Linear and Unscented Kalman filters to evaluate the precision of the results obtained, and of determining whether the hard real-time processing constraints are fulfilled so that such estimators can be embedded in IoT architectures based on low-cost equipment deployable in commercial vehicles. An experimental testbed composed of a van with two low-cost kits was set up, the first one including a Raspberry Pi 3 Model B and the other an Intel Edison System on Chip. This experimental environment was tested under different conditions for comparison. The results obtained from the low-cost experimental kits, based on IoT architectures and including estimators based on Kalman filters, provide accurate roll angle estimation. These results also show that the time needed to acquire the data and execute the Kalman-filter-based estimations fulfills hard real-time constraints.
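As a hedged illustration of the filtering idea (not the paper's actual Linear or Unscented Kalman formulations), a one-state roll-angle filter that integrates the gyro rate and corrects with the accelerometer-derived angle might look like:

```python
def kalman_roll(gyro_rates, accel_angles, dt, q=1e-3, r=1e-1):
    """Minimal 1-state linear Kalman filter for roll angle (rad):
    predict by integrating the gyro rate, correct with the
    accelerometer-derived roll angle. q and r are illustrative
    process and measurement noise variances."""
    x, p = accel_angles[0], 1.0
    estimates = []
    for w, z in zip(gyro_rates, accel_angles):
        x += w * dt            # predict: integrate gyro rate
        p += q
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # correct with accelerometer angle
        p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

The gyro integration tracks fast dynamics while the accelerometer correction removes drift; the trade-off is set by the ratio q/r.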
Ranger, R; Butler, P; Yahnke, C; Valentino, D
2012-06-01
To develop and validate an Optically Stimulated Luminescent (OSL) dosimeter for exposure control verification of x-ray projection mammography imaging systems. The active detection element of the dosimeter is a strip of OSL material 3.0 mm wide, 0.13 mm thick and 30.0 mm long, with an overlying aluminum step wedge with thicknesses of 0, 0.2, 0.4 and 0.6 mm Al, encapsulated in a light-tight plastic enclosure with outer dimensions of 10.0 mm wide, 5.4 mm thick, and 54.0 mm long. The dosimeter is used in conjunction with a breast phantom to estimate the half-value layer (HVL), entrance surface exposure (ESE), and average glandular dose (AGD) in conventional projection mammography. ESE and HVL were computed from analysis of exposure profiles obtained from exposed strip dosimeters. The AGD was estimated by multiplying the ESE by the appropriate exposure-to-dose conversion factor for the thickness and percent glandular tissue fraction represented by the phantom and the target-filter combination employed. The accuracy and reproducibility of the ESE, HVL and AGD estimates obtained using the dosimeter, positioned on the surface of the ACR phantom at the chest wall edge, were evaluated on mammography systems utilizing different imaging receptor technologies, i.e., screen-film (SF), computed radiography (CR) and direct radiography (DR), and compared against results obtained using a calibrated ion chamber fitted with a mammography probe. ESE, AGD and HVL results obtained using the OSL mammography QA dosimeter agreed with results obtained using an ion chamber to within 5-10%, depending on the target-filter combination used. Repeat readings were highly consistent, with a coefficient of variation of 5%. The OSL mammography QA dosimeter has been shown to effectively estimate ESE, HVL and AGD, demonstrating its usefulness for secondary monitoring of output exposure of mammography imaging systems. © 2012 American Association of Physicists in Medicine.
Rule-Based Flight Software Cost Estimation
NASA Technical Reports Server (NTRS)
Stukes, Sherry A.; Spagnuolo, John N. Jr.
2015-01-01
This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), Proposals, and for the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW, using SEER-SEM as an illustration of these principles where appropriate. Since Source Lines of Code (SLOC) is a major cost driver, a discussion of various SLOC data sources for the preparation of the estimates is given, together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Approaches for obtaining consistency in code counting are also presented, together with the factors used in reconciling SLOC estimates from different code counters. When sufficient data is obtained, a mapping from the SEER-SEM output into the JPL Work Breakdown Structure (WBS) is illustrated. For across-the-board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions, with the SLOC, data description, brief mission description and the most relevant SEER-SEM parameter values, is given to encapsulate the input and calculated data involved in the estimates. The rule-based expert system described provides the user with inputs useful or sufficient to run generic cost estimation programs. The system is implemented in the C Language Integrated Production System (CLIPS), which is addressed at the end of this paper.
NASA Astrophysics Data System (ADS)
Delia García, Rosa; Cuevas, Emilio; García, Omaira Elena; Ramos, Ramón; Romero-Campos, Pedro Miguel; de Ory, Fernado; Cachorro, Victoria Eugenia; de Frutos, Angel
2017-03-01
A 1-year inter-comparison of classical and modern radiation and sunshine duration (SD) instruments has been performed at Izaña Atmospheric Observatory (IZO), located in Tenerife (Canary Islands, Spain), starting on 17 July 2014. We compare daily global solar radiation (GSRH) records measured with a Kipp & Zonen CM-21 pyranometer, taken in the framework of the Baseline Surface Radiation Network, with those measured with a multifilter rotating shadowband radiometer (MFRSR), a bimetallic pyranometer (PYR) and GSRH estimated from sunshine duration performed by a Campbell-Stokes sunshine recorder (CS) and a Kipp & Zonen sunshine duration sensor (CSD). Given that the BSRN GSRH records passed strict quality controls (based on principles of physical limits and comparison with the LibRadtran model), they have been used as the reference in the inter-comparison study. We obtain an overall root mean square error (RMSE) of ~0.9 MJ m-2 (4%) for PYR and MFRSR GSRH, and of 1.9 MJ m-2 (7%) and 1.2 MJ m-2 (5%) for CS and CSD GSRH, respectively. Factors such as temperature, relative humidity (RH) and the solar zenith angle (SZA) have been shown to moderately affect the GSRH observations. As an application of the methodology developed in this work, we have re-evaluated the GSRH data time series obtained at IZO with two PYRs between 1977 and 1991. Their high consistency and temporal stability have been proved by comparing with GSRH estimates obtained from SD observations. These results demonstrate that (1) the continuous-basis inter-comparison of different GSRH techniques offers important diagnostics for identifying inconsistencies between GSRH data records, and (2) the GSRH measurements performed with classical and simpler instruments are consistent with more modern techniques and thus valid to recover GSRH data time series and complete worldwide distributed GSRH data.
The inter-comparison and quality assessment of these different techniques have allowed us to obtain a complete and consistent long-term global solar radiation series (1977-2015) at Izaña.
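The RMSE metric used throughout the comparison is the standard one; for two equally sampled daily series:

```python
import math

def rmse(measured, reference):
    """Root mean square error between two equally sampled series,
    e.g. daily GSR from a test instrument vs the BSRN reference."""
    return math.sqrt(
        sum((m - r) ** 2 for m, r in zip(measured, reference)) / len(reference)
    )
```

Dividing the RMSE by the mean of the reference series gives the relative figures (4%, 5%, 7%) quoted in the abstract.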
Copula-based prediction of economic movements
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.; Hirsh, I. D.
2016-06-01
In this paper we model the discretized returns of two paired time series, the BM&FBOVESPA Dividend Index and the BM&FBOVESPA Public Utilities Index, using multivariate Markov models. The discretization corresponds to three categories: high losses, high profits and the complementary periods of the series. In technical terms, the maximal memory that can be considered for a Markov model can be derived from the size of the alphabet and of the dataset. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain. In this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achievable using standard procedures. The new strategy consists of obtaining a partition of the state space constructed from a combination of the partitions corresponding to the two marginal processes and the partition corresponding to the multivariate Markov chain. In order to estimate the transition probabilities, all the partitions are linked using a copula. In our application this strategy provides a significant improvement in the movement predictions.
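The baseline estimation step, before any state-space partitioning or copula linking, is the usual count-based estimate of the transition probabilities of a discretized chain; a minimal sketch:

```python
from collections import Counter, defaultdict

def transition_probs(states, order=1):
    """Maximum-likelihood transition probabilities of a discrete
    Markov chain of the given order, estimated by counting
    context -> next-symbol occurrences."""
    counts = defaultdict(Counter)
    for i in range(order, len(states)):
        counts[tuple(states[i - order:i])][states[i]] += 1
    return {ctx: {s: c / sum(cnt.values()) for s, c in cnt.items()}
            for ctx, cnt in counts.items()}
```

The parameter explosion the abstract mentions is visible here: with alphabet size a and dimension d, the number of contexts grows as a^(d*order), which is what motivates merging contexts into a coarser partition.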
NASA Astrophysics Data System (ADS)
Flores-Estrella, H.; Aguirre, J.; Boore, D.; Yussim, S.
2001-12-01
Microtremor recordings have become a useful tool for microzonation studies in countries with low to moderate seismicity, and also in countries where there are few seismographs or the recurrence time for an earthquake is quite long. Microtremor recordings can be made at almost any time and any place without needing to wait for an earthquake. The measurements can be made using one station or an array of stations. Microtremor recordings can be used to estimate site response directly (e.g. by using Nakamura's technique), or they can be used to estimate shear-wave velocities, from which site response can be calculated. A number of studies have found that the direct estimation of site response may be unreliable, except for identifying the fundamental resonant period of a site. Obtaining shear-wave velocities requires inverting measurements of Rayleigh wave phase velocities from microtremors, which are obtained by using the Spatial Autocorrelation (SPAC) (Aki, 1957) or the Frequency-Wave Number (F-K) (Horike, 1985) methods. Estimating shear-wave velocities from microtremor recordings is a cheaper alternative to direct methods, such as the logging of boreholes. In this work we use simultaneous microtremor recordings from triangular arrays located at two sites in Mexico City, Mexico, one ("Texcoco") with a lacustrine sediment layer of about 200 m depth, and the other ("Ciudad Universitaria") underlain by 2000-year-old basaltic flows from Xitle volcano. The data are analyzed using both the SPAC method and the standard F-K method. The results obtained from the SPAC method are more consistent with expectations from the geological conditions and an empirical transfer function (Montalvo et al., 2001) than those from the F-K method. We also analyze data from the Hollister Municipal Airport in California. The triangular array at this site is located near a borehole from which seismic velocities have been obtained using a downhole logging method (Liu et al., 2000).
We compare results from the microtremor recordings analyzed using both the SPAC and F-K methods with those obtained from the downhole logging.
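The SPAC inversion rests on the relation that the azimuthally averaged coherency of an array of radius r equals J0(2*pi*f*r/c); a hedged sketch of recovering the phase velocity on the first descending branch of J0 (the numerical J0 and the bisection search are our illustrative choices, not the cited implementations):

```python
import math

def j0(x, n=2000):
    """Bessel function J0 via its integral representation
    J0(x) = (1/pi) * integral_0^pi cos(x*sin(t)) dt."""
    return sum(math.cos(x * math.sin(math.pi * i / n)) for i in range(n)) / n

def spac_phase_velocity(coherence, freq_hz, radius_m):
    """Invert the SPAC relation coherence = J0(2*pi*f*r/c) for the
    phase velocity c, restricted to the first descending branch
    of J0, where the mapping is one-to-one."""
    lo, hi = 1e-6, 2.404825  # J0 decreases from 1 to 0 on this interval
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if j0(mid) > coherence:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return 2.0 * math.pi * freq_hz * radius_m / x
```

Repeating the inversion across frequencies yields the dispersion curve c(f), which is then inverted for the shear-wave velocity profile.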
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm/day) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate).
Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm/day, with proportionate reductions in latent heating sampling errors.
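The Bayesian compositing step can be sketched as a radiance-weighted average over the cloud-radiative database (a schematic illustration with Gaussian weights; the operational algorithm's error model and channel weighting are more elaborate):

```python
import math

def bayes_rain_estimate(observed_tb, database, sigma):
    """Bayesian composite estimate: average the rain rates of the
    database profiles, weighted by how radiatively consistent each
    simulated brightness-temperature vector is with the observation.

    database: iterable of (simulated_tb_vector, rain_rate) pairs.
    sigma: assumed radiance error standard deviation (K).
    """
    num = den = 0.0
    for simulated_tb, rain_rate in database:
        d2 = sum((o - s) ** 2 for o, s in zip(observed_tb, simulated_tb))
        w = math.exp(-0.5 * d2 / sigma ** 2)
        num += w * rain_rate
        den += w
    return num / den
```

Profiles radiatively far from the observation receive negligible weight, so the estimate is dominated by the radiatively consistent subset of the database, as the abstract describes.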
Maina, Isabella; Wanjala, Pepela; Soti, David; Kipruto, Hillary; Droti, Benson; Boerma, Ties
2017-10-01
To develop a systematic approach to obtain the best possible national and subnational statistics for maternal and child health coverage indicators from routine health-facility data. Our approach aimed to obtain improved numerators and denominators for calculating coverage at the subnational level from health-facility data. This involved assessing data quality and determining adjustment factors for incomplete reporting by facilities, then estimating local target populations based on interventions with near-universal coverage (first antenatal visit and first dose of pentavalent vaccine). We applied the method to Kenya at the county level, where routine electronic reporting by facilities is in place via the district health information software system. Reporting completeness for facility data was well above 80% in all 47 counties and the consistency of data over time was good. Coverage of the first dose of pentavalent vaccine, adjusted for facility reporting completeness, was used to obtain estimates of the county target populations for maternal and child health indicators. The county and national statistics for the four-year period 2012/13 to 2015/16 showed good consistency with the results of the 2014 Kenya demographic and health survey. Our results indicated a stagnation of immunization coverage in almost all counties, a rapid increase in facility-based deliveries and caesarean sections and limited progress in antenatal care coverage. While surveys will continue to be necessary to provide population-based data, web-based information systems for health-facility reporting provide an opportunity for more frequent, local monitoring of progress in maternal and child health.
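The adjustment logic reduces to inflating the reported numerator by the facility reporting completeness and dividing by the estimated target population; a minimal sketch (names illustrative):

```python
def adjusted_coverage(reported_events, reporting_completeness, target_pop):
    """Coverage from routine facility data: correct the reported
    numerator for incomplete facility reporting (completeness as a
    fraction, e.g. 0.85), then divide by the estimated local
    target population."""
    return (reported_events / reporting_completeness) / target_pop
```

The target population itself is derived in the paper from a near-universal intervention (first pentavalent dose), so that denominators stay consistent with the facility data rather than with possibly outdated census projections.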
NASA Technical Reports Server (NTRS)
Zuev, V. E.; Kostin, B. S.; Naats, I. E.
1986-01-01
The methods of multifrequency laser sounding (MLS) are the most effective remote methods for investigating atmospheric aerosols, since they make it possible to obtain complete information on aerosol microstructure and to develop effective methods for estimating the aerosol optical constants. MLS data interpretation consists of solving the set of equations containing the laser sounding equations and the equations for polydispersed optical characteristics. As a rule, the laser sounding equation is written in the single-scattering approximation, and the equations for optical characteristics are written assuming that the atmospheric aerosol is formed by spherical and homogeneous particles. To remove the indeterminacy of the equations, a method of optical sounding of atmospheric aerosol was suggested, consisting of the joint use of a multifrequency lidar and a spectral photometer in a common geometrical scheme of the optical experiment. The method is used for investigating aerosols in cases where absorption by particles is small, and it indicates the minimum necessary for interpretation of a series of measurements.
Lunar PMAD technology assessment
NASA Technical Reports Server (NTRS)
Metcalf, Kenneth J.
1992-01-01
This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for space exploration initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.
NASA Astrophysics Data System (ADS)
Wang, Jingcheng; Luo, Jingrun
2018-04-01
Due to the extremely high particle volume fraction (greater than 85%) and the damage features of polymer bonded explosives (PBXs), conventional micromechanical methods lead to inaccurate estimates of their effective elastic properties. According to their manufacture characteristics, a multistep approach based on micromechanical methods is proposed. PBXs are treated as pseudo poly-crystal materials consisting of equivalent composite particles (explosive crystals with binder coating), rather than two-phase composites composed of explosive particles and a binder matrix. Moduli of the composite spheres are first obtained by the generalized self-consistent method, and the self-consistent method is modified to calculate the effective moduli of the PBX. Defects and the particle size distribution are considered via the Mori-Tanaka method. Results show that when the multistep approach is applied to PBX 9501, estimates are far more accurate than the conventional micromechanical results: the bulk modulus is 5.75% higher, and the shear modulus 5.78% lower, than the experimental values. Further analyses reveal that while the particle volume fraction and the binder's properties have significant influences on the effective moduli of PBX, the moduli of the particles have only a minor influence. Investigation of another particle size distribution indicates that the use of more fine particles will enhance the effective moduli of PBX.
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
Inertial sensor-based smoother for gait analysis.
Suh, Young Soo
2014-12-17
An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimation than filter-based algorithms by using all of the sensor data instead of only the data up to the current time. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain an initial foot motion estimate. In the second part, the error in the initial estimate is compensated using a smoother, where the problem is formulated as a quadratic optimization problem. An efficient solution of the quadratic optimization problem is given by exploiting its sparse structure. Through experiments, it is shown that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is off the floor: the z-axis position error squared sum (total time: 3.47 s) when the foot is in the air is 0.0807 m2 with the Kalman filter versus 0.0020 m2 with the proposed smoother.
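As a hedged, much-simplified illustration of why a smoother outperforms a filter (a scalar random-walk state with a Rauch-Tung-Striebel backward pass, not the paper's sparse quadratic-optimization formulation):

```python
def kalman_rts_1d(zs, q=0.01, r=0.1):
    """Forward Kalman filter on a scalar random-walk state followed
    by a Rauch-Tung-Striebel backward pass, so every estimate uses
    all of the measurements rather than only past ones."""
    n = len(zs)
    xf, pf, xp, pp = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
    x, p = zs[0], r
    for i, z in enumerate(zs):
        xp[i], pp[i] = x, p + q              # predict
        k = pp[i] / (pp[i] + r)              # Kalman gain
        x = xp[i] + k * (z - xp[i])          # update
        p = (1.0 - k) * pp[i]
        xf[i], pf[i] = x, p
    xs = xf[:]
    for i in range(n - 2, -1, -1):           # backward smoothing pass
        c = pf[i] / pp[i + 1]
        xs[i] = xf[i] + c * (xs[i + 1] - xf[i])
    return xs
```

The backward pass pulls each filtered estimate toward information from later measurements, which is the same principle that lets the paper's off-line smoother sharply reduce the in-air position error.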
Comparison of Brownian-dynamics-based estimates of polymer tension with direct force measurements.
Arsenault, Mark E; Purohit, Prashant K; Goldman, Yale E; Shuman, Henry; Bau, Haim H
2010-11-01
With the aid of Brownian dynamics models, it is possible to estimate polymer tension by monitoring the polymer's transverse thermal fluctuations. To assess the precision of the approach, Brownian-dynamics-based tension estimates were compared with the force applied to rhodamine-phalloidin-labeled actin filaments bound to polymer beads and suspended between two optical traps. The transverse thermal fluctuations of each filament were monitored with a CCD camera, and the images were analyzed to obtain the filament's transverse displacement variance as a function of position along the filament, the filament's tension, and the camera's exposure time. A linear Brownian dynamics model was used to estimate the filament's tension. The estimated force agreed with the applied trap force within 30% (when the tension was <0.1 pN) and 70% (when the tension was <1 pN). In addition, the paper presents concise asymptotic expressions for the mechanical compliance of a system consisting of a filament attached tangentially to bead handles (a dumbbell system). The techniques described here can be used for noncontact estimates of polymer and fiber tension.
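For an idealized taut string with pinned ends, equipartition gives the transverse variance profile var[y(x)] = kB*T*x*(L - x) / (F*L), which can be inverted for the tension F. A hedged sketch of that inversion (an idealization that ignores bending stiffness, bead compliance and camera exposure blur, all of which the paper's model treats):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def tension_from_variance(var_y, x, length, temp_k=295.0):
    """Tension (N) of an ideal taut string from its transverse
    thermal displacement variance var_y (m^2) measured at position
    x along a string of the given length (m), by inverting
    var_y = kB*T*x*(L - x) / (F*L)."""
    return K_B * temp_k * x * (length - x) / (length * var_y)
```

The variance is largest at midspan and vanishes at the pinned ends, which is why measuring the variance profile along the filament, as in the paper, constrains the tension well.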
Gilardelli, Carlo; Orlando, Francesca; Movedi, Ermes; Confalonieri, Roberto
2018-03-29
Digital hemispherical photography (DHP) has been widely used to estimate leaf area index (LAI) in forestry. Despite advances in the processing of hemispherical images with dedicated tools, several steps are still manual and thus easily affected by the user's experience and sensitivity. The purpose of this study was to quantify the impact of user subjectivity on DHP LAI estimates for broad-leaved woody canopies using the software Can-Eye. Following the ISO 5725 protocol, we quantified the repeatability and reproducibility of the method, thus defining its precision for a wide range of broad-leaved canopies differing markedly in structure. To obtain a complete evaluation of the method's accuracy, we also quantified its trueness using artificial canopy images with known canopy cover. Moreover, the effect of the segmentation method was analysed. The best precision (narrowest limits of repeatability and reproducibility) was obtained for high LAI values (>5), with limits corresponding to a variation of 22% in the estimated LAI values. Poorer results were obtained for medium and low LAI values, with a variation in the estimated LAI values that exceeded 40%. Regardless of the LAI range explored, satisfactory results were achieved for trees in row-structured plantations (limits close to 30% of the estimated LAI). Satisfactory results were achieved for trueness, regardless of the canopy structure. A paired t-test revealed that the effect of the segmentation method on LAI estimates was significant. Despite a non-negligible user effect, the accuracy metrics for DHP are consistent with those determined for other indirect methods of LAI estimation, confirming the overall reliability of DHP in broad-leaved woody canopies. PMID:29596376
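The repeatability and reproducibility limits of ISO 5725 can be sketched from a balanced one-way layout (operators × replicates). This is a simplified illustration of the standard's variance decomposition, not its full procedure, and the LAI values below are hypothetical.

```python
import numpy as np

def iso5725_limits(data):
    """Repeatability (r) and reproducibility (R) limits in the spirit of
    ISO 5725 for a balanced layout: rows = operators, columns = replicate
    estimates of the same quantity.  Limits are 2.8 * standard deviation
    (the 95 % critical difference for two results)."""
    data = np.asarray(data, dtype=float)
    p, n = data.shape
    row_means = data.mean(axis=1)
    s_r2 = ((data - row_means[:, None]) ** 2).sum() / (p * (n - 1))  # within-operator
    s_L2 = max(row_means.var(ddof=1) - s_r2 / n, 0.0)               # between-operator
    s_R2 = s_L2 + s_r2                                               # reproducibility
    return 2.8 * np.sqrt(s_r2), 2.8 * np.sqrt(s_R2)

# Hypothetical replicate LAI estimates of one canopy by three users:
lai = [[4.8, 5.0, 4.9],
       [5.3, 5.2, 5.4],
       [4.6, 4.7, 4.5]]
r_lim, R_lim = iso5725_limits(lai)
```

Because reproducibility adds the between-user variance on top of the within-user variance, R_lim ≥ r_lim always holds; a large gap between the two is exactly the "user effect" the study quantifies.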
Stoklosa, Michal; Ross, Hana
2014-05-01
To compare two different methods for estimating the size of the illicit cigarette market with each other, and to contrast the estimates obtained by these two methods with the results of an industry-commissioned study. We used two observational methods: collection of data from packs in smokers' personal possession, and collection of data from packs discarded on streets. The data were obtained in Warsaw, Poland in September 2011 and October 2011. We used tests of independence to compare the results of the two methods, and to contrast them with the estimate from the industry-commissioned discarded-pack collection conducted in September 2011. The proportions of cigarette packs classified as not intended for the Polish market estimated by our two methods were not statistically different: 14.6% (95% CI 10.8% to 19.4%) using the survey data (N=400) and 15.6% (95% CI 13.2% to 18.4%) using the discarded pack data (N=754). The industry estimate (22.9%) was nearly 50% higher than our estimates, a statistically significant difference. Our findings are consistent with previous evidence of the tobacco industry exaggerating the scope of illicit trade and with the general pattern of the industry manipulating evidence to mislead the debate on tobacco control policy in many countries. Collaboration between governments and the tobacco industry to estimate tobacco tax avoidance and evasion is likely to produce upward-biased estimates of illicit cigarette trade. If governments are presented with industry estimates, they should strictly require disclosure of all methodological details and data used in generating these estimates, and should seek advice from independent experts. Published by the BMJ Publishing Group Limited.
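The comparison of the two sampling methods reduces to a test of independence on two proportions plus confidence intervals. A sketch follows; the counts are back-calculated approximations from the reported percentages and sample sizes, not the study's raw data, and the Wilson interval shown is one common choice (the paper does not state which interval it used).

```python
import numpy as np
from scipy.stats import norm, chi2_contingency

def wilson_ci(k, n, alpha=0.05):
    """Wilson score interval for a binomial proportion k/n."""
    z = norm.ppf(1 - alpha / 2)
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Approximate counts of 'foreign' packs out of packs examined:
survey = (58, 400)      # ~14.6 % from smokers' possession
discarded = (118, 754)  # ~15.6 % from street collection

# Test of independence between collection method and pack classification:
table = [[survey[0], survey[1] - survey[0]],
         [discarded[0], discarded[1] - discarded[0]]]
chi2, pval, _, _ = chi2_contingency(table)
lo, hi = wilson_ci(*survey)
```

A non-significant p-value here mirrors the paper's finding that the two observational methods agree with each other, which strengthens the contrast with the higher industry figure.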
Thermal radiative properties: Nonmetallic solids.
NASA Technical Reports Server (NTRS)
Touloukian, Y. S.; Dewitt, D. P.
1972-01-01
The volume consists of a text on theory, estimation, and measurement, together with its bibliography, the main body of numerical data and its references, and the material index. The text material assumes a role complementary to the main body of numerical data. The physics and basic concepts of thermal radiation are discussed in detail, focusing attention on treatment of nonmetallic materials: theory, estimation, and methods of measurement. Numerical data is presented in a comprehensive manner. The scope of coverage includes the nonmetallic elements and their compounds, intermetallics, polymers, glasses, and minerals. Analyzed data graphs provide an evaluative review of the data. All data have been obtained from their original sources, and each data set is so referenced.
Luo, Y.; Xu, Y.; Liu, Q.; Xia, J.
2008-01-01
In recent years, multichannel analysis of surface waves (MASW) has been increasingly used for obtaining vertical shear-wave velocity profiles within near-surface materials. MASW uses a multichannel recording approach to capture the time-variant, full-seismic wavefield where dispersive surface waves can be used to estimate near-surface S-wave velocity. The technique consists of (1) acquisition of broadband, high-frequency ground roll using a multichannel recording system; (2) efficient and accurate algorithms that allow the extraction and analysis of 1D Rayleigh-wave dispersion curves; (3) stable and efficient inversion algorithms for estimating S-wave velocity profiles; and (4) construction of the 2D S-wave velocity field map.
Equivalent linearization for fatigue life estimates of a nonlinear structure
NASA Technical Reports Server (NTRS)
Miles, R. N.
1989-01-01
An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other, more accurate methods. The excitation of the plate is assumed to be Gaussian white noise and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization that avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
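Conventional equivalent linearization for a single-mode model amounts to a fixed-point iteration on the response variance. The sketch below uses a generic Duffing-type oscillator with illustrative parameters (not those of the cited plate model), with the Gaussian closure E[x⁴] = 3σ⁴ that the paper's modified method avoids.

```python
import math

def eq_linearization_variance(omega=10.0, zeta=0.02, eps=0.5, S0=1e-3,
                              tol=1e-12, itmax=200):
    """Stationary response variance of the single-mode oscillator
        x'' + 2*zeta*omega*x' + omega**2 * (x + eps*x**3) = w(t)
    under Gaussian white noise with two-sided PSD S0, by equivalent
    linearization.  The Gaussian closure E[x^4] = 3*sigma^4 gives the
    equivalent stiffness k_eq = omega**2 * (1 + 3*eps*sigma2), and the
    linear-system variance sigma2 = pi*S0 / (2*zeta*omega*k_eq) is
    iterated to a fixed point."""
    c = 2.0 * zeta * omega
    sigma2 = math.pi * S0 / (c * omega**2)  # linear (eps = 0) starting point
    for _ in range(itmax):
        k_eq = omega**2 * (1.0 + 3.0 * eps * sigma2)
        new = math.pi * S0 / (c * k_eq)
        if abs(new - sigma2) < tol:
            break
        sigma2 = new
    return sigma2

sigma2 = eq_linearization_variance()
```

The hardening nonlinearity stiffens the equivalent system, so the converged variance is always below the linear value; fatigue-life estimates then follow from the variance and the cycle-counting model.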
Cooley, Richard L.
1982-01-01
Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.
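The second scale of prior information, best available estimates treated as fixed, can be sketched as least squares shrunk toward the prior values. This is a simplified penalized-regression illustration, not Cooley's exact groundwater formulation with its two auxiliary parameters; all data below are synthetic.

```python
import numpy as np

def prior_informed_lsq(X, y, b_prior, lam):
    """Least squares shrunk toward prior parameter estimates:
        min_b ||y - X b||^2 + lam * ||b - b_prior||^2
    Closed form: b = (X'X + lam*I)^-1 (X'y + lam*b_prior).
    With b_prior = 0 this reduces to standard ridge regression, which is
    why shrinking toward informative priors rather than zero typically
    carries less bias."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p),
                           X.T @ y + lam * b_prior)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
b_true = np.array([1.0, -2.0, 0.5])
y = X @ b_true + 0.1 * rng.normal(size=50)
b_hat = prior_informed_lsq(X, y, b_prior=b_true, lam=10.0)
```

As lam grows the estimate approaches the prior; as lam goes to zero it recovers ordinary least squares, so lam plays the role of the prior's assumed reliability.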
Travaglini, Davide; Fattorini, Lorenzo; Barbati, Anna; Bottalico, Francesca; Corona, Piermaria; Ferretti, Marco; Chirici, Gherardo
2013-04-01
A correct characterization of the status and trend of forest condition is essential to support reporting processes at national and international level. An international forest condition monitoring has been implemented in Europe since 1987 under the auspices of the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests). The monitoring is based on harmonized methodologies, with individual countries being responsible for its implementation. Due to inconsistencies and problems in sampling design, however, the ICP Forests network is not able to produce reliable quantitative estimates of forest condition at European and sometimes at country level. This paper proposes (1) a set of requirements for status and change assessment and (2) a harmonized sampling strategy able to provide unbiased and consistent estimators of forest condition parameters and of their changes at both country and European level. Under the assumption that a common definition of forest holds among European countries, monitoring objectives, parameters of concern and accuracy indexes are stated. On the basis of fixed-area plot sampling performed independently in each country, an unbiased and consistent estimator of forest defoliation indexes is obtained at both country and European level, together with conservative estimators of their sampling variance and power in the detection of changes. The strategy adopts a probabilistic sampling scheme based on fixed-area plots selected by means of systematic or stratified schemes. Operative guidelines for its application are provided.
NASA Technical Reports Server (NTRS)
Ganachaud, Alexandre; Wunsch, Carl; Kim, Myung-Chan; Tapley, Byron
1997-01-01
A global estimate of the absolute oceanic general circulation from a geostrophic inversion of in situ hydrographic data is tested against and then combined with an estimate obtained from TOPEX/POSEIDON altimetric data and a geoid model computed using the JGM-3 gravity-field solution. Within the quantitative uncertainties of both the hydrographic inversion and the geoid estimate, the two estimates derived by very different methods are consistent. When the in situ inversion is combined with the altimetry/geoid scheme using a recursive inverse procedure, a new solution, fully consistent with both hydrography and altimetry, is found. There is, however, little reduction in the uncertainties of the calculated ocean circulation and its mass and heat fluxes because the best available geoid estimate remains noisy relative to the purely oceanographic inferences. The conclusion drawn from this is that the comparatively large errors present in the existing geoid models now limit the ability of satellite altimeter data to improve directly the general ocean circulation models derived from in situ measurements. Because improvements in the geoid could be realized through a dedicated spaceborne gravity recovery mission, the impact of hypothetical much better, future geoid estimates on the circulation uncertainty is also quantified, showing significant hypothetical reductions in the uncertainties of oceanic transport calculations. Full ocean general circulation models could better exploit both existing oceanographic data and future gravity-mission data, but their present use is severely limited by the inability to quantify their error budgets.
Comparing potential recharge estimates from three Land Surface Models across the Western US
NIRAULA, REWATI; MEIXNER, THOMAS; AJAMI, HOORI; RODELL, MATTHEW; GOCHIS, DAVID; CASTRO, CHRISTOPHER L.
2018-01-01
Groundwater is a major source of water in the western US. However, recharge estimates for this region are limited due to the complexity of recharge processes and the challenge of direct observation. Land Surface Models (LSMs) could be a valuable tool for estimating current recharge and projecting changes under future climate change. In this study, simulations from three LSMs (Noah, Mosaic and VIC) obtained from the North American Land Data Assimilation System (NLDAS-2) are used to estimate potential recharge in the western US. Modeled recharge was compared with published recharge estimates for several aquifers in the region. Annual recharge-to-precipitation ratios across the study basins varied from 0.01–15% for Mosaic, 3.2–42% for Noah, and 6.7–31.8% for VIC simulations. Mosaic consistently underestimates recharge across all basins. Noah captures recharge reasonably well in wetter basins but overestimates it in drier basins. VIC slightly overestimates recharge in drier basins and slightly underestimates it in wetter basins. While average annual recharge values vary among the models, the models were consistent in identifying high and low recharge areas in the region, and they agree that recharge occurs predominantly in spring across the region. Overall, our results highlight that LSMs have the potential to capture the spatial and temporal patterns as well as the seasonality of recharge at large scales. Therefore, LSMs (specifically VIC and Noah) can be used as a tool for estimating future recharge rates in data-limited regions. PMID:29618845
Bhatia, Triptish; Gettig, Elizabeth A; Gottesman, Irving I; Berliner, Jonathan; Mishra, N N; Nimgaonkar, Vishwajit L; Deshpande, Smita N
2016-12-01
Schizophrenia (SZ) has an estimated heritability of 64-88%, with the higher values based on twin studies. Conventionally, family history of psychosis is the best individual-level predictor of risk, but reliable risk estimates are unavailable for Indian populations. Genetic, environmental, and epigenetic factors are equally important and should be considered when predicting risk in 'at risk' individuals. Our aim was to estimate risk based on an Indian schizophrenia participant's family history combined with selected demographic factors. To incorporate variables in addition to family history and to stratify risk, we constructed a regression equation that included demographic variables in addition to family history. The equation was tested in two independent Indian samples: (i) an initial sample of SZ participants (N=128) with one sibling or offspring; (ii) a second, independent sample consisting of multiply affected families (N=138 families, with two or more siblings/offspring affected with SZ). The overall estimated risk was 4.31±0.27 (mean±standard deviation). In the initial sample, 19 individuals (14.8%) fell in the high risk group, 75 (58.6%) in the moderate risk group, and 34 (26.6%) in the above-average risk group. In the validation sample, risks were distributed as: high (45%), moderate (38%) and above average (17%). Consistent risk estimates were obtained from both samples using the regression equation. Familial risk can thus be combined with demographic factors to estimate risk for SZ in India. If replicated, the proposed risk stratification may be easier and more realistic for family members. Copyright © 2016. Published by Elsevier B.V.
Estimation of Temporal Gait Parameters Using a Human Body Electrostatic Sensing-Based Method.
Li, Mengxuan; Li, Pengfei; Tian, Shanshan; Tang, Kai; Chen, Xi
2018-05-28
Accurate estimation of gait parameters is essential for obtaining quantitative information on motor deficits in Parkinson's disease and other neurodegenerative diseases, which helps determine disease progression and therapeutic interventions. Due to the demand for high accuracy, unobtrusive measurement methods such as optical motion capture systems and foot pressure plates have been commonly used in clinical environments. However, the high cost of existing lab-based methods greatly hinders their wider usage, especially in developing countries. In this study, we present a low-cost, noncontact, and accurate method for estimating temporal gait parameters by sensing and analyzing the electrostatic field generated by human foot stepping. The proposed method achieved an average 97% accuracy in gait phase detection and was further validated against a foot pressure system in 10 healthy subjects. The two sets of results were compared using the Pearson coefficient r and showed excellent consistency (r = 0.99, p < 0.05). The between-day repeatability of the proposed method was assessed using the intraclass correlation coefficient (ICC) and showed good test-retest reliability (ICC = 0.87, p < 0.01). The proposed method could be an affordable and accurate tool for measuring temporal gait parameters in hospital laboratories and in patients' home environments.
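The agreement analysis used here, correlating one system's gait parameters against a reference system's, can be sketched in a few lines. The stride times below are hypothetical stand-ins for the paper's data; a mean-difference (Bland-Altman-style) bias is added as a common complement to the correlation.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical stride times (s) for the same ten strides measured by the
# electrostatic sensor and by a foot-pressure reference system:
electrostatic = np.array([1.02, 0.98, 1.05, 1.10, 0.95,
                          1.01, 0.99, 1.04, 1.08, 0.97])
pressure      = np.array([1.01, 0.99, 1.06, 1.09, 0.96,
                          1.00, 0.98, 1.05, 1.07, 0.98])

r, p = pearsonr(electrostatic, pressure)  # consistency between the two systems
bias = np.mean(electrostatic - pressure)  # mean difference between methods
```

A high r alone does not rule out a constant offset between systems, which is why reporting the bias (or full Bland-Altman limits) alongside r is good practice for method-comparison studies like this one.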
Lunar gravity derived from long-period satellite motion, a proposed method
NASA Technical Reports Server (NTRS)
Ferrari, A. J.
1971-01-01
A method was devised to determine the spherical harmonic coefficients of the lunar gravity field. The method consists of a two-step data reduction and estimation process. Pseudo-Doppler data were generated simulating two different lunar orbits. The analysis included the perturbing effects of the L1 lunar gravity field, the earth, the sun, and solar radiation pressure. Orbit determinations were performed on these data and long-period orbital elements were obtained. The Kepler element rates from these solutions were used to recover L1 lunar gravity coefficients. Overall results of the experiment show that lunar gravity coefficients can be accurately determined and that the method is dynamically consistent with long-period perturbation theory.
NASA Astrophysics Data System (ADS)
Denis, C.; Ibrahim, A.
Self-consistent parametric earth models are discussed in terms of a flexible numerical code. The density profile of each layer is represented as a polynomial, and values of gravity, mass, mean density, hydrostatic pressure, and moment of inertia are derived. The polynomial representation also allows computation of the first-order flattening of the internal strata of some models, using a Gauss-Legendre quadrature with a rapidly converging iteration technique. Agreement with measured geophysical data is obtained, and an algorithm for estimating the geometric flattening of any equidense surface, identified by its fractional radius, is developed. The program can also be applied in studies of planetary and stellar models.
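The polynomial-density-plus-quadrature approach can be illustrated with the mass integral for a single spherical layer. This is a generic sketch, not the cited code: Gauss-Legendre quadrature is exact for polynomial integrands of sufficiently low degree, which is why it pairs naturally with polynomial density profiles.

```python
import numpy as np

def shell_mass(coeffs, r0, r1, npts=8):
    """Mass of a spherical shell whose density is a polynomial in radius,
        rho(r) = sum_k coeffs[k] * r**k,
    via Gauss-Legendre quadrature of  M = 4*pi * int_{r0}^{r1} rho(r) r^2 dr.
    With npts nodes the rule is exact for integrands of degree <= 2*npts - 1,
    so polynomial densities are integrated exactly (up to rounding)."""
    x, w = np.polynomial.legendre.leggauss(npts)
    r = 0.5 * (r1 - r0) * x + 0.5 * (r1 + r0)   # map [-1, 1] -> [r0, r1]
    rho = np.polyval(list(coeffs)[::-1], r)     # polyval wants high-to-low order
    return 4.0 * np.pi * 0.5 * (r1 - r0) * np.sum(w * rho * r**2)

# Uniform-density check: rho = 1 over a unit sphere gives M = 4*pi/3.
M = shell_mass([1.0], 0.0, 1.0)
```

Summing such shell masses over all layers, and weighting the integrand by r⁴ instead of r², gives the total mass and moment of inertia that the model fits against measured geophysical data.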
Thermodynamically self-consistent theory for the Blume-Capel model.
Grollau, S; Kierlik, E; Rosinberg, M L; Tarjus, G
2001-04-01
We use a self-consistent Ornstein-Zernike approximation to study the Blume-Capel ferromagnet on three-dimensional lattices. The correlation functions and the thermodynamics are obtained from the solution of two coupled partial differential equations. The theory provides a comprehensive and accurate description of the phase diagram in all regions, including the wing boundaries in a nonzero magnetic field. In particular, the coordinates of the tricritical point are in very good agreement with the best estimates from simulation or series expansion. Numerical and analytical analysis strongly suggest that the theory predicts a universal Ising-like critical behavior along the lambda line and the wing critical lines, and a tricritical behavior governed by mean-field exponents.
Electron distributions in the plasma sheet boundary layer - Time-of-flight effects
NASA Technical Reports Server (NTRS)
Onsager, T. G.; Thomsen, M. F.; Gosling, J. T.; Bame, S. J.
1990-01-01
The electron edge of the plasma sheet boundary layer lies lobeward of the ion edge. Measurements obtained near the electron edge of the boundary layer reveal low-speed cutoffs for earthward- and tailward-flowing electrons. These cutoffs progress to lower speeds with deeper penetration into the boundary layer, and are consistently lower for the earthward-directed electrons than for the tailward-directed electrons. The cutoffs and their variation with distance from the edge of the boundary layer can be consistently interpreted in terms of a time-of-flight effect on recently reconnected magnetic field lines. The observed cutoff speeds are used to estimate the downtail location of the reconnection site.
Robust versus consistent variance estimators in marginal structural Cox models.
Enders, Dirk; Engel, Susanne; Linder, Roland; Pigeot, Iris
2018-06-11
In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated, leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications, and thus information on the performance of the consistent estimator, are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the two estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes. Copyright © 2018 John Wiley & Sons, Ltd.
Demand for health care in Denmark: results of a national sample survey using contingent valuation.
Gyldmark, M; Morrison, G C
2001-10-01
In this paper we use willingness to pay (WTP) to elicit values for private insurance covering treatment for four different health problems. By way of obtaining these values, we test the viability of the contingent valuation method (CVM) and econometric techniques, respectively, as means of eliciting and analysing values from the general public. WTP responses from a Danish national sample survey, which was designed in accordance with existing guidelines, are analysed in terms of consistency and validity checks. Large numbers of zero responses are common in WTP studies, and are found here; therefore, the Heckman selectivity model and log-transformed OLS are employed. The selectivity model is rejected, but test results indicate that the lognormal model yields efficient and unbiased estimates. The results give confidence in the WTP estimates obtained and, more generally, in CVM as a means of valuing publicly provided goods and in econometrics as a tool for analysing WTP results containing many zero responses.
Hybrid active contour model for inhomogeneous image segmentation with background estimation
NASA Astrophysics Data System (ADS)
Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun
2018-03-01
This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
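The background-subtraction idea behind the difference image can be sketched with a simple linear (mean) filter: the filtered image serves as the slowly varying background estimate, and subtracting it flattens the inhomogeneity. The filter type and window size below are illustrative choices, not the paper's exact estimator, which updates the background dynamically during curve evolution.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def difference_image(img, size=31):
    """Estimate a slowly varying background by mean filtering and subtract
    it from the original image.  `size` sets the spatial scale below which
    structure is treated as foreground (an illustrative choice)."""
    background = uniform_filter(img.astype(float), size=size)
    return img - background, background

# Synthetic inhomogeneous image: a bright square on a smooth illumination ramp.
y, x = np.mgrid[0:128, 0:128]
img = 0.5 * x / 127.0          # linear illumination ramp (inhomogeneity)
img[56:72, 56:72] += 1.0       # foreground object
diff, bg = difference_image(img)
```

In the difference image the ramp is largely cancelled while the object stands out, which is why fitting global region statistics to it is robust to both the inhomogeneous background and the initial contour location.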
Elastic-wave velocity in marine sediments with gas hydrates: Effective medium modeling
Helgerud, M.B.; Dvorkin, J.; Nur, A.; Sakai, A.; Collett, T.
1999-01-01
We offer a first-principle-based effective medium model for elastic-wave velocity in unconsolidated, high porosity, ocean bottom sediments containing gas hydrate. The dry sediment frame elastic constants depend on porosity, elastic moduli of the solid phase, and effective pressure. Elastic moduli of saturated sediment are calculated from those of the dry frame using Gassmann's equation. To model the effect of gas hydrate on sediment elastic moduli we use two separate assumptions: (a) hydrate modifies the pore fluid elastic properties without affecting the frame; (b) hydrate becomes a component of the solid phase, modifying the elasticity of the frame. The goal of the modeling is to predict the amount of hydrate in sediments from sonic or seismic velocity data. We apply the model to sonic and VSP data from ODP Hole 995 and obtain hydrate concentration estimates from assumption (b) consistent with estimates obtained from resistivity, chlorinity and evolved gas data. Copyright 1999 by the American Geophysical Union.
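The fluid-substitution step of the model, computing saturated moduli from the dry frame, uses Gassmann's equation, which can be written compactly. The input values below are generic illustrations (quartz-like mineral, water-like fluid), not the ODP Hole 995 parameters.

```python
def gassmann_ksat(k_dry, k_mineral, k_fluid, phi):
    """Gassmann's equation for the bulk modulus of fluid-saturated rock:
        K_sat = K_dry + (1 - K_dry/K_min)^2
                        / (phi/K_fl + (1 - phi)/K_min - K_dry/K_min^2)
    Moduli in Pa, porosity as a fraction.  The shear modulus is unchanged
    by the pore fluid in Gassmann theory."""
    a = (1.0 - k_dry / k_mineral) ** 2
    b = phi / k_fluid + (1.0 - phi) / k_mineral - k_dry / k_mineral**2
    return k_dry + a / b

# Illustrative high-porosity sediment values:
k_sat = gassmann_ksat(k_dry=2.0e9, k_mineral=36.0e9, k_fluid=2.25e9, phi=0.5)
```

Assumption (a) in the abstract changes k_fluid (hydrate stiffens the pore fluid), while assumption (b) changes k_dry and k_mineral (hydrate stiffens the frame); either way the saturated moduli, and hence the predicted velocities, respond to hydrate concentration.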
Measurement of Blood Pressure Using an Arterial Pulsimeter Equipped with a Hall Device
Lee, Sang-Suk; Nam, Dong-Hyun; Hong, You-Sik; Lee, Woo-Beom; Son, Il-Ho; Kim, Keun-Ho; Choi, Jong-Gu
2011-01-01
To measure precise blood pressure (BP) and pulse rate without using a cuff, we have developed an arterial pulsimeter consisting of a small, portable apparatus incorporating a Hall device. Regression analysis of the pulse wave measured during testing of the arterial pulsimeter was conducted using two equations of the BP algorithm. The estimated values of BP obtained by the cuffless arterial pulsimeter over 5 s were compared with values obtained using electronic or liquid mercury BP meters. The standard deviation between the estimated values and the measured values for systolic and diastolic BP were 8.3 and 4.9, respectively, which are close to the range of values of the BP International Standard. Detailed analysis of the pulse wave measured by the cuffless radial artery pulsimeter by detecting changes in the magnetic field can be used to develop a new diagnostic algorithm for BP, which can be applied to new medical apparatus such as the radial artery pulsimeter. PMID:22319381
NASA Astrophysics Data System (ADS)
Dechant, B.; Ryu, Y.; Jiang, C.; Yang, K.
2017-12-01
Solar-induced chlorophyll fluorescence (SIF) is rapidly becoming an important tool to remotely estimate terrestrial gross primary productivity (GPP) at large spatial scales. Many findings, however, are based on empirical relationships between SIF and GPP that have been found to be dependent on plant functional types. Therefore, combining model-based analysis with observations is crucial to improve our understanding of SIF-GPP relationships. So far, most model-based results were based on SCOPE, a complex ecophysiological model with explicit description of canopy layers and a large number of parameters that may not be easily obtained reliably on large scales. Here, we report on our efforts to incorporate SIF into a two-big-leaf (sunlit and shaded) process-based model (BESS) that is suitable for obtaining its inputs entirely from satellite products. We examine whether the SIF-GPP relationships are consistent with the findings from SCOPE simulations and investigate whether incorporating the SIF signal into BESS can help improve GPP estimation. A case study in a rice paddy is presented.
A new estimation of HD/2H2 at high redshift using the spectrum of the quasar J 2123-0050
NASA Astrophysics Data System (ADS)
Klimenko, V. V.; Balashev, S. A.; Ivanchik, A. V.; Varshalovich, D. A.
2015-12-01
We present a new analysis of the spectrum of the quasar J 2123-0050 obtained with VLT/UVES. The H2/HD absorption system at z = 2.059 was analysed. This system consists of two subsystems, at zA = 2.05933 and zB = 2.05955. The HD lines are detected only in subsystem A, with a column density of log N = 13.87 ± 0.06. We have determined the H2 column density in this subsystem, log N = 17.93 ± 0.01, which is about three times larger than the estimate derived earlier from an analysis of the quasar spectrum obtained with Keck/HIRES [1]. The derived ratio HD/2H2 = (4.28 ± 0.60) × 10^-5 is the largest measured in quasar spectra; nevertheless, it coincides with the primordial deuterium abundance within the 2σ error. Additionally, we have found some evidence of a partial covering effect in the H2 system.
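The quoted ratio follows directly from the two fitted column densities, with the uncertainty from first-order propagation of the log-space errors. The small mismatch with the published central value (4.36 vs 4.28 in units of 10⁻⁵) presumably reflects rounding of the quoted log N values.

```python
import math

# Fitted column densities: log N(HD) = 13.87 +/- 0.06, log N(H2) = 17.93 +/- 0.01.
log_hd, s_hd = 13.87, 0.06
log_h2, s_h2 = 17.93, 0.01

# HD / 2H2 ratio and its 1-sigma uncertainty (linear error propagation:
# sigma_ratio = ratio * ln(10) * sqrt(s_hd^2 + s_h2^2)).
ratio = 10.0 ** (log_hd - log_h2) / 2.0
sigma = ratio * math.log(10) * math.hypot(s_hd, s_h2)
```

The HD error dominates the budget since it is six times larger than the H2 error and the two add in quadrature.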
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nurujjaman, Md.; Narayanan, Ramesh; Iyengar, A. N. Sekar
2009-10-15
Continuous wavelet transform (CWT) based time-scale and multifractal analyses have been carried out on the anode-glow-related nonlinear floating potential fluctuations in a hollow cathode glow discharge plasma. CWT has been used to obtain the contour and ridge plots. Scale shift (or, inversely, frequency shift), which is a typical nonlinear behavior, has been detected from the undulating contours. From the ridge plots, we have identified the presence of nonlinearity and the degree of chaoticity. Using the wavelet transform modulus maxima technique we have obtained the multifractal spectrum for the fluctuations at different discharge voltages, and the spectrum was observed to become a monofractal for periodic signals. These multifractal spectra were also used to estimate quantities such as the correlation and fractal dimensions, the degree of multifractality, and complexity parameters. These estimates were found to be consistent with nonlinear time series analysis.
NASA Astrophysics Data System (ADS)
Gao, K.; van Dommelen, J. A. W.; Göransson, P.; Geers, M. G. D.
2015-09-01
In this paper, a homogenization method is proposed to obtain the parameters of Biot's poroelastic theory from a multiscale perspective. It is assumed that the behavior of a macroscopic material point can be captured through the response of a microscopic Representative Volume Element (RVE) consisting of both a solid skeleton and a gaseous fluid. The macroscopic governing equations are assumed to be Biot's poroelastic equations, and the RVE is governed by the conservation of linear momentum and the adopted linear constitutive laws under isothermal conditions. With boundary conditions derived from the macroscopic solid displacement and fluid pressure, the homogenized solid stress and fluid displacement are obtained based on energy consistency. This homogenization framework offers an approach to obtain Biot's parameters directly from the response of the RVE in the regime of Darcy's flow, where the pressure gradient dominates. A numerical experiment is performed in the form of a sound absorption test on a porous material with an idealized partially open microstructure, described by Biot's equations whose parameters are obtained through the proposed homogenization approach. The result is evaluated by comparison with Direct Numerical Simulations (DNS), showing a superior performance of this approach compared to an alternative semi-phenomenological model for estimating Biot's parameters of the studied porous material.
Tang, Cuong Q; Humphreys, Aelys M; Fontaneto, Diego; Barraclough, Timothy G; Paradis, Emmanuel
2014-01-01
Coalescent-based species delimitation methods combine population genetic and phylogenetic theory to provide an objective means for delineating evolutionarily significant units of diversity. The generalised mixed Yule coalescent (GMYC) and the Poisson tree process (PTP) are methods that use ultrametric (GMYC or PTP) or non-ultrametric (PTP) gene trees as input, intended for use mostly with single-locus data such as DNA barcodes. Here, we assess how robust the GMYC and PTP are to different phylogenetic reconstruction and branch smoothing methods. We reconstruct over 400 ultrametric trees using up to 30 different combinations of phylogenetic and smoothing methods and perform over 2000 separate species delimitation analyses across 16 empirical data sets. We then assess how variable diversity estimates are, in terms of richness and identity, with respect to species delimitation, phylogenetic and smoothing methods. The PTP method generally generates diversity estimates that are more robust to different phylogenetic methods. The GMYC is more sensitive, but provides consistent estimates for BEAST trees. The lower consistency of GMYC estimates is likely a result of differences among gene trees introduced by the smoothing step. Unresolved nodes (real anomalies or methodological artefacts) affect both GMYC and PTP estimates, but have a greater effect on GMYC estimates. Branch smoothing is a difficult step and perhaps an underappreciated source of bias that may be widespread among studies of diversity and diversification. Nevertheless, careful choice of phylogenetic method does produce equivalent PTP and GMYC diversity estimates. We recommend simultaneous use of the PTP model with any model-based gene tree (e.g. RAxML) and GMYC approaches with BEAST trees for obtaining species hypotheses. PMID:25821577
Estimating normal mixture parameters from the distribution of a reduced feature vector
NASA Technical Reports Server (NTRS)
Guseman, L. F.; Peters, B. C., Jr.; Swasdee, M.
1976-01-01
A FORTRAN computer program was written and tested. The measurements consisted of 1000 randomly chosen vectors representing 1, 2, 3, 7, and 10 subclasses in equal portions. In the first experiment, the vectors are computed from the input means and covariances. In the second experiment, the vectors are 16 channel measurements. The starting covariances were constructed as if there were no correlation between separate passes. The biases obtained from each run are listed.
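The normal-mixture estimation described above is typically done with the EM algorithm. A one-dimensional Python sketch (numpy, synthetic data; not the original FORTRAN program, and the quantile-based initialisation is my own choice to keep it deterministic) shows the structure:

```python
import numpy as np

def gmm_em(x, k, iters=200):
    # EM for a one-dimensional Gaussian mixture: weights, means, variances.
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread-out initial means
    var = np.full(k, np.var(x))
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation of the mixture parameters.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 600), rng.normal(3.0, 0.5, 400)])
w, mu, var = gmm_em(x, 2)
```

The multivariate case replaces the scalar variances with covariance matrices but keeps the same E/M alternation.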
Chiggiato, Jacopo; Zavatarelli, Marco; Castellari, Sergio; Deserti, Marco
2005-12-15
Surface heat fluxes of the Adriatic Sea are estimated for the period 1998-2001 through bulk formulae, with the goals of assessing the uncertainties in their estimation and of describing their interannual variability. In addition, a comparison with observations is conducted. We computed the components of the sea surface heat budget using two different operational meteorological data sets as inputs: the ECMWF operational analysis and the regional limited-area model LAMBO operational forecast. Both results are consistent with previous long-term climatologies and short-term analyses in the literature. In both cases the Adriatic Sea loses 26 W/m2 on average, in line with published assessments. We then compared the radiative components of the heat budget with observations collected on offshore platforms and at one coastal station. For shortwave radiation, the results show a slight overestimation on an annual basis: 172 W/m2 when using ECMWF data and 169 W/m2 when using LAMBO data. Applying either Schiano's or Gilman and Garrett's corrections brings the values even closer. The comparison is harder to assess for longwave radiation, where relative errors are of the order of 10-20%.
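The sign of the budget follows from the standard decomposition of the net surface heat flux into absorbed shortwave minus the three loss terms. A minimal sketch, with illustrative component magnitudes chosen only so the budget lands at the reported ~26 W/m2 net loss (these are not the paper's component values):

```python
def net_heat_flux(shortwave, longwave, sensible, latent):
    # Net surface heat budget, positive = ocean heat gain:
    # absorbed shortwave minus net longwave, sensible, and latent losses.
    return shortwave - longwave - sensible - latent

# Illustrative annual-mean magnitudes in W/m^2 (invented for this example).
q = net_heat_flux(shortwave=170, longwave=80, sensible=16, latent=100)
# q < 0 indicates a net heat loss, the order reported for the Adriatic.
```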
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separate groups and to estimate the DOA associated with each source separately. Motivated by joint parameter estimation, we adopt the expectation-maximization (EM) algorithm; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also achieve optimal estimation. Directional ambiguity is addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived to characterize the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323
Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.
2014-01-01
There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
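The Kalman-filter fusion idea above can be sketched with a scalar random-walk KF that weights each algorithm's per-frame F0 estimate by its noise variance. Everything here is synthetic and the process-noise value is an assumption, not the paper's tuned KF:

```python
import numpy as np

def kf_fuse_f0(meas, meas_var, q=4.0):
    # Fuse per-frame F0 measurements (frames x algorithms) with a scalar
    # random-walk Kalman filter; meas_var holds each algorithm's noise variance.
    x = np.average(meas[0], weights=1 / meas_var)   # initial state
    p = 100.0                                       # initial uncertainty
    fused = []
    for z in meas:
        p += q                          # predict: random-walk process noise
        for zi, r in zip(z, meas_var):  # sequential update, one per algorithm
            k = p / (p + r)
            x += k * (zi - x)
            p *= 1 - k
        fused.append(x)
    return np.array(fused)

rng = np.random.default_rng(0)
true_f0 = 120 + 5 * np.sin(np.linspace(0, 3, 200))    # slowly varying F0 (Hz)
meas_var = np.array([1.0, 4.0, 25.0])                 # per-algorithm accuracy
meas = true_f0[:, None] + rng.normal(0, np.sqrt(meas_var), (200, 3))
fused = kf_fuse_f0(meas, meas_var)
```

Because the filter both pools the three measurements per frame and smooths over time, the fused track is more accurate than the best single "algorithm" in this toy setup.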
Tropical forest plantation biomass estimation using RADARSAT-SAR and TM data of south china
NASA Astrophysics Data System (ADS)
Wang, Chenli; Niu, Zheng; Gu, Xiaoping; Guo, Zhixing; Cong, Pifu
2005-10-01
Forest biomass is one of the most important parameters in global carbon stock models, yet it can only be estimated with great uncertainty. Remote sensing, and SAR data in particular, offers the possibility of relatively accurate forest biomass estimation at a lower cost than inventory in tropical forests. The goal of this research was to compare the sensitivity of forest biomass to Landsat TM and RADARSAT-SAR data, and to assess the efficiency of NDVI, EVI and other vegetation indices in estimating forest biomass, based on field survey data and GIS in south China. Based on vegetation indices and factor analysis, multiple regression models and neural networks were developed to estimate biomass for each plantation species. For each species, the best agreement between predicted biomass and that measured in the field survey was obtained with a neural network developed for that species; the relationship between predicted and measured biomass derived from vegetation indices differed between species. This study concludes that single bands and many vegetation indices are only weakly correlated with the selected forest biomass. The RADARSAT-SAR backscatter coefficient has a relatively good logarithmic correlation with forest biomass, but neither TM spectral bands nor vegetation indices alone are sufficient to establish an efficient model for biomass estimation, owing to the saturation of bands and vegetation indices; multiple regression models that combine spectral and environmental variables improve estimation performance. Compared with TM, relatively good estimates can be achieved with RADARSAT-SAR, but both had limitations for tropical forest biomass estimation. The estimates obtained are not accurate enough for forest management at the stand level; however, the approximate volume estimates derived by the method can be useful in areas where no other forest information is available.
Therefore, this paper provides a better understanding of the relationships between remote sensing data and the forest stand parameters used in forest parameter estimation models.
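The index-plus-regression workflow can be sketched in a few lines: compute NDVI from red/NIR reflectance, add a log-backscatter term (echoing the logarithmic SAR relationship reported above), and fit a multiple regression. All coefficients and data here are invented for illustration, not the study's values:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index from NIR and red reflectance.
    return (nir - red) / (nir + red)

rng = np.random.default_rng(0)
n = 120
red = rng.uniform(0.03, 0.12, n)
nir = rng.uniform(0.25, 0.55, n)
backscatter = rng.uniform(-12, -6, n)            # RADARSAT-like sigma0 (dB)

# Synthetic "biomass" with a logarithmic backscatter term; the 40/15
# coefficients are invented so the regression has something to recover.
biomass = (40 * ndvi(nir, red)
           + 15 * np.log10(backscatter + 20)
           + rng.normal(0, 1, n))

# Multiple regression: biomass ~ NDVI + log-backscatter + intercept.
X = np.column_stack([ndvi(nir, red), np.log10(backscatter + 20), np.ones(n)])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
pred = X @ coef
r2 = 1 - ((biomass - pred) ** 2).sum() / ((biomass - biomass.mean()) ** 2).sum()
```

The saturation problem mentioned in the abstract is exactly what such a linear-in-index model cannot capture at high biomass, which is why the study turns to neural networks.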
NASA Astrophysics Data System (ADS)
Lambert, S. B.; Ziegler, Y.; Rosat, S.; Bizouard, C.
2017-12-01
Nutation time series derived from very long baseline interferometry (VLBI) and time-varying surface gravity data recorded by superconducting gravimeters (SG) have long been used separately to probe the Earth's interior via estimation of the free core and inner core resonance effects on nutation or tidal gravity. The results obtained from these two techniques have recently been shown to be consistent, making it relevant to combine VLBI and SG observables and to estimate Earth's interior parameters in a single inversion. We present here the results of combining nutation and surface gravity time series to improve estimates of the Earth's core and inner core resonant frequencies. We use VLBI nutation time series spanning 1984-2016, derived by several analysis centers affiliated with the International VLBI Service for Geodesy and Astrometry, together with surface gravity data from about 15 SG stations. We address the resonance model used to describe the Earth's interior response to tidal excitation; the data preparation, consisting of error recalibration and amplitude fitting for the nutation data, and of processing the SG time-varying gravity to remove gaps, spikes, steps and other disturbances; and the subsequent tidal analysis with the ETERNA 3.4 software package. New estimates of the resonant periods are proposed and correlations between the parameters are investigated.
Resting State Network Estimation in Individual Subjects
Hacker, Carl D.; Laumann, Timothy O.; Szrama, Nicholas P.; Baldassarre, Antonello; Snyder, Abraham Z.
2014-01-01
Resting-state functional magnetic resonance imaging (fMRI) has been used to study brain networks associated with both normal and pathological cognitive function. The objective of this work is to reliably compute resting state network (RSN) topography in single participants. We trained a supervised classifier (multi-layer perceptron; MLP) to associate blood oxygen level dependent (BOLD) correlation maps corresponding to pre-defined seeds with specific RSN identities. Hard classification of maps obtained from a priori seeds was highly reliable across new participants. Interestingly, continuous estimates of RSN membership retained substantial residual error. This result is consistent with the view that RSNs are hierarchically organized, and therefore not fully separable into spatially independent components. After training on a priori seed-based maps, we propagated voxel-wise correlation maps through the MLP to produce estimates of RSN membership throughout the brain. The MLP generated RSN topography estimates in individuals consistent with previous studies, even in brain regions not represented in the training data. This method could be used in future studies to relate RSN topography to other measures of functional brain organization (e.g., task-evoked responses, stimulation mapping, and deficits associated with lesions) in individuals. The multi-layer perceptron was directly compared to two alternative voxel classification procedures, specifically, dual regression and linear discriminant analysis; the perceptron generated more spatially specific RSN maps than either alternative. PMID:23735260
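The supervised-classification step above (correlation map in, RSN label out) can be sketched with a tiny one-hidden-layer perceptron in numpy. The two-class, 50-feature setup is a toy stand-in for voxel-wise BOLD correlation maps, not the authors' trained MLP:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "correlation maps": 50-dim vectors drawn around one of two class
# templates (real RSN maps are voxel-wise and have many more classes).
templates = rng.normal(0, 1, (2, 50))
X = np.vstack([t + 0.5 * rng.normal(0, 1, (200, 50)) for t in templates])
y = np.repeat([0, 1], 200)
Y = np.eye(2)[y]                      # one-hot targets

# One-hidden-layer perceptron with softmax output, plain gradient descent.
W1 = rng.normal(0, 0.1, (50, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 2));  b2 = np.zeros(2)
for _ in range(300):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    g = (p - Y) / len(X)              # softmax cross-entropy gradient
    gh = (g @ W2.T) * (1 - h ** 2)    # backprop through tanh
    W2 -= 0.5 * h.T @ g;  b2 -= 0.5 * g.sum(axis=0)
    W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(axis=0)
acc = float((p.argmax(axis=1) == y).mean())
```

In the study, the continuous softmax outputs (rather than the hard argmax) are what carry the graded RSN-membership estimates.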
Fourier rebinning and consistency equations for time-of-flight PET planograms
Li, Yusheng; Defrise, Michel; Matej, Samuel; Metzler, Scott D
2016-01-01
Due to their unique geometry, dual-panel PET scanners have many advantages in dedicated breast imaging and on-board imaging applications, since the compact scanners can be combined with other imaging and treatment modalities. The major challenges of dual-panel PET imaging are the limited-angle problem and data truncation, which can cause artifacts due to incomplete data sampling. Time-of-flight (TOF) information is a promising means of reducing these artifacts. The TOF planogram is the native data format for dual-panel TOF PET scanners, and the non-TOF planogram is the 3D extension of the linogram. TOF planograms are five-dimensional while the objects are three-dimensional, so there are two degrees of redundancy. In this paper, we derive consistency equations and Fourier-based rebinning algorithms to provide a complete understanding of the rich structure of fully 3D TOF planograms. We first derive two consistency equations and John's equation for 3D TOF planograms. By taking Fourier transforms, we obtain two Fourier consistency equations and the Fourier-John equation, which are the duals of the consistency equations and John's equation, respectively. We then solve the Fourier consistency equations and the Fourier-John equation using the method of characteristics. The two degrees of entangled redundancy of the 3D TOF data can be explicitly elicited and exploited by the solutions along the characteristic curves. As special cases of the general solutions, we obtain Fourier rebinning and consistency equations (FORCEs), and thus a complete scheme to convert among the different types of PET planograms: 3D TOF, 3D non-TOF, 2D TOF and 2D non-TOF. The FORCEs can be used as Fourier-based rebinning algorithms for TOF-PET data reduction, as inverse rebinnings for designing fast projectors, or as consistency conditions for estimating missing data. As a byproduct, we show that the two consistency equations are necessary and sufficient for 3D TOF planograms. 
Finally, we give numerical examples of a fast 2D TOF planogram projector and of Fourier-based rebinning for 2D TOF planograms using the FORCEs, to show the efficacy of the Fourier-based solutions. PMID:28255191
Leal Neto, Viriato; Vieira, José Wilson; Lima, Fernando Roberto de Andrade
2014-01-01
This article presents a way to obtain dose estimates for patients undergoing radiotherapy, based on the analysis of regions of interest in nuclear medicine images. A software tool called DoRadIo (Dosimetria das Radiações Ionizantes [Ionizing Radiation Dosimetry]) was developed to receive information about source organs and target organs, generating graphical and numerical results. The nuclear medicine images used in the present study were obtained from catalogs provided by medical physicists. The simulations were performed with computational exposure models consisting of voxel phantoms coupled with the EGSnrc Monte Carlo code. The software was developed with Microsoft Visual Studio 2010 and the Windows Presentation Foundation project template for the C# programming language. With these tools, the authors obtained the file for optimizing Monte Carlo simulations with EGSnrc; the organization and compaction of dosimetry results for all radioactive sources; the selection of regions of interest; the evaluation of grayscale intensity in regions of interest; the file of weighted sources; and, finally, all the charts and numerical results. The user interface may be adapted for use in clinical nuclear medicine as a computer-aided tool to estimate the administered activity.
Compact Groups analysis using weak gravitational lensing II: CFHT Stripe 82 data
NASA Astrophysics Data System (ADS)
Chalela, Martín; Gonzalez, Elizabeth Johana; Makler, Martín; Lambas, Diego García; Pereira, Maria E. S.; O'mill, Ana; Shan, HuanYuan
2018-06-01
In this work we present a lensing study of Compact Groups (CGs) using data obtained from the high-quality Canada-France-Hawaii Telescope Stripe 82 Survey. Using stacking techniques we obtain the average density contrast profile. We analyse the dependence of the lensing signal on the groups' surface brightness and morphological content, for CGs in the redshift range z = 0.2 - 0.4. We obtain a larger lensing signal for CGs with higher surface brightness, probably due to their lower contamination by interlopers. We also find a strong dependence of the lensing signal on the group concentration parameter, with the most concentrated quintile showing a significant lensing signal, consistent with an isothermal sphere with σV = 336 ± 28 km/s and an NFW profile with R200 = 0.60 ± 0.05 h_{70}^{-1}Mpc. We also compare the lensing results with dynamical estimates, finding good agreement with the lensing determinations for CGs with higher surface brightness and higher concentration indexes. On the other hand, CGs that are more contaminated by interlopers show larger dynamical dispersions, since interlopers bias dynamical estimates toward larger values, although the lensing signal is weakened.
[Using neural networks based template matching method to obtain redshifts of normal galaxies].
Xu, Xin; Luo, A-li; Wu, Fu-chao; Zhao, Yong-heng
2005-06-01
Galaxies can be divided into two classes: normal galaxies (NG) and active galaxies (AG). To determine NG redshifts, an automatic and effective method is proposed in this paper, which consists of the following three main steps: (1) From the normal galaxy template, two sets of samples are simulated, one with redshifts of 0.0-0.3, the other of 0.3-0.5; PCA is then used to extract the main components, and the training samples are projected onto the main component subspace to obtain characteristic spectra. (2) The characteristic spectra are used to train a probabilistic neural network to obtain a Bayes classifier. (3) An unknown real NG spectrum is first input to this Bayes classifier to determine the possible redshift range; template matching is then invoked to locate the redshift value within the estimated range. Compared with the traditional template matching technique over an unconstrained range, the proposed method not only halves the computational load but also increases the estimation accuracy. As a result, the proposed method is particularly useful for the automatic processing of spectra produced by large-scale sky survey projects.
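The template-matching step (3) exploits the fact that a redshift is a constant shift log10(1+z) on a log-wavelength grid. A Python sketch with an invented single-line template (not the survey's NG template or the paper's PCA/PNN pipeline) shows the search:

```python
import numpy as np

# Rest-frame template on a log10-wavelength grid; the Gaussian "emission
# line" at log-lambda 3.60 is invented for illustration.
loglam = np.linspace(3.55, 3.95, 2000)
template = np.exp(-0.5 * ((loglam - 3.60) / 0.002) ** 2)

def redshift_spectrum(template, loglam, z):
    # A redshift is a constant shift log10(1+z) on a log-wavelength grid.
    return np.interp(loglam, loglam + np.log10(1 + z), template)

z_true = 0.18
observed = redshift_spectrum(template, loglam, z_true)
observed += np.random.default_rng(0).normal(0, 0.05, loglam.size)

# Scan trial redshifts within the range the classifier would supply
# (here 0.0-0.3) and pick the best cross-correlation score.
trial_z = np.arange(0.0, 0.30, 0.001)
scores = [np.dot(observed, redshift_spectrum(template, loglam, z))
          for z in trial_z]
z_hat = trial_z[int(np.argmax(scores))]
```

Restricting `trial_z` to the classifier's predicted range is exactly what halves the computational load in the proposed method.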
Hidden Markov analysis of mechanosensitive ion channel gating.
Khan, R Nazim; Martinac, Boris; Madsen, Barry W; Milne, Robin K; Yeo, Geoffrey F; Edeson, Robert O
2005-02-01
Patch clamp data from the large conductance mechanosensitive channel (MscL) in E. coli were studied with the aim of developing a strategy for statistical analysis based on hidden Markov models (HMMs) and determining the number of conductance levels of the channel, together with the mean current, mean dwell time and equilibrium probability of occupancy for each level. The models incorporated state-dependent white noise and moving average adjustment for filtering, with maximum likelihood parameter estimates obtained using an EM (expectation-maximisation) based iteration. Adjustment for filtering was included because the electronic filter used in recording could be expected to have a major effect on the obviously brief sojourns at intermediate conductance levels. Preliminary data analysis revealed that the brevity of intermediate-level sojourns caused difficulties in assigning data points to levels, as a result of over-estimation of noise variances. When reasonable constraints were placed on these variances using the better-determined noise variances for the closed and fully open levels, idealisation anomalies were eliminated. Nevertheless, simulations suggested that mean sojourn times for the intermediate levels were still considerably over-estimated, and that recording bandwidth was a major limitation; improved results were obtained with higher-bandwidth data (10 kHz bandwidth, sampled at 25 kHz). The simplest model consistent with these data had four open conductance levels, the intermediate levels being approximately 20%, 51% and 74% of fully open. The mean lifetime at the fully open level was about 1 ms; estimates for the three intermediate levels were 54-92 μs, probably still over-estimates.
Refractive indices used by the Haag-Streit Lenstar to calculate axial biometric dimensions.
Suheimat, Marwan; Verkicharla, Pavan K; Mallen, Edward A H; Rozema, Jos J; Atchison, David A
2015-01-01
To estimate the refractive indices used by the Lenstar biometer to translate measured optical path lengths into geometrical path lengths within the eye. Axial lengths of model eyes were determined using the IOLMaster and Lenstar biometers; comparing those lengths gave an overall eye refractive index estimate for the Lenstar. Using the Lenstar Graphical User Interface, we noticed that boundaries between media could be manipulated, introducing opposite changes in the optical path lengths on either side of the boundary. The ratios of those changes were combined with the overall eye refractive index to estimate separate refractive indices. Furthermore, Haag-Streit provided us with a template to obtain 'air thicknesses' to compare with geometrical distances. The axial length estimates obtained using the IOLMaster and the Lenstar agreed to within 0.01 mm. Estimates of the group refractive indices used in the Lenstar were 1.340, 1.341, 1.415, and 1.354 for the cornea, aqueous, lens, and overall eye, respectively. Those refractive indices did not match those of schematic eyes, but were close in the cases of the aqueous and lens. Linear equations relating air thicknesses to geometrical thicknesses were consistent with our findings. The Lenstar uses different refractive indices for different ocular media. Some of the refractive indices, such as that for the cornea, are not physiological; therefore, it is likely that the calibrations in the instrument correspond to instrument-specific corrections and not to the real optical path lengths. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
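The conversion at the heart of this study is simply OPL = n_group × geometrical thickness. A minimal sketch using the group indices quoted above (the 32.50 mm example optical path length is invented, not a value from the paper):

```python
# Optical path length (OPL) = group refractive index * geometrical thickness,
# so geometrical thickness = OPL / n_group. Indices are those quoted in the
# text for the Lenstar.
N_GROUP = {"cornea": 1.340, "aqueous": 1.341, "lens": 1.415, "eye": 1.354}

def geometrical_thickness(opl_mm, medium):
    # Convert a measured optical path length (mm) to geometrical length (mm).
    return opl_mm / N_GROUP[medium]

# Whole-eye example: a 32.50 mm optical axial length maps to about 24.00 mm.
axial = geometrical_thickness(32.50, "eye")
```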
Cancer incidence attributable to excess body weight in Alberta in 2012
Brenner, Darren R.; Poirier, Abbey E.; Grundy, Anne; Khandwala, Farah; McFadden, Alison; Friedenreich, Christine M.
2017-01-01
Background: Excess body weight has been consistently associated with colorectal, breast, endometrial, esophageal, gall bladder, pancreatic and kidney cancers. The objective of this analysis was to estimate the proportion of total and site-specific cancers attributable to excess body weight in adults in Alberta in 2012. Methods: We estimated the proportions of attributable cancers using the population attributable risk. Risk estimates were obtained from recent meta-analyses, and exposure prevalence estimates were obtained from the Canadian Community Health Survey. People with a body mass index of 25.00-29.99 kg/m2 and of 30 kg/m2 or more were categorized as overweight and obese, respectively. Results: About 14%-47% of men and 9%-35% of women in Alberta were classified as either overweight or obese; the proportion increased with increasing age for both sexes. We estimate that roughly 17% and 12% of obesity-related cancers among men and women, respectively, could be attributed to excess body weight in Alberta in 2012. The heaviest absolute burden in terms of number of cases was seen for breast cancer among women and for colorectal cancer among men. Overall, about 5% of all cancers in adults in Alberta in 2012 were estimated to be attributable to excess body weight in 2000-2003. Interpretation: Excess body weight contributes to a substantial proportion of the cancers associated with overweight and obesity annually in Alberta. Strategies to improve energy balance and reduce the proportion of obese and overweight Albertans may have a notable impact on cancer incidence in the future. PMID:28455439
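The population attributable risk calculation used here is Levin's formula, extended to multiple exposure categories (overweight and obese). A minimal sketch with illustrative inputs (not Alberta's prevalences or the meta-analytic relative risks):

```python
def paf(prevalences, relative_risks):
    # Population attributable fraction for a multi-category exposure via
    # Levin's formula: sum(p_i*(RR_i-1)) / (1 + sum(p_i*(RR_i-1))).
    excess = sum(p * (rr - 1) for p, rr in zip(prevalences, relative_risks))
    return excess / (1 + excess)

# Illustrative inputs: 30% overweight with RR 1.2, 20% obese with RR 1.5
# for some obesity-related cancer site (numbers invented for this example).
fraction = paf([0.30, 0.20], [1.2, 1.5])
```

The attributable case count for a site is then simply this fraction times the observed incidence for that site.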
Noninvasive measurement of beat-to-beat arterial blood pressure by the Korotkoff sound delay time.
Xiang, Haiyan; Liu, Yanyong; Li, Yinhua; Qin, Yufei; Yu, Mengsun
2012-02-01
To propose a novel noninvasive beat-to-beat arterial blood pressure measurement method based on the Korotkoff sound delay time (KDT) and to evaluate its accuracy in preliminary experiments. KDT decreases as the cuff pressure P is deflated, which can be described by a function KDT=f(P). In fact, KDT is a function of the arterial transmural pressure; therefore, the variation in blood pressure can be obtained from the transmural pressure, which is estimated from the KDT. Holding the cuff pressure approximately constant between systolic and diastolic pressure, the variation in blood pressure ΔEBP between successive heartbeats can be estimated from KDT and f'(P), which represents the variation of KDT per unit pressure. The blood pressure for each heartbeat can then be obtained by accumulating the ΔEBP. Invasive and noninvasive blood pressure values of six participants were measured simultaneously to evaluate the method. The average of the correlation coefficients between the invasive mean arterial pressure (MAP) and the KDT for the six participants was -0.91. The average of the correlation coefficients between the invasive MAP and the estimated mean blood pressure (EBP) was 0.92. The mean difference between EBP and MAP was 0.51 mmHg, and the SD was 2.65 mmHg. The mean blood pressure estimated by the KDT is consistent with the invasive MAP. The beat-to-beat blood pressure estimated by KDT provides an accurate estimate of MAP in the preliminary experiments and represents a potentially acceptable alternative to invasive blood pressure monitoring during laboratory studies.
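The accumulation step ΔEBP = ΔKDT / f'(P) can be sketched directly. The KDT series, slope, and starting pressure below are invented for illustration; the negative slope reflects the reported negative correlation between KDT and pressure:

```python
import numpy as np

def beat_to_beat_bp(kdt, dkdt_dp, bp0):
    # Beat-to-beat pressure from Korotkoff sound delay times at a held cuff
    # pressure: per-beat change = delta-KDT / f'(P), accumulated from bp0.
    # dkdt_dp is the local slope f'(P) of the KDT-vs-pressure curve.
    delta_bp = np.diff(kdt) / dkdt_dp
    return bp0 + np.concatenate([[0.0], np.cumsum(delta_bp)])

# Invented numbers: KDT in ms, slope -0.8 ms per mmHg (KDT falls as the
# arterial pressure rises), calibration pressure 93 mmHg.
kdt = np.array([42.0, 41.2, 40.4, 41.6, 42.8])
bp = beat_to_beat_bp(kdt, dkdt_dp=-0.8, bp0=93.0)
```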
Collisionless absorption of intense laser radiation in nanoplasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaretsky, D F; Korneev, Philipp A; Popruzhenko, Sergei V
The rate of linear collisionless absorption of electromagnetic radiation in a nanoplasma - a classical electron gas localised in a heated ionised nanosystem (thin film or cluster) irradiated by an intense femtosecond laser pulse - is calculated. The absorption is caused by inelastic electron scattering from the self-consistent potential of the system in the presence of a laser field. The effect proves to be appreciable because of the small size of the systems. General expressions are obtained for the absorption rate as a function of the parameters of the single-particle self-consistent potential and the electron distribution function in the regime linear in the field. For the simplest cases, where the self-consistent field is created by an infinitely deep well or an infinite charged plane, closed analytic expressions are obtained for the absorption rate. Estimates presented in the paper demonstrate that, over a wide range of laser pulse and nanostructure parameters, the collisionless mechanism of heating the electron subsystem can be dominant. The possibility of experimental observation of collisionless absorption of intense laser radiation in nanoplasma is also discussed. (interaction of laser radiation with matter)
Classification with asymmetric label noise: Consistency and maximal denoising
Blanchard, Gilles; Flaska, Marek; Handy, Gregory; ...
2016-09-20
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.
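The "mixture proportion estimation" subproblem can be illustrated with a crude histogram-ratio estimator; the distributions, sample sizes, and binning here are all illustrative, not the paper's estimator (which carries convergence guarantees this sketch lacks).

```python
import numpy as np

# Given samples from F = (1 - nu)*G + nu*H and from H alone, the maximal
# proportion of H inside F is nu* = inf_S F(S)/H(S). We approximate the
# infimum over coarse histogram bins where H is well sampled.

rng = np.random.default_rng(0)
h = rng.normal(0.0, 1.0, 20000)                   # samples from H = N(0, 1)
f = np.concatenate([rng.normal(0.0, 1.0, 6000),   # nu = 0.3 of F drawn from H
                    rng.normal(4.0, 1.0, 14000)]) # the rest from G = N(4, 1)

edges = np.linspace(-2.0, 2.0, 9)                 # coarse bins where H has mass
f_counts, _ = np.histogram(f, edges)
h_counts, _ = np.histogram(h, edges)
ok = h_counts >= 50                               # skip poorly sampled bins
nu_hat = np.min((f_counts[ok] / len(f)) / (h_counts[ok] / len(h)))
# nu_hat lands close to the true mixing proportion of 0.3
```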
Sparse brain network using penalized linear regression
NASA Astrophysics Data System (ADS)
Lee, Hyekyoung; Lee, Dong Soo; Kang, Hyejin; Kim, Boong-Nyun; Chung, Moo K.
2011-03-01
Sparse partial correlation is a useful connectivity measure for brain networks when it is difficult to compute the exact partial correlation in the small-n large-p setting. In this paper, we formulate the problem of estimating partial correlation as a sparse linear regression with an l1-norm penalty. The method is applied to a brain network consisting of parcellated regions of interest (ROIs), which are obtained from FDG-PET images of children with autism spectrum disorder (ASD) and pediatric control (PedCon) subjects. To validate the results, we check the reproducibility of the obtained brain networks by leave-one-out cross-validation and compare the clustered structures derived from the brain networks of ASD and PedCon.
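The regression formulation can be sketched as follows: each region's signal is regressed on the others under an l1-norm penalty, and zero coefficients mark absent direct connections. The tiny ISTA solver and the three-variable synthetic data are illustrative, not the authors' pipeline.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize 0.5*||y - Xb||^2/n + lam*||b||_1 by iterative soft-thresholding."""
    n, p = X.shape
    b = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - step * grad
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return b

rng = np.random.default_rng(1)
x1 = rng.normal(size=2000)
x2 = rng.normal(size=2000)               # no direct link to x3
x3 = x1 + 0.3 * rng.normal(size=2000)    # strong partial correlation with x1
coef = lasso_ista(np.column_stack([x1, x2]), x3, lam=0.1)
# coef[0] is large; coef[1] is shrunk to (essentially) zero
```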
Combining multi-atlas segmentation with brain surface estimation
NASA Astrophysics Data System (ADS)
Huo, Yuankai; Carass, Aaron; Resnick, Susan M.; Pham, Dzung L.; Prince, Jerry L.; Landman, Bennett A.
2016-03-01
Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitations in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) Fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.
Ultraviolet imaging telescope and optical emission-line observations of H II regions in M81
NASA Technical Reports Server (NTRS)
Hill, Jesse K.; Cheng, K.-P.; Bohlin, Ralph C.; Cornett, Robert H.; Hintzen, P. M. N.; O'Connell, Robert W.; Roberts, Morton S.; Smith, Andrew M.; Smith, Eric P.; Stecher, Theodore P.
1995-01-01
Images of the type Sab spiral galaxy M81 were obtained in far-UV and near-UV bands by the Ultraviolet Imaging Telescope (UIT) during the Astro-1 Spacelab mission of 1990 December. Magnitudes in the two UV bands are determined for 52 H II regions from the catalog of Petit, Sivan, & Karachentsev (1988). Fluxes of the H-alpha and H-beta emission lines are determined from CCD images. Extinctions for the brightest H II regions are determined from observed Balmer decrements. Fainter H II regions are assigned the average of published radio-H-alpha extinctions for several bright H II regions. The radiative transfer models of Witt, Thronson, & Capuano (1992) are shown to predict a relationship between Balmer decrement and H-alpha extinction consistent with observed line and radio fluxes for the brightest 7 H II regions and are used to estimate the UV extinction. Ratios of Lyman continuum fluxes are compared with ratios predicted by model spectra computed for an initial mass function (IMF) slope equal to -1.0 and stellar masses ranging from 5 to 120 solar mass. Ages and masses are estimated by comparing the H-alpha and far-UV fluxes and their ratio with the models. The total of the estimated stellar masses for the 52 H II regions is 1.4 x 10(exp 5) solar mass. The star-formation rate inferred for M81 from the observed UV and H-alpha fluxes is low for a spiral galaxy at approximately 0.13 solar mass/yr, but consistent with the low star-formation rates obtained by Kennicutt (1983) and Caldwell et al. (1991) for early-type spirals.
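The Balmer-decrement extinction step can be sketched numerically. This assumes an intrinsic Halpha/Hbeta ratio of 2.86 (case B recombination) and illustrative foreground-screen extinction-curve values k(Hbeta) = 3.61, k(Halpha) = 2.53; the paper itself relies on the Witt et al. radiative transfer models rather than this simple screen.

```python
import math

def ebv_from_balmer(ratio_obs, ratio_int=2.86, k_hb=3.61, k_ha=2.53):
    """Color excess E(B-V) implied by an observed Halpha/Hbeta flux ratio."""
    return 2.5 / (k_hb - k_ha) * math.log10(ratio_obs / ratio_int)

def a_halpha(ratio_obs, k_ha=2.53):
    """Extinction in magnitudes at Halpha."""
    return k_ha * ebv_from_balmer(ratio_obs)

zero_reddening = ebv_from_balmer(2.86)  # observed ratio = intrinsic -> 0 mag
a_ha = a_halpha(4.0)                    # a noticeably reddened region
```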
NASA Astrophysics Data System (ADS)
Choi, W.; Faloona, I. C.; McKay, M.; Goldstein, A. H.; Baker, B.
2011-07-01
The atmospheric boundary layer (ABL) height (zi) over complex, forested terrain is estimated based on the power spectra and the integral length scale of cross-stream winds obtained from a three-axis sonic anemometer during the two summers of the BEARPEX (Biosphere Effects on Aerosol and Photochemistry) Experiment. The zi values estimated with this technique show very good agreement with observations obtained from balloon tether sondes (2007) and rawinsondes (2009) under unstable conditions (z/L < 0) at the coniferous forest in the California Sierra Nevada. On the other hand, the low frequency behavior of the streamwise upslope winds did not exhibit significant variations and was therefore not useful in predicting boundary layer height. The behavior of the nocturnal boundary layer height (h) with respect to the power spectra of the v-wind component and temperature under stable conditions (z/L > 0) is also presented. The nocturnal boundary layer height is found to be fairly well predicted by a recent interpolation formula proposed by Zilitinkevich et al. (2007), although it was observed to only vary from 60-80 m during the 2009 experiment in which it was measured. Finally, significant directional wind shear was observed during both day and night soundings. The winds were found to be consistently backing from the prevailing west-southwesterlies within the ABL (the anabatic cross-valley circulation) to southerlies in a layer ~1-2 km thick just above the ABL before veering to the prevailing westerlies further aloft. This shear pattern is shown to be consistent with the forcing of a thermal wind driven by the regional temperature gradient directed east-southeast in the lower troposphere.
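The integral-scale idea behind the zi estimate can be sketched as follows: compute the autocorrelation of the cross-stream wind, integrate it up to its first zero crossing to obtain an integral time scale, and convert to a length scale via Taylor's frozen-turbulence hypothesis. The AR(1) surrogate series and sampling parameters are illustrative; relating the resulting length scale to zi requires the calibration against sounding data described in the abstract.

```python
import numpy as np

def integral_length_scale(v, mean_wind, dt):
    v = v - v.mean()
    acf = np.correlate(v, v, mode="full")[len(v) - 1:]
    acf = acf / acf[0]                       # normalized autocorrelation
    crossings = np.nonzero(acf <= 0.0)[0]
    cut = crossings[0] if crossings.size else len(acf)
    t_int = dt * acf[:cut].sum()             # integral time scale (s)
    return mean_wind * t_int                 # length scale (m), Taylor's hypothesis

rng = np.random.default_rng(2)
innovations = rng.normal(size=5000)
v = np.zeros(5000)
for i in range(1, 5000):                     # AR(1) surrogate for the v-wind
    v[i] = 0.95 * v[i - 1] + innovations[i]
L = integral_length_scale(v, mean_wind=3.0, dt=0.1)  # 10 Hz sonic, 3 m/s wind
```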
THE SPLASH SURVEY: SPECTROSCOPY OF 15 M31 DWARF SPHEROIDAL SATELLITE GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tollerud, Erik J.; Bullock, James S.; Yniguez, Basilio
2012-06-10
We present a resolved star spectroscopic survey of 15 dwarf spheroidal (dSph) satellites of the Andromeda galaxy (M31). We filter foreground contamination from Milky Way (MW) stars, noting that MW substructure is evident in this contaminant sample. We also filter M31 halo field giant stars and identify the remainder as probable dSph members. We then use these members to determine the kinematical properties of the dSphs. For the first time, we confirm that And XVIII, XXI, and XXII show kinematics consistent with bound, dark-matter-dominated galaxies. From the velocity dispersions for the full sample of dSphs we determine masses, which we combine with the size and luminosity of the galaxies to produce mass-size-luminosity scaling relations. With these scalings we determine that the M31 dSphs are fully consistent with the MW dSphs, suggesting that the well-studied MW satellite population provides a fair sample for broader conclusions. We also estimate dark matter halo masses of the satellites and find that there is no sign that the luminosity of these galaxies depends on their dark halo mass, a result consistent with what is seen for MW dwarfs. Two of the M31 dSphs (And XV, XVI) have estimated maximum circular velocities smaller than 12 km s⁻¹ (at 1σ), which likely places them within the lowest-mass dark matter halos known to host stars (along with Boötes I of the MW). Finally, we use the systemic velocities of the M31 satellites to estimate the mass of the M31 halo, obtaining a virial mass consistent with previous results.
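The dispersion-to-mass step can be sketched with a simple dynamical estimator of the kind used for dSph kinematics, M(<r_half) ~ coeff * sigma^2 * r_half / G. The coefficient (3 here) and the input values are illustrative; published estimators differ in the exact coefficient and radius definition.

```python
G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def dynamical_mass(sigma_los_kms, r_half_kpc, coeff=3.0):
    """Mass within the half-light radius from the line-of-sight dispersion."""
    return coeff * sigma_los_kms ** 2 * r_half_kpc / G

m_half = dynamical_mass(10.0, 0.3)  # sigma = 10 km/s, r_half = 0.3 kpc
# of order 2e7 Msun, typical of a dark-matter-dominated dSph
```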
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batcheller, Thomas Aquinas; Taylor, Dean Dalton
Idaho Nuclear Technology and Engineering Center 300,000-gallon vessel WM-189 was filled in late 2001 with concentrated sodium bearing waste (SBW). Three airlifted liquid samples and a steam jetted slurry sample were obtained for quantitative analysis and characterization of WM-189 liquid phase SBW and tank heel sludge. Estimates were provided for most of the reported data values, based on the greater of (a) analytical uncertainty, and (b) variation of analytical results between nominally similar samples. A consistency check on the data was performed by comparing the total mass of dissolved solids in the liquid, as measured gravimetrically from a dried sample, with the corresponding value obtained by summing the masses of cations and anions in the liquid, based on the reported analytical data. After reasonable adjustments to the nitrate and oxygen concentrations, satisfactory consistency between the two results was obtained. A similar consistency check was performed on the reported compositional data for sludge solids from the steam jetted sample. In addition to the compositional data, various other analyses were performed: particle size distribution was measured for the sludge solids, sludge settling tests were performed, and viscosity measurements were made. WM-189 characterization results were compared with those for WM-180 and other Tank Farm Facility tank characterization data. A 2-liter batch of WM-189 simulant was prepared and a clear, stable solution was obtained, based on a general procedure for mixing SBW simulant that was developed by Dr. Jerry Christian. This WM-189 SBW simulant is considered suitable for laboratory testing for process development.
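The gravimetric-versus-analytical consistency check described above amounts to a simple mass balance; the concentrations (g/L) and the 10% tolerance below are illustrative placeholders, not WM-189 data.

```python
# Compare total dissolved solids measured by drying against the summed masses
# of the reported cations and anions.
analytes_g_per_L = {"Na+": 46.0, "K+": 7.0, "Al3+": 16.0, "H+": 1.5,
                    "NO3-": 180.0, "Cl-": 1.0, "SO4 2-": 5.0}

def percent_discrepancy(gravimetric_tds, concentrations):
    """Signed percent difference of summed analyte mass vs gravimetric TDS."""
    total = sum(concentrations.values())
    return 100.0 * (total - gravimetric_tds) / gravimetric_tds

d = percent_discrepancy(250.0, analytes_g_per_L)  # gravimetric TDS = 250 g/L
consistent = abs(d) < 10.0                        # tolerance is a modeling choice
```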
Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.
Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen
2011-04-01
Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.
Assessment of Thematic Mapper Band-to-band Registration by the Block Correlation Method
NASA Technical Reports Server (NTRS)
Card, D. H.; Wrigley, R. C.; Mertz, F. C.; Hall, J. R.
1984-01-01
The design of the Thematic Mapper (TM) multispectral radiometer makes it susceptible to band-to-band misregistration. To estimate band-to-band misregistration, a block correlation method is employed. This method is chosen over other possible techniques (band differencing and flickering) because it produces quantitative results. The method correlates rectangular blocks of pixels from one band against blocks centered on identical pixels from a second band. The block pairs are shifted in pixel increments both vertically and horizontally with respect to each other, and the correlation coefficient for each shift position is computed. The displacement corresponding to the maximum correlation is taken as the best estimate of registration error for each block pair. Subpixel shifts are estimated by a bi-quadratic interpolation of the correlation values surrounding the maximum correlation. To obtain statistical summaries for each band combination, post-processing of the block correlation results is performed. The method yields estimates of registration error that are consistent with expectations.
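The correlation-and-interpolation procedure can be sketched as follows. The synthetic bands, the single whole-image block, and the separable 1-D quadratic refinement (standing in for the paper's bi-quadratic surface fit) are simplifications for illustration.

```python
import numpy as np

def best_shift(a, b, max_shift=3):
    """Registration offset of band b relative to band a via block correlation."""
    shifts = list(range(-max_shift, max_shift + 1))
    m = max_shift
    corr = np.empty((len(shifts), len(shifts)))
    a_blk = a[m:-m, m:-m]
    for ii, dy in enumerate(shifts):
        for jj, dx in enumerate(shifts):
            b_blk = b[m + dy:b.shape[0] - m + dy, m + dx:b.shape[1] - m + dx]
            corr[ii, jj] = np.corrcoef(a_blk.ravel(), b_blk.ravel())[0, 1]
    i, j = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(cm, c0, cp):  # sub-pixel peak of a quadratic through 3 samples
        return 0.5 * (cm - cp) / (cm - 2.0 * c0 + cp)

    dy = shifts[i] + (refine(corr[i - 1, j], corr[i, j], corr[i + 1, j])
                      if 0 < i < len(shifts) - 1 else 0.0)
    dx = shifts[j] + (refine(corr[i, j - 1], corr[i, j], corr[i, j + 1])
                      if 0 < j < len(shifts) - 1 else 0.0)
    return dy, dx

rng = np.random.default_rng(3)
band_a = rng.normal(size=(64, 64))
band_b = np.roll(band_a, (2, -1), axis=(0, 1))  # misregister by (+2, -1) pixels
dy, dx = best_shift(band_a, band_b)             # recovers the imposed shift
```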
Estimation of position resolution for a DOI-PET detector using 0.2 mm diameter WLS fibers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaneko, Naomi; Ito, Hiroshi; Kawai, Hideyuki
We have been developing a sub-mm-resolution DOI-PET detector with a target production cost of about $1 million, using wavelength shifting fibers (WLSF), scintillation crystals of plate shape and SiPMs (MPPC: HAMAMATSU K.K.). Conventional DOI-PET detector designs achieve a resolution of about 1 mm³ by using blocks that detect gamma rays in mm³ voxels. This requires a production cost of a few tens of millions of dollars or more, owing to the demanding crystal processing and the large number of photo-devices, and the technology is reaching its resolution limit. Achieving both higher resolution and lower production cost is the key challenge for making DOI-PET diagnosis widely available. We propose two types of detector: one for whole-body PET and the other for brain or small-animal imaging. Both PET systems consist of 6 blocks. The former consists of 6 layers of 300 mm x 300 mm x 4 mm crystal plates; the latter consists of 16 crystal layers in a 4 x 4 crystal array, where each crystal plate measures 40 mm x 40 mm x 1 mm. WLSF sheets are coupled to the upper and lower plane of each plate. In the whole-body PET system, 8 SiPMs are bonded on each side; for the brain PET, 9 WLS fibers are bonded on each side. The expected position resolution of the former is less than 1 mm. We estimated the experimental performance of the system using a ²²Na radioactive source. The light collection efficiency of the WLSF (R-3) sheet reached 10% with GAGG at 511 keV. The reconstructed position was linear in the incident position, and a resolution of 0.7 mm FWHM was achieved for the x-axis of the DOI by WLSF readout. (authors)
An Extensive Unified Thermo-Electric Module Characterization Method
Attivissimo, Filippo; Guarnieri Calò Carducci, Carlo; Lanzolla, Anna Maria Lucia; Spadavecchia, Maurizio
2016-01-01
Thermo-Electric Modules (TEMs) are being increasingly used in power generation as a valid alternative to batteries, providing autonomy to sensor nodes or entire Wireless Sensor Networks, especially for energy harvesting applications. Manufacturers often provide some essential parameters under specific conditions, such as the maximum temperature difference between the surfaces of the TEM or the maximum heat absorption, but in many cases a TEM-based system operates under those best conditions only for a fraction of the time; when dynamic working conditions occur, estimating the performance of TEMs is crucial to determining their actual efficiency. The focus of this work is on using a novel procedure to estimate the parameters of both the electrical and thermal equivalent model and investigate their relationship with the operating temperature and the temperature gradient. The novelty of the method consists in the use of a simple test configuration to stimulate the modules and simultaneously acquire electrical and thermal data, obtaining all parameters in a single test. Two different current profiles are proposed as possible stimuli, whose use depends on the available test instrumentation, and their relative performance is compared both quantitatively and qualitatively, in terms of standard deviation and estimation uncertainty. The results obtained, besides agreeing with both the technical literature and a further estimation method based on module specifications, also provide the designer with a detailed description of the module behavior, useful for simulating its performance in different scenarios. PMID:27983575
Characterizing hydraulic conductivity with the direct-push permeameter
Butler, J.J.; Dietrich, P.; Wittig, V.; Christy, T.
2007-01-01
The direct-push permeameter (DPP) is a promising approach for obtaining high-resolution information about vertical variations in hydraulic conductivity (K) in shallow unconsolidated settings. This small-diameter tool, which consists of a short screened section with a pair of transducers inset in the tool near the screen, is pushed into the subsurface to a depth at which a K estimate is desired. A short hydraulic test is then performed by injecting water through the screen at a constant rate (less than 4 L/min) while pressure changes are monitored at the transducer locations. Hydraulic conductivity is calculated using the injection rate and the pressure changes in simple expressions based on Darcy's Law. In units of moderate or higher hydraulic conductivity (more than 1 m/d), testing at a single level can be completed within 10 to 15 min. Two major advantages of the method are its speed and the insensitivity of the K estimates to the zone of compaction created by tool advancement. The potential of the approach has been assessed at two extensively studied sites in the United States and Germany over a K range commonly faced in practical field investigations (0.02 to 500 m/d). The results of this assessment demonstrate that the DPP can provide high-resolution K estimates that are in good agreement with estimates obtained through other means. © 2007 National Ground Water Association.
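A Darcy-type back-calculation in the spirit of the DPP test can be sketched as follows. This assumes idealized steady spherical flow from a point-like source, where the head change at distance r is dh = Q/(4*pi*K*r); the actual tool uses calibrated geometric factors for its screen and transducer spacing, so the geometry and numbers here are assumptions for illustration only.

```python
import math

def hydraulic_conductivity(q_m3s, r_m, dh_m):
    """K (m/s) from injection rate, transducer distance, and head change."""
    return q_m3s / (4.0 * math.pi * r_m * dh_m)

q = 3.0 / 1000.0 / 60.0                     # 3 L/min expressed in m^3/s
k = hydraulic_conductivity(q, r_m=0.1, dh_m=0.5)
k_m_per_day = k * 86400.0                   # convert to the m/d units above
```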
Vupparaboina, Kiran Kumar; Nizampatnam, Srinath; Chhablani, Jay; Richhariya, Ashutosh; Jana, Soumya
2015-12-01
A variety of vision ailments are indicated by anomalies in the choroid layer of the posterior visual section. Consequently, choroidal thickness and volume measurements, usually performed by experts based on optical coherence tomography (OCT) images, have assumed diagnostic significance. Now, to save precious expert time, it has become imperative to develop automated methods. To this end, one requires choroid outer boundary (COB) detection as a crucial step, where difficulty arises as the COB divides the choroidal granularity and the scleral uniformity only notionally, without marked brightness variation. In this backdrop, we measure the structural dissimilarity between choroid and sclera by structural similarity (SSIM) index, and hence estimate the COB by thresholding. Subsequently, smooth COB estimates, mimicking manual delineation, are obtained using tensor voting. On five datasets, each consisting of 97 adult OCT B-scans, automated and manual segmentation results agree visually. We also demonstrate close statistical match (greater than 99.6% correlation) between choroidal thickness distributions obtained algorithmically and manually. Further, quantitative superiority of our method is established over existing results by respective factors of 27.67% and 76.04% in two quotient measures defined relative to observer repeatability. Finally, automated choroidal volume estimation, being attempted for the first time, also yields results in close agreement with that of manual methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
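The SSIM measure used above to separate choroidal granularity from scleral uniformity can be sketched for a single pair of patches; the paper computes SSIM over local windows and thresholds the resulting map, so this single-window version with synthetic patches is a simplification. C1 and C2 are the standard stabilizing constants.

```python
import numpy as np

def ssim(x, y, data_range=1.0):
    """Global SSIM index between two equal-sized image patches."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(4)
patch = rng.random((16, 16))
s_same = ssim(patch, patch)                   # identical patches -> SSIM near 1
s_noise = ssim(patch, rng.random((16, 16)))   # unrelated texture -> low SSIM
```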
Community BMI Surveillance Using an Existing Immunization Registry in San Diego, California.
Ratigan, Amanda R; Lindsay, Suzanne; Lemus, Hector; Chambers, Christina D; Anderson, Cheryl A M; Cronan, Terry A; Browner, Deirdre K; Wooten, Wilma J
2017-06-01
This study examines the demographic representativeness of the County of San Diego Body Mass Index (BMI) Surveillance System to determine if the BMI estimates being obtained from this convenience sample of individuals who visited their healthcare provider for outpatient services can be generalized to the general population of San Diego. Height and weight were transmitted from electronic health records systems to the San Diego Immunization Registry (SDIR). Age, gender, and race/ethnicity of this sample are compared to general population estimates by sub-regional area (SRA) (n = 41) to account for regional demographic differences. A < 10% difference (calculated as the ratio of the differences between the frequencies of a sub-group in this sample and general population estimates obtained from the U.S. Census Bureau) was used to determine representativeness. In 2011, the sample consisted of 352,924 residents aged 2-100 years. The younger age groups (2-11, 12-17 years) and the oldest age group (≥65 years) were representative in 90, 75, and 85% of SRAs, respectively. Furthermore, at least one of the five racial/ethnic groups was represented in 71% of SRAs. This BMI Surveillance System was found to demographically represent some SRAs well, suggesting that this registry-based surveillance system may be useful in estimating and monitoring neighborhood-level BMI data.
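The representativeness screen described above reduces to a simple relative-difference check per sub-group and SRA; the frequencies below are illustrative.

```python
# A sub-group is deemed representative in a sub-regional area (SRA) when the
# relative difference between its sample frequency and the census-based
# population frequency is under 10%.

def is_representative(sample_freq, census_freq, tol=0.10):
    return abs(sample_freq - census_freq) / census_freq < tol

flags = [is_representative(s, c)
         for s, c in [(0.21, 0.20), (0.08, 0.12), (0.33, 0.31)]]
```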
An Extensive Unified Thermo-Electric Module Characterization Method.
Attivissimo, Filippo; Guarnieri Calò Carducci, Carlo; Lanzolla, Anna Maria Lucia; Spadavecchia, Maurizio
2016-12-13
Thermo-Electric Modules (TEMs) are being increasingly used in power generation as a valid alternative to batteries, providing autonomy to sensor nodes or entire Wireless Sensor Networks, especially for energy harvesting applications. Often, manufacturers provide some essential parameters under determined conditions, like for example, maximum temperature difference between the surfaces of the TEM or for maximum heat absorption, but in many cases, a TEM-based system is operated under the best conditions only for a fraction of the time, thus, when dynamic working conditions occur, the performance estimation of TEMs is crucial to determine their actual efficiency. The focus of this work is on using a novel procedure to estimate the parameters of both the electrical and thermal equivalent model and investigate their relationship with the operating temperature and the temperature gradient. The novelty of the method consists in the use of a simple test configuration to stimulate the modules and simultaneously acquire electrical and thermal data to obtain all parameters in a single test. Two different current profiles are proposed as possible stimuli, which use depends on the available test instrumentation, and relative performance are compared both quantitatively and qualitatively, in terms of standard deviation and estimation uncertainty. Obtained results, besides agreeing with both technical literature and a further estimation method based on module specifications, also provides the designer a detailed description of the module behavior, useful to simulate its performance in different scenarios.
Mollenhauer, Robert; Brewer, Shannon K.
2017-01-01
Failure to account for variable detection across survey conditions constrains progressive stream ecology and can lead to erroneous stream fish management and conservation decisions. In addition to variable detection’s confounding long-term stream fish population trends, reliable abundance estimates across a wide range of survey conditions are fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and CPUE remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. 
Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the hierarchical framework. We demonstrate the application of this contemporary population estimation method to address a longstanding stream fish management issue. We also detail the advantages and trade-offs of hierarchical population estimation methods relative to CPUE and estimation methods that model each site separately.
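The removal designs that these models accommodate have a simple closed-form ancestor that makes the detection idea concrete. As a hedged illustration (the classical Zippin/Seber two-pass estimator with a constant capture probability, not the hierarchical multinomial N-mixture model itself):

```python
def two_pass_removal(c1, c2):
    """Zippin/Seber two-pass estimates (N_hat, p_hat) from removal counts.

    c1, c2: animals caught on the first and second removal pass.
    Assumes a closed population and equal capture probability per pass.
    """
    if c1 <= c2:
        raise ValueError("estimator undefined when c1 <= c2")
    p_hat = 1.0 - c2 / c1        # declining catch reveals capture probability
    n_hat = c1 ** 2 / (c1 - c2)  # N = c1 / p
    return n_hat, p_hat

# 60 then 20 fish caught: p = 2/3, N = 3600/40 = 90
N, p = two_pass_removal(60, 20)
```

The hierarchical models in the abstract generalize this by letting the capture probability vary with covariates such as effort and water clarity.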
Three-dimensional modeling, estimation, and fault diagnosis of spacecraft air contaminants.
Narayan, A P; Ramirez, W F
1998-01-01
A description is given of the design and implementation of a method to track the presence of air contaminants aboard a spacecraft using an accurate physical model and of a procedure that would raise alarms when certain tolerance levels are exceeded. Because our objective is to monitor the contaminants in real time, we make use of a state estimation procedure that filters measurements from a sensor system and arrives at an optimal estimate of the state of the system. The model essentially consists of a convection-diffusion equation in three dimensions, solved implicitly using the principle of operator splitting, and uses a flowfield obtained by the solution of the Navier-Stokes equations for the cabin geometry, assuming steady-state conditions. A novel implicit Kalman filter has been used for fault detection, a procedure that is an efficient way to track the state of the system and that uses the sparse nature of the state transition matrices.
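The state estimation step described above is a Kalman-type filter. As a generic sketch of the predict/correct cycle (the standard linear filter, not the paper's implicit, sparse-matrix variant; all matrices and values here are illustrative):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict + measurement-update cycle of a linear Kalman filter."""
    # Predict the state forward through the dynamics
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct with the new measurement z
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

# Scalar toy example: tracking a (nearly) constant contaminant concentration
x, P = np.array([0.0]), np.array([[1.0]])
F = H = np.eye(1)
Q, R = np.array([[1e-6]]), np.array([[0.5]])
for z in (1.1, 0.9, 1.0):
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```

The estimate covariance P shrinks with each measurement, which is what makes threshold-based alarm raising on the filtered state meaningful.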
Classifying unresolved objects from simulated space data.
NASA Technical Reports Server (NTRS)
Nalepka, R. F.; Hyde, P. D.
1973-01-01
A multispectral scanner data set gathered at a flight altitude of 10,000 ft over an agricultural area was modified to simulate the spatial resolution of spacecraft scanners. Signatures were obtained for several major crops, and their proportions were estimated over a large area. For each crop, a map was generated to show its approximate proportion in each resolution element, and hence its distribution over the area of interest. A statistical criterion was developed to identify data points that may not represent a mixture of the specified crops. This allows a great reduction in the effect of unknown or alien objects on the estimated proportions. The criterion can also be used to locate special features, such as roads or farmhouses. Preliminary analysis indicates a high level of consistency between estimated proportions and available ground truth. Large concentrations of major crops show up especially well on the maps.
NASA Technical Reports Server (NTRS)
Didwall, E. M.
1981-01-01
Low-latitude magnetic field variations (magnetic storms) caused by large fluctuations in the equatorial ring current were derived from magnetic field magnitude data obtained by the OGO 2, 4, and 6 satellites over an almost 5-year period. The analysis procedure consisted of (1) separating the disturbance field into internal and external parts relative to the surface of the Earth; (2) estimating the response function relating the internally generated magnetic field variations to the external variations due to the ring current; and (3) interpreting the estimated response function using theoretical response functions for known conductivity profiles. Special consideration is given to possible ocean effects. A temperature profile is proposed using conductivity-temperature data for single-crystal olivine. The resulting temperature profile is reasonable for depths below 150-200 km but is too high for shallower depths. Apparently, conductivity is not controlled solely by olivine at shallow depths.
An efficient quantum algorithm for spectral estimation
NASA Astrophysics Data System (ADS)
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
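For reference, the classical (non-quantum) linear-algebra core of the matrix pencil method can be sketched in a few lines of NumPy; the pencil parameter L and the eigenvalue threshold are illustrative choices, not values from the paper:

```python
import numpy as np

def matrix_pencil(y, L, dt, tol=1e-8):
    """Poles s_k = alpha_k + i*omega_k of y[n] = sum_k a_k exp(s_k n dt)."""
    N = len(y)
    # Hankel data matrices shifted by one sample
    Y0 = np.array([y[i:i + L] for i in range(N - L)])
    Y1 = np.array([y[i + 1:i + L + 1] for i in range(N - L)])
    # Eigenvalues of the pencil give z_k = exp(s_k * dt); rank-deficient
    # directions show up as (near-)zero eigenvalues and are discarded.
    z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    return np.log(z[np.abs(z) > tol]) / dt

# One damped sinusoid: damping -0.5 s^-1, frequency 5 Hz
dt = 1e-3
t = np.arange(200) * dt
poles = matrix_pencil(np.exp((-0.5 + 2j * np.pi * 5) * t), L=10, dt=dt)
```

This is the classical procedure whose matrix operations the quantum algorithm implements in superposition.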
Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Boucher, Matthew J.
2017-01-01
Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.
The estimation of material and patch parameters in a PDE-based circular plate model
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Brown, D. E.; Metcalf, Vern L.; Silcox, R. J.
1995-01-01
The estimation of material and patch parameters for a system involving a circular plate, to which piezoceramic patches are bonded, is considered. A partial differential equation (PDE) model for the thin circular plate is used, with the passive and active contributions from the patches included in the internal and external bending moments. This model contains piecewise constant parameters describing the density, flexural rigidity, Poisson ratio, and Kelvin-Voigt damping for the system, as well as patch constants and a coefficient for viscous air damping. Examples demonstrating the estimation of these parameters with experimental acceleration data and a variety of inputs to the experimental plate are presented. By using a physically derived PDE model to describe the system, parameter sets consistent across experiments are obtained, even when phenomena such as damping due to electric circuits affect the system dynamics.
Adaptive Video Streaming Using Bandwidth Estimation for 3.5G Mobile Network
NASA Astrophysics Data System (ADS)
Nam, Hyeong-Min; Park, Chun-Su; Jung, Seung-Won; Ko, Sung-Jea
Currently deployed mobile networks, including High Speed Downlink Packet Access (HSDPA), offer only best-effort Quality of Service (QoS). In wireless best-effort networks, bandwidth variation is a critical problem, especially for mobile devices with small buffers, because it leads to packet losses caused by buffer overflow as well as picture freezing due to high transmission delay or buffer underflow. In this paper, in order to provide seamless video streaming over HSDPA, we propose an efficient real-time video streaming method that consists of available bandwidth (AB) estimation for the HSDPA network and transmission rate control to prevent buffer overflows and underflows. In the proposed method, the client estimates the AB, and the estimate is fed back to the server through real-time transport control protocol (RTCP) packets. The server then adaptively adjusts the transmission rate according to the estimated AB and the buffer state obtained from the RTCP feedback information. Experimental results show that the proposed method achieves seamless video streaming over the HSDPA network, providing higher video quality and lower transmission delay.
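A minimal sketch of such a feedback loop, assuming an EWMA smoother for the AB samples and illustrative buffer thresholds and gains (the paper's actual estimator and rate controller may differ):

```python
def ewma_ab(prev_ab, sample_bps, alpha=0.125):
    """Smooth per-feedback-interval throughput samples into an AB estimate."""
    return (1 - alpha) * prev_ab + alpha * sample_bps

def choose_rate(ab_bps, buffer_fill, target=0.5, gain=0.5):
    """Sending rate from the AB estimate and client buffer occupancy (0..1):
    steer occupancy toward the target fill level, never exceeding the AB."""
    raw = ab_bps * (1 + gain * (target - buffer_fill))
    return max(0.1 * ab_bps, min(ab_bps, raw))

# Nearly full buffer: back off below the 1 Mbit/s AB estimate
rate = choose_rate(1e6, buffer_fill=0.9)
```

The server-side rule backs off when the client buffer nears overflow and ramps back up (capped at the estimated AB) when it nears underflow.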
Abundance models improve spatial and temporal prioritization of conservation resources.
Johnston, Alison; Fink, Daniel; Reynolds, Mark D; Hochachka, Wesley M; Sullivan, Brian L; Bruns, Nicholas E; Hallstein, Eric; Merrifield, Matt S; Matsumoto, Sandi; Kelling, Steve
2015-10-01
Conservation prioritization requires knowledge about organism distribution and density. This information is often inferred from models that estimate the probability of species occurrence rather than from models that estimate species abundance, because abundance data are harder to obtain and model. However, occurrence and abundance may not display similar patterns and therefore development of robust, scalable, abundance models is critical to ensuring that scarce conservation resources are applied where they can have the greatest benefits. Motivated by a dynamic land conservation program, we develop and assess a general method for modeling relative abundance using citizen science monitoring data. Weekly estimates of relative abundance and occurrence were compared for prioritizing times and locations of conservation actions for migratory waterbird species in California, USA. We found that abundance estimates consistently provided better rankings of observed counts than occurrence estimates. Additionally, the relationship between abundance and occurrence was nonlinear and varied by species and season. Across species, locations prioritized by occurrence models had only 10-58% overlap with locations prioritized by abundance models, highlighting that occurrence models will not typically identify the locations of highest abundance that are vital for conservation of populations.
A Height Estimation Approach for Terrain Following Flights from Monocular Vision
Campos, Igor S. G.; Nascimento, Erickson R.; Freitas, Gustavo M.; Chaimowicz, Luiz
2016-01-01
In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy. PMID:27929424
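The height-from-motion geometry reduces to a one-line pinhole relation in the idealized nadir-camera case, which is a hedged simplification of the full optical-flow pipeline described above (names and numbers are illustrative):

```python
def height_from_flow(flow_px_s, speed_m_s, focal_px):
    """Height above terrain from the pinhole relation u = f*v/h, i.e.
    h = f*v/u, for a downward-looking camera translating at ground speed v.

    flow_px_s: optical flow magnitude of a ground feature (pixels/s)
    speed_m_s: aircraft ground speed (m/s)
    focal_px:  focal length in pixels
    """
    if flow_px_s <= 0:
        raise ValueError("flow must be positive")
    return focal_px * speed_m_s / flow_px_s

# A feature flowing at 100 px/s, 1000 px focal length, 5 m/s ground speed
h = height_from_flow(100.0, 5.0, 1000.0)   # 50 m
```

Real terrain-following flights need the classifier step from the abstract precisely because this relation degrades when flow tracking is unreliable.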
NASA Astrophysics Data System (ADS)
Nitz, D. E.; Curry, J. J.; Buuck, M.; DeMann, A.; Mitchell, N.; Shull, W.
2018-02-01
We report radiative transition probabilities for 5029 emission lines of neutral cerium within the wavelength range 417-1110 nm. Transition probabilities for only 4% of these lines have been previously measured. These results are obtained from a Boltzmann analysis of two high resolution Fourier transform emission spectra used in previous studies of cerium, obtained from the digital archives of the National Solar Observatory at Kitt Peak. The set of transition probabilities used for the Boltzmann analysis are those published by Lawler et al (2010 J. Phys. B: At. Mol. Opt. Phys. 43 085701). Comparisons of branching ratios and transition probabilities for lines common to the two spectra provide important self-consistency checks and test for the presence of self-absorption effects. Estimated 1σ uncertainties for our transition probability results range from 10% to 18%.
Functional stability of cerebral circulatory system
NASA Technical Reports Server (NTRS)
Moskalenko, Y. Y.
1980-01-01
The functional stability of the cerebral circulation system appears to rest both on active mechanisms and on those stemming from specific features of the biophysical structure of the system under study. The latter has some relevant criteria for its quantitative estimation. The data obtained suggest that an essential part of the mechanism for active responses of cerebral vessels, which maintains the functional stability of this portion of the vascular system, consists of a neurogenic component involving central nervous structures localized, for instance, in the medulla oblongata.
NASA Astrophysics Data System (ADS)
Le Gall, C.; Geiger, E.; Gallais-During, A.; Pontillon, Y.; Lamontagne, J.; Hanus, E.; Ducros, G.
2017-11-01
Qualitative and quantitative analyses on the VERDON-1 sample made it possible to obtain valuable information on fission product behaviour in the fuel during the test. A promising methodology based on the quantitative results of post-test characterisations has been implemented to assess the release fraction of non γ-emitter fission products. The order of magnitude of the estimated release fractions for each fission product was consistent with their class of volatility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, T.S.; Ikalainen, P.K.
1995-12-31
The authors report a two-temperature RF bias stress test on nominal 1.2 W, 7.5 GHz GaAs/AlGaAs HBT unit cell amplifiers. MTTFs of 2020 and 1340 hours were obtained at Tj = 218 °C and 245 °C, respectively, under nominal input bias. An activation energy of 0.42 eV is estimated, consistent with published results for similar devices under DC bias stress.
Transistor-like behavior of single metalloprotein junctions.
Artés, Juan M; Díez-Pérez, Ismael; Gorostiza, Pau
2012-06-13
Single protein junctions consisting of azurin bridged between a gold substrate and the probe of an electrochemical scanning tunneling microscope (ECSTM) have been obtained by two independent methods that allowed statistical analysis over a large number of measured junctions. Conductance measurements yield (7.3 ± 1.5) × 10⁻⁶ G₀, in agreement with reported estimates using other techniques. Redox gating of the protein with an on/off ratio of 20 was demonstrated and constitutes a proof of principle of a single-redox-protein field-effect transistor.
Charge distribution and transport properties in reduced ceria phases: A review
NASA Astrophysics Data System (ADS)
Shoko, E.; Smith, M. F.; McKenzie, Ross H.
2011-12-01
The question of the charge distribution in reduced ceria phases (CeO2-x) is important for understanding the microscopic physics of oxygen storage capacity and the electronic and ionic conductivities in these materials. All of these are key properties in the application of these materials in catalysis and electrochemical devices. Several approaches have been applied to study this problem, including ab initio methods. Recently [1], we applied the bond valence model (BVM) to discuss the charge distribution in several different crystallographic phases of reduced ceria. Here, we compare the BVM results to those from atomistic simulations to determine whether the predictions of the two approaches are consistent. Our analysis shows that the two methods give a consistent picture of the charge distribution around oxygen vacancies in bulk reduced ceria phases. We then review the transport theory applicable to reduced ceria phases, providing useful relationships that enable comparison of experimental results obtained by different techniques. In particular, we compare transport parameters obtained from the observed optical absorption spectrum α(ω) and the dc electrical conductivity with those predicted by small polaron theory and the Harrison method. The small polaron energy is comparable to that estimated from α(ω). However, we found a discrepancy between the value of the electron hopping matrix element, t, estimated from the Marcus-Hush formula and that obtained by the Harrison method. Part of this discrepancy could be attributed to the system lying in the crossover region between the adiabatic and nonadiabatic regimes, whereas our calculations assumed the system to be nonadiabatic. Finally, by considering the relationship between the charge distribution and electronic conductivity, we suggest the possibility of low-temperature metallic conductivity for intermediate phases, i.e., x ≈ 0.3. This has not yet been experimentally observed.
A complex valued radial basis function network for equalization of fast time varying channels.
Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R
1999-01-01
This paper presents a complex-valued radial basis function (RBF) network for the equalization of fast time-varying channels. A new method for calculating the centers of the RBF network is given. The method allows the number of RBF centers to be fixed even as the equalizer order is increased, so that good performance is obtained by a high-order RBF equalizer with a small number of centers. Simulations are performed on time-varying channels using a Rayleigh fading channel model to compare the performance of our RBF equalizer with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and an MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with less computational complexity.
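To make the construction concrete, here is a minimal complex-valued RBF fit with fixed centers and least-squares output weights. The center placement and the toy channel below are illustrative assumptions, not the paper's new center-selection method:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian activations for complex scalar inputs X (N,) and centers (M,)."""
    d2 = np.abs(X[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2 * width ** 2)).astype(complex)

def fit_weights(X, d, centers, width):
    """Complex output weights minimizing ||Phi w - d||^2."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, d, rcond=None)
    return w

# Toy use: learn to invert a fixed complex channel gain on QPSK symbols
symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
received = 0.8 * np.exp(0.3j) * symbols      # channel output
w = fit_weights(received, symbols, received, width=1.0)
```

A real equalizer would feed vectors of delayed channel outputs into the kernels and retrain as the channel fades; this sketch only shows the complex-valued network mechanics.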
NASA Astrophysics Data System (ADS)
Vergino, Eileen S.
Soviet seismologists have published descriptions of 96 nuclear explosions conducted from 1961 through 1972 at the Semipalatinsk test site in Kazakhstan, central Asia [Bocharov et al., 1989]. With the exception of releasing news about some of their peaceful nuclear explosions (PNEs), the Soviets have never before published such a body of information. To estimate the seismic yield of a nuclear explosion it is necessary to obtain a calibrated magnitude-yield relationship based on events with known yields and with a consistent set of seismic magnitudes. U.S. estimation of Soviet test yields has been done through application of relationships to the Soviet sites based on U.S. experience at the Nevada Test Site (NTS), making some correction for differences due to attenuation and near-source coupling of seismic waves.
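The inversion behind such a calibrated relationship is simple: with mb = a + b·log10(Y), the yield follows as Y = 10^((mb − a)/b). The coefficients below are placeholders for illustration only; real values are site-specific, which is exactly the attenuation and coupling issue the passage describes:

```python
def yield_from_mb(mb, a=4.45, b=0.75):
    """Invert mb = a + b*log10(Y), Y in kilotons.

    a, b are illustrative placeholders; calibrated values depend on the
    test site (attenuation, near-source coupling of seismic waves).
    """
    return 10 ** ((mb - a) / b)

y_kt = yield_from_mb(5.2)   # with these placeholder coefficients, 10 kt
```

Applying NTS-calibrated coefficients to Semipalatinsk events without correction is precisely the bias the calibration effort aims to remove.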
The measurement of bacterial translation by photon correlation spectroscopy.
Stock, G B; Jenkins, T C
1978-01-01
Photon correlation spectroscopy is shown to be a practical technique for the accurate determination of the translational speeds of bacteria. Though other attempts have been made to use light scattering as a probe of various aspects of bacterial motility, no other comprehensive study firmly establishing the basic capabilities and limitations of the technique has been published. The intrinsic accuracy of the assay of translational speeds by photon correlation spectroscopy is investigated by analysis of synthetic autocorrelation data; consistently accurate estimates of the mean and second moment of the speed distribution can be calculated. Extensive analyses of experimental preparations of Salmonella typhimurium examine the possible sources of experimental difficulty with the assay. Cinematography confirms the bacterial speed estimates obtained by photon correlation techniques. PMID:346073
NASA Astrophysics Data System (ADS)
Lin, Pei-Sheng; Rosset, Denis; Zhang, Yanbao; Bancal, Jean-Daniel; Liang, Yeong-Cherng
2018-03-01
The device-independent approach to physics is one where conclusions are drawn directly from the observed correlations between measurement outcomes. In quantum information, this approach allows one to make strong statements about the properties of the underlying systems or devices solely via the observation of Bell-inequality-violating correlations. However, since one can only perform a finite number of experimental trials, statistical fluctuations necessarily accompany any estimation of these correlations. Consequently, an important gap remains between the many theoretical tools developed for the asymptotic scenario and the experimentally obtained raw data. In particular, a physical and concurrently practical way to estimate the underlying quantum distribution has so far remained elusive. Here, we show that the natural analogs of the maximum-likelihood estimation technique and the least-square-error estimation technique in the device-independent context result in point estimates of the true distribution that are physical, unique, computationally tractable, and consistent. They thus serve as sound algorithmic tools allowing one to bridge the aforementioned gap. As an application, we demonstrate how such estimates of the underlying quantum distribution can be used to provide, in certain cases, trustworthy estimates of the amount of entanglement present in the measured system. In stark contrast to existing approaches to device-independent parameter estimations, our estimation does not require the prior knowledge of any Bell inequality tailored for the specific property and the specific distribution of interest.
Cruff, R.W.; Thompson, T.H.
1967-01-01
This study compared potential evapotranspiration, computed from climatological data by each of six empirical methods, with pan evaporation adjusted to equivalent lake evaporation by regional coefficients. The six methods tested were the Thornthwaite, U.S. Weather Bureau (a modification of the Penman method), Lowry-Johnson, Blaney-Criddle, Lane, and Hamon methods. The test was limited to 25 sites in the arid and subhumid parts of Arizona, California, and Nevada, where pan evaporation and concurrent climatological data were available. However, some of the sites lacked complete climatological data for the application of all six methods. Average values of adjusted pan evaporation and computed potential evapotranspiration were compared for two periods: the calendar year and the 6-month period from May 1 through October 31. The 25 sites sampled a wide range of climatic conditions. Ten sites (group 1) were in a highly arid environment and four (group 2) were in an arid environment that was modified by extensive irrigation. The remaining 11 sites (group 3) were in a subhumid environment. Only the Weather Bureau method gave estimates of potential evapotranspiration that closely agreed with the adjusted pan evaporation at all sites where the method was used. However, lack of climatological data restricted the use of the Weather Bureau method to seven sites. Results obtained by use of the Thornthwaite, Lowry-Johnson, and Hamon methods were consistently low. Results obtained by use of the Lane method agreed with adjusted pan evaporation at the group 1 sites but were consistently high at the group 2 and 3 sites. During the analysis it became apparent that adjusted pan evaporation in an arid environment (group 1 sites) was a spurious standard for evaluating the reliability of the methods that were tested. Group 1 data were accordingly not considered when drawing conclusions as to which of the six methods tested was best.
The results of this study for group 2 and 3 data indicated that the Blaney-Criddle method, which uses climatological data that can be readily obtained or deduced, was the most practical of the six methods for estimating potential evapotranspiration. At all 15 sites in the two environments, potential evapotranspiration computed by the Blaney-Criddle method checked the adjusted pan evaporation within ±22 percent. This percentage range is generally considered to be the range of reliability for estimating lake evaporation from evaporation pans.
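For reference, the metric SCS form of the Blaney-Criddle estimate the study favored; the crop coefficient below is an illustrative value, not one from the study:

```python
def blaney_criddle(temp_c, p_daylight, k=0.85):
    """Blaney-Criddle consumptive-use estimate (mm/day), metric SCS form:
    f = p * (0.46*T + 8.13), where T is mean monthly air temperature (deg C)
    and p is the mean daily percentage of annual daytime hours for the month.
    k is an illustrative crop/consumptive-use coefficient.
    """
    f = p_daylight * (0.46 * temp_c + 8.13)
    return k * f

# Warm summer month: T = 25 deg C, p = 0.27
pet = blaney_criddle(25.0, 0.27)
```

The method's appeal, as the study notes, is that temperature and daylight-hour data are readily obtained or deduced for almost any site.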
Near-field Oblique Remote Sensing of Stream Water-surface Elevation, Slope, and Surface Velocity
NASA Astrophysics Data System (ADS)
Minear, J. T.; Kinzel, P. J.; Nelson, J. M.; McDonald, R.; Wright, S. A.
2014-12-01
A major challenge for estimating discharges during flood events or in steep channels is the difficulty and hazard inherent in obtaining in-stream measurements. One possible solution is to use near-field remote sensing to obtain simultaneous water-surface elevations, slope, and surface velocities. In this test case, we utilized Terrestrial Laser Scanning (TLS) to remotely measure water-surface elevations and slope in combination with surface velocities estimated from particle image velocimetry (PIV) obtained by video-camera and/or infrared camera. We tested this method at several sites in New Mexico and Colorado using independent validation data consisting of in-channel measurements from survey-grade GPS and Acoustic Doppler Current Profiler (ADCP) instruments. Preliminary results indicate that for relatively turbid or steep streams, TLS collects tens of thousands of water-surface elevations and slopes in minutes, much faster than conventional means and at relatively high precision, at least as good as continuous survey-grade GPS measurements. Estimated surface velocities from this technique are within 15% of measured velocity magnitudes and within 10 degrees from the measured velocity direction (using extrapolation from the shallowest bin of the ADCP measurements). Accurately aligning the PIV results into Cartesian coordinates appears to be one of the main sources of error, primarily due to the sensitivity at these shallow oblique look angles and the low numbers of stationary objects for rectification. Combining remotely-sensed water-surface elevations, slope, and surface velocities produces simultaneous velocity measurements from a large number of locations in the channel and is more spatially extensive than traditional velocity measurements. 
These factors make this technique useful for improving estimates of flow measurements during flood flows and in steep channels while also decreasing the difficulty and hazard associated with making measurements in these conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geslot, Benoit; Pepino, Alexandra; Blaise, Patrick
A pile noise measurement campaign has been conducted by the CEA in the VENUS-F reactor (SCK-CEN, Mol, Belgium) in April 2011 in the reference critical configuration of the GUINEVERE experimental program. The experimental setup made it possible to estimate the core kinetic parameters: the prompt neutron decay constant, the delayed neutron fraction and the generation time. A precise assessment of these constants is of prime importance. In particular, the effective delayed neutron fraction is used to normalize and compare calculated reactivities of different subcritical configurations, obtained by modifying either the core layout or the control rods position, with experimental ones deduced from the analysis of measurements. This paper presents results obtained with a CEA-developed time stamping acquisition system. Data were analyzed using Rossi-α and Feynman-α methods. Results were normalized to reactor power using a calibrated fission chamber with a deposit of Np-237. Calculated factors were necessary to the analysis: the Diven factor was computed by the ENEA (Italy) and the power calibration factor by the CNRS/IN2P3/LPC Caen. Results deduced with both methods are consistent with respect to calculated quantities. Recommended values are given by the Rossi-α estimator, which was found to be the most robust. The neutron generation time was found equal to 0.438 ± 0.009 μs and the effective delayed neutron fraction is 765 ± 8 pcm. Discrepancies with the calculated value (722 pcm, calculation from ENEA) are satisfactory: -5.6% for the Rossi-α estimate and -2.7% for the Feynman-α estimate. (authors)
The role of global cloud climatologies in validating numerical models
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1991-01-01
Reliable estimates of the components of the surface radiation budget are important in studies of ocean-atmosphere interaction, land-atmosphere interaction, ocean circulation and in the validation of radiation schemes used in climate models. The methods currently under consideration must necessarily make certain assumptions regarding both the presence of clouds and their vertical extent. Because of the uncertainties in assumed cloudiness, all these methods involve perhaps unacceptable uncertainties. Here, a theoretical framework that avoids the explicit computation of cloud fraction and the location of cloud base in estimating the surface longwave radiation is presented. Estimates of the global surface downward fluxes and the oceanic surface net upward fluxes were made for four months (April, July, October and January) in 1985 to 1986. These estimates are based on a relationship between cloud radiative forcing at the top of the atmosphere and the surface obtained from a general circulation model. The radiation code is the version used in the UCLA/GLA general circulation model (GCM). The longwave cloud radiative forcing at the top of the atmosphere as obtained from Earth Radiation Budget Experiment (ERBE) measurements is used to compute the forcing at the surface by means of the GCM-derived relationship. This, along with clear-sky fluxes from the computations, yield maps of the downward longwave fluxes and net upward longwave fluxes at the surface. The calculated results are discussed and analyzed. The results are consistent with current meteorological knowledge and explainable on the basis of previous theoretical and observational works; therefore, it can be concluded that this method is applicable as one of the ways to obtain the surface longwave radiation fields from currently available satellite data.
Irurita Olivares, Javier; Alemán Aguilera, Inmaculada
2016-11-01
Sex estimation of juveniles in the Physical and Forensic Anthropology context currently presents serious difficulties because discriminatory bone characteristics are minimal until puberty. In addition, the small number of osteological collections of children available for research has made it difficult to develop effective methodologies in this regard. This study tested the characteristics of the ilium and jaw proposed by Schutkowski in 1993 for the estimation of sex in subadults. The study sample consisted of 109 boys and 76 girls, ranging in age from 5 months of gestation to 6 years, from the identified osteological collection of Granada (Spain). For the analysis and interpretation of the results, we proposed changes from previous studies because we believe they involved methodological errors relating to the calculation of the probabilities of success and the sex distribution in the sample. The results showed correct assignment probabilities much lower than those obtained by Schutkowski as well as by other authors. The best results were obtained with the angle and depth of the sciatic notch, with 0.73 and 0.80 probability of correct assignment, respectively, if the male trait was observed. The results obtained with the other criteria were too low to be valid in the context of Physical or Forensic Anthropology. From our results, we conclude that the Schutkowski method should not be used in a forensic context and that the sciatic notch is the most dimorphic trait in subadults and, therefore, the most appropriate for developing more effective methods of estimating sex.
NASA Astrophysics Data System (ADS)
Dalkilic, Turkan Erbay; Apaydin, Aysen
2009-11-01
In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations with different distributions that are derived from different clusters. When the problem is one of estimating a regression model for fuzzy inputs that have been derived from different distributions, the model is termed a 'switching regression model'. Here, l_i indicates the class number of each independent variable and p indicates the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. Some methods suggest the class numbers of the independent variables heuristically; alternatively, a suggested validity criterion for fuzzy clustering can be used to define the optimal class number of the independent variables. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values, after an optimal membership function suitable for the exponential distribution has been obtained.
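As an illustrative sketch only (not the authors' adaptive-network algorithm), a switching regression of the kind described above can be approximated by alternating weighted least squares with fuzzy c-means-style membership updates; the two-class linear setup, function name, and fuzzifier value are assumptions:

```python
import numpy as np

def fuzzy_switching_regression(x, y, n_classes=2, m=2.0, n_iter=50, seed=0):
    """Estimate per-class linear models y = a_i + b_i * x with fuzzy
    membership weights, in the spirit of a switching regression model."""
    rng = np.random.default_rng(seed)
    # random initial memberships, each row summing to 1
    u = rng.random((len(x), n_classes))
    u /= u.sum(axis=1, keepdims=True)
    X = np.column_stack([np.ones_like(x), x])
    coefs = np.zeros((n_classes, 2))
    for _ in range(n_iter):
        # weighted least squares fit for each class
        for i in range(n_classes):
            w = u[:, i] ** m
            W = np.diag(w)
            coefs[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        # fuzzy c-means-style membership update from squared residuals
        d = np.column_stack([(y - X @ c) ** 2 + 1e-12 for c in coefs])
        u = 1.0 / (d ** (1.0 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return coefs, u
```

Each observation thus contributes to every class model, with weight given by its membership degree rather than a hard class assignment.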
Motion compensation for cone-beam CT using Fourier consistency conditions
NASA Astrophysics Data System (ADS)
Berger, M.; Xia, Y.; Aichinger, W.; Mentl, K.; Unberath, M.; Aichert, A.; Riess, C.; Hornegger, J.; Fahrig, R.; Maier, A.
2017-09-01
In cone-beam CT, involuntary patient motion and inaccurate or irreproducible scanner motion substantially degrade image quality. To avoid artifacts, this motion needs to be estimated and compensated during image reconstruction. In previous work we showed that Fourier consistency conditions (FCC) can be used in fan-beam CT to estimate motion in the sinogram domain. This work extends the FCC to 3D cone-beam CT. We derive an efficient cost function to compensate for 3D motion using 2D detector translations. The extended FCC method has been tested with five translational motion patterns, using a challenging numerical phantom. We evaluated the root-mean-square error and the structural similarity index between motion-corrected and motion-free reconstructions. Additionally, we computed the mean absolute difference (MAD) between the estimated and the ground-truth motion. The practical applicability of the method is demonstrated by application to respiratory motion estimation in rotational angiography, and also to motion correction for weight-bearing imaging of knees; the latter makes use of a specifically modified FCC version which is robust to axial truncation. The results show a great reduction of motion artifacts. Accurate estimation results were achieved with a maximum MAD value of 708 μm and 1184 μm for motion along the vertical and horizontal detector direction, respectively. The image quality of reconstructions obtained with the proposed method is close to that of motion-corrected reconstructions based on the ground-truth motion. Simulations using noise-free and noisy data demonstrate that FCC are robust to noise. Even high-frequency motion was accurately estimated, leading to a considerable reduction of streaking artifacts. The method is purely image-based and therefore independent of any auxiliary data.
Intermediate energy proton-deuteron elastic scattering
NASA Technical Reports Server (NTRS)
Wilson, J. W.
1973-01-01
A fully symmetrized multiple scattering series is considered for the description of proton-deuteron elastic scattering. An off-shell continuation of the experimentally known two-body amplitudes that retains the exchange symmetries required for the calculation is presented. The one-boson-exchange terms of the two-body amplitudes are evaluated exactly in this off-shell prescription. The first two terms of the multiple scattering series are calculated explicitly, whereas multiple scattering effects are obtained as minimum variance estimates from the 146-MeV data of Postma and Wilson. The multiple scattering corrections indeed consist of low-order partial waves, as suggested by Sloan on the basis of model studies with separable interactions. The Hamada-Johnston wave function is shown to be consistent with the data for internucleon distances greater than about 0.84 fm.
Mixture modelling for cluster analysis.
McLachlan, G J; Chang, S U
2004-10-01
Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
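The clustering-by-posterior-assignment step described above can be sketched for one-dimensional data with a small EM fit of a g-component normal mixture; this is a generic illustration of the approach, not the authors' code, and the quantile-based initialization is an assumption:

```python
import numpy as np

def gmm_cluster_1d(x, g=2, n_iter=100):
    """Fit a g-component 1-D Gaussian mixture by EM, then cluster by
    assigning each point to the component with highest posterior."""
    x = np.asarray(x, float)
    # initialization: means at spread-out quantiles, common variance, equal weights
    mu = np.quantile(x, np.linspace(0.1, 0.9, g))
    var = np.full(g, x.var())
    pi = np.full(g, 1.0 / g)
    for _ in range(n_iter):
        # E-step: posterior probability (responsibility) of each component
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return resp.argmax(axis=1), mu
```

The `argmax` over posteriors implements the outright clustering: the ith cluster consists of the observations assigned to the ith component.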
Plasma Diffusion in Self-Consistent Fluctuations
NASA Technical Reports Server (NTRS)
Smets, R.; Belmont, G.; Aunai, N.
2012-01-01
The problem of particle diffusion in position space, as a consequence of electromagnetic fluctuations, is addressed. Numerical results obtained with a self-consistent hybrid code are presented, and a method to calculate the diffusion coefficient in the direction perpendicular to the mean magnetic field is proposed. The diffusion is estimated for two different types of fluctuations. The first type (resulting from an agyrotropic initial setting) is stationary, wide-band white noise, associated with a Gaussian probability distribution function for the magnetic fluctuations. The second type (resulting from a Kelvin-Helmholtz instability) is non-stationary, with a power-law spectrum and a non-Gaussian probability distribution function. The results of the study allow revisiting the question of the loading of particles of solar wind origin into the Earth's magnetosphere.
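A common way to estimate a perpendicular diffusion coefficient of the kind discussed above is from the slope of the mean squared displacement, D = ⟨Δx²⟩/(2t) for motion along one axis; the following is a sketch under that textbook assumption, not the authors' method:

```python
import numpy as np

def perp_diffusion_coefficient(traj, dt):
    """Diffusion coefficient from the mean squared displacement of an
    ensemble: D = <dx^2> / (2 t) for 1-D motion along one axis.

    traj: array (n_particles, n_steps) of positions along one axis
    perpendicular to the mean magnetic field; dt: time step."""
    disp = traj - traj[:, :1]                 # displacement from initial position
    msd = (disp ** 2).mean(axis=0)            # ensemble-averaged MSD per step
    t = dt * np.arange(traj.shape[1])
    slope = np.polyfit(t, msd, 1)[0]          # linear growth rate of the MSD
    return slope / 2
```

For an unmagnetized random walk with Gaussian steps of variance σ² per step, this recovers D = σ²/(2 dt).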
NASA Astrophysics Data System (ADS)
Murray, J. R.; Svarc, J. L.
2016-12-01
Constant secular velocities estimated from Global Positioning System (GPS)-derived position time series are a central input for modeling interseismic deformation in seismically active regions. Both postseismic motion and temporally correlated noise produce long-period signals that are difficult to separate from secular motion and can bias velocity estimates. For GPS sites installed post-earthquake it is especially challenging to uniquely estimate velocities and postseismic signals and to determine when the postseismic transient has decayed sufficiently to enable use of subsequent data for estimating secular rates. Within 60 km of the 2003 M6.5 San Simeon and 2004 M6 Parkfield earthquakes in California, 16 continuous GPS sites (group 1) were established prior to mid-2001, and 52 stations (group 2) were installed following the events. We use group 1 data to investigate how early in the post-earthquake time period one may reliably begin using group 2 data to estimate velocities. For each group 1 time series, we obtain eight velocity estimates using observation time windows with successively later start dates (2006 - 2013) and a parameterization that includes constant velocity, annual, and semi-annual terms but no postseismic decay. We compare these to velocities estimated using only pre-San Simeon data to find when the pre- and post-earthquake velocities match within uncertainties. To obtain realistic velocity uncertainties, for each time series we optimize a temporally correlated noise model consisting of white, flicker, random walk, and, in some cases, band-pass filtered noise contributions. Preliminary results suggest velocities can be reliably estimated using data from 2011 to the present. Ongoing work will assess velocity bias as a function of epicentral distance and length of post-earthquake time series as well as explore spatio-temporal filtering of detrended group 1 time series to provide empirical corrections for postseismic motion in group 2 time series.
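The velocity parameterization described (constant velocity plus annual and semi-annual terms) can be illustrated with an ordinary least-squares fit; the design matrix below is a generic sketch, not the authors' processing chain, and it omits the temporally correlated noise modeling:

```python
import numpy as np

def fit_secular_velocity(t, pos):
    """Least-squares fit of offset + constant velocity + annual and
    semi-annual sinusoids to a position time series (t in years)."""
    w = 2 * np.pi  # annual angular frequency when t is in years
    A = np.column_stack([
        np.ones_like(t), t,
        np.sin(w * t), np.cos(w * t),          # annual term
        np.sin(2 * w * t), np.cos(2 * w * t),  # semi-annual term
    ])
    coef, *_ = np.linalg.lstsq(A, pos, rcond=None)
    return coef  # coef[1] is the secular velocity
```

With white noise only, the formal velocity uncertainty from such a fit is optimistic; that is why the abstract emphasizes colored-noise models (flicker, random walk) for realistic uncertainties.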
Molecular Dynamics Simulation of the Antiamoebin Ion Channel: Linking Structure and Conductance
NASA Technical Reports Server (NTRS)
Wilson, Michael A.; Wei, Chenyu; Bjelkmar, Paer; Wallace, B. A.; Pohorille, Andrew
2011-01-01
Molecular dynamics simulations were carried out in order to ascertain which of the potential multimeric forms of the transmembrane peptaibol channel, antiamoebin, is consistent with its measured conductance. Estimates of the conductance obtained by counting ions that cross the channel and by solving the Nernst-Planck equation yield consistent results, indicating that the motion of ions inside the channel can be satisfactorily described as diffusive. The calculated conductance of octameric channels is markedly higher than the conductance measured in single-channel recordings, whereas the tetramer appears to be non-conducting. The conductance of the hexamer was estimated to be 115+/-34 pS and 74+/-20 pS, at 150 mV and 75 mV, respectively, in satisfactory agreement with the value of 90 pS measured at 75 mV. On this basis we propose that the antiamoebin channel consists of six monomers. Its pore is large enough to accommodate K(+) and Cl(-) with their first solvation shells intact. The free energy barrier encountered by K(+) is only 2.2 kcal/mol, whereas Cl(-) encounters a substantially higher barrier of nearly 5 kcal/mol. This difference makes the channel selective for cations. Ion crossing events are shown to be uncorrelated and follow Poisson statistics. Keywords: ion channels, peptaibols, channel conductance, molecular dynamics
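Estimating conductance by counting ion crossings, as described above, reduces to g = I/V with I = Ne/t; a minimal unit-conversion sketch (the function name and example values are illustrative, not from the simulations):

```python
def channel_conductance_pS(n_crossings, t_ns, v_mV):
    """Conductance from counted ion crossings: g = I/V with I = N*e/t.

    n_crossings: net number of charge crossings during the simulation
    t_ns: simulation time in nanoseconds; v_mV: applied voltage in mV."""
    e = 1.602e-19                                # elementary charge, C
    current_A = n_crossings * e / (t_ns * 1e-9)  # average current, A
    return current_A / (v_mV * 1e-3) * 1e12      # conductance in picosiemens
```

For example, 10 crossings in 100 ns at 150 mV correspond to roughly 107 pS, the order of magnitude discussed in the abstract.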
Validity test and its consistency in the construction of patient loyalty model
NASA Astrophysics Data System (ADS)
Yanuar, Ferra
2016-04-01
The main objective of this present study is to demonstrate the estimation of validity values and its consistency based on structural equation model. The method of estimation was then implemented to an empirical data in case of the construction the patient loyalty model. In the hypothesis model, service quality, patient satisfaction and patient loyalty were determined simultaneously, each factor were measured by any indicator variables. The respondents involved in this study were the patients who ever got healthcare at Puskesmas in Padang, West Sumatera. All 394 respondents who had complete information were included in the analysis. This study found that each construct; service quality, patient satisfaction and patient loyalty were valid. It means that all hypothesized indicator variables were significant to measure their corresponding latent variable. Service quality is the most measured by tangible, patient satisfaction is the most mesured by satisfied on service and patient loyalty is the most measured by good service quality. Meanwhile in structural equation, this study found that patient loyalty was affected by patient satisfaction positively and directly. Service quality affected patient loyalty indirectly with patient satisfaction as mediator variable between both latent variables. Both structural equations were also valid. This study also proved that validity values which obtained here were also consistence based on simulation study using bootstrap approach.
Estimation of the biserial correlation and its sampling variance for use in meta-analysis.
Jacobs, Perke; Viechtbauer, Wolfgang
2017-06-01
Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
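For reference, the biserial coefficient is commonly computed from the point-biserial coefficient as r_b = r_pb·sqrt(pq)/h, where h is the standard normal ordinate at the dichotomization threshold; the sketch below assumes this textbook estimator, not necessarily the exact variant studied in the article:

```python
import numpy as np
from math import sqrt, exp, pi
from statistics import NormalDist

def biserial(y, d):
    """Biserial correlation between continuous y and a dichotomized
    variable d (0/1), assuming an underlying normal variable behind d."""
    y, d = np.asarray(y, float), np.asarray(d)
    p = d.mean()          # proportion of 1s
    q = 1 - p
    m1, m0 = y[d == 1].mean(), y[d == 0].mean()
    s = y.std()           # population SD, as is conventional here
    r_pb = (m1 - m0) / s * sqrt(p * q)          # point-biserial coefficient
    z = NormalDist().inv_cdf(1 - p)             # threshold with P(Z > z) = p
    h = exp(-z * z / 2) / sqrt(2 * pi)          # normal ordinate at threshold
    return r_pb * sqrt(p * q) / h               # biserial coefficient
```

When the latent variable is truly normal and dichotomized at its median, this recovers the underlying product-moment correlation.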
NASA Technical Reports Server (NTRS)
Dardner, B. R.; Blad, B. L.; Thompson, D. R.; Henderson, K. E.
1985-01-01
Reflectance and agronomic Thematic Mapper (TM) data were analyzed to determine possible data transformations for evaluating several plant parameters of corn. Three transformation forms were used: the ratio of two TM bands, logarithms of two-band ratios, and normalized differences of two bands. Normalized differences and logarithms of two-band ratios responded similarly in the equations for estimating the plant growth parameters evaluated in this study. Two-term equations were required to obtain the maximum predictability of percent ground cover, canopy moisture content, and total wet phytomass. Standard error of estimate values were 15-26 percent lower for two-term estimates of these parameters than for one-term estimates. The terms log(TM4/TM2) and (TM4/TM5) produced the maximum predictability for leaf area and dry green leaf weight, respectively. The middle infrared bands TM5 and TM7 are essential for maximizing predictability for all measured plant parameters except leaf area index. The estimating models were evaluated over bare soil to discriminate between equations which are statistically similar. Qualitative interpretations of the resulting prediction equations are consistent with general agronomic and remote sensing theory.
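The three two-band transforms evaluated (simple ratio, log of the ratio, and normalized difference) can be sketched directly; the function name is an assumption:

```python
import numpy as np

def band_transforms(b_num, b_den):
    """The three two-band transforms: simple ratio, log of the ratio,
    and normalized difference, applied elementwise to two bands."""
    b_num, b_den = np.asarray(b_num, float), np.asarray(b_den, float)
    ratio = b_num / b_den                           # e.g. TM4/TM5
    log_ratio = np.log(ratio)                       # e.g. log(TM4/TM2)
    norm_diff = (b_num - b_den) / (b_num + b_den)   # normalized difference
    return ratio, log_ratio, norm_diff
```

Because log(a/b) and (a-b)/(a+b) are both monotone in the ratio a/b, the similar behavior of the two forms reported above is expected.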
Testing a single regression coefficient in high dimensional linear models
Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling
2017-01-01
In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
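A minimal sketch of the CPS idea as described in the abstract (screen the predictors most highly correlated with the target covariate, then run ordinary least squares and a z-test on the target's coefficient); the screening size k, function name, and homoscedastic standard error are simplifying assumptions:

```python
import numpy as np

def cps_ztest(X, y, target=0, k=5):
    """Correlated Predictors Screening sketch: control for the k predictors
    most correlated with the target covariate, then OLS + z-test."""
    n, p = X.shape
    others = [j for j in range(p) if j != target]
    corr = [abs(np.corrcoef(X[:, target], X[:, j])[0, 1]) for j in others]
    keep = [others[i] for i in np.argsort(corr)[::-1][:k]]  # top-k screened controls
    Z = np.column_stack([np.ones(n), X[:, [target] + keep]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / (n - Z.shape[1])
    cov = sigma2 * np.linalg.inv(Z.T @ Z)
    return beta[1] / np.sqrt(cov[1, 1])  # z-statistic for the target coefficient
```

The point of the screening step is that the low-dimensional OLS problem remains well-posed even when the full number of covariates exceeds the sample size.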
Testing a single regression coefficient in high dimensional linear models.
Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling
2016-11-01
In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively.
Wang, Xiao Yu; Guo, Yuming; FitzGerald, Gerry; Aitken, Peter; Tippett, Vivienne; Chen, Dong; Wang, Xiaoming; Tong, Shilu
2015-01-01
Different locations and study periods have been used in assessments of the relationships between heatwaves and mortality. However, little is known about the comparability and consistency of the previous effect estimates in the literature. This study assessed the heatwave-mortality relationship using different study periods in the three largest Australian cities (Brisbane, Melbourne and Sydney). Daily data on climatic variables and mortality for the three cities were obtained from relevant government agencies between 1988 and 2011. A consistent definition of heatwaves was used for these cities. A Poisson generalised additive model was fitted to assess the impact of heatwaves on mortality. Non-accidental and circulatory mortality significantly increased during heatwaves across the three cities, even with different heatwave definitions and study periods. Using the summer data resulted in the largest increase in effect estimates compared to those using the warm season or the whole year data. The findings may have implications for developing standard approaches to evaluating the heatwave-mortality relationship and advancing heat health warning systems. It also provides an impetus for methodological advances in assessing climate change-related health consequences.
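The core of such models is a Poisson log-linear regression of daily death counts on a heatwave indicator; the sketch below fits one by iteratively reweighted least squares and is a simplified stand-in (the study used a generalised additive model with smooth adjustment terms, which this sketch omits):

```python
import numpy as np

def poisson_glm(x, y, n_iter=25):
    """Poisson regression with log link, fitted by iteratively
    reweighted least squares (IRLS); x is a single covariate such as
    a 0/1 heatwave indicator, y the daily death counts."""
    X = np.column_stack([np.ones(len(y)), x])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                 # fitted means
        z = X @ beta + (y - mu) / mu          # working response
        W = mu                                 # IRLS weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta
```

exp(beta[1]) is then interpretable as the rate ratio of mortality on heatwave days versus other days.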
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
Images in the rocket ultraviolet - Young clusters in H II regions of M83
NASA Technical Reports Server (NTRS)
Bohlin, Ralph C.; Cornett, Robert H.; Hill, Jesse K.; Stecher, Theodore P.
1990-01-01
UV images of M83 at 1540 and 2360 A reveal 18 compact sources that are associated with H II regions. E(B - V) values were estimated individually from the observed UV and optical colors and the Galactic UV extinction curve, using theoretical flux distributions. The dereddened colors are consistent with ages up to 3 × 10^6 yr. A maximum possible age of 6.5 × 10^6 yr is obtained assuming foreground reddening only. The distribution of observed colors is consistent with the Galactic reddening curve but not with enhanced far-UV extinction, as in the LMC 30 Dor curve. The H-alpha fluxes suggest either that dust within the H II regions absorbs up to 70 percent of the Lyman continuum radiation or that a similar fraction of the H-alpha flux is below the surface brightness detection limit. Cluster mass estimates depend on the range of stellar masses present but are probably in the range 10,000-100,000 solar masses.
Luczak, Susan E; Rosen, I Gary
2014-08-01
Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step for making it possible for non-mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.
2013-01-01
Background Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Methods Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons’ camps and two urban neighbourhoods with a mixture of residents and displaced) ranging in population from 1,969 to 90,547, and compared these to “gold standard” reference population figures from census or other robust methods. Results Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of <10% in four sites and 10–30% in three sites, but severely over-estimated the population in an Ethiopian camp with implausible occupancy data and two post-earthquake Haiti sites featuring dense and complex residential layout. For each site, estimates were produced in 2–5 working person-days. Conclusions In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current application. 
However, it may have insurmountable limitations in settings featuring connected buildings or shelters, a complex pattern of roofs and multi-level buildings. Based on these results, we discuss possible ways forward for the method’s development. PMID:23343099
Checchi, Francesco; Stewart, Barclay T; Palmer, Jennifer J; Grundy, Chris
2013-01-23
Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons' camps and two urban neighbourhoods with a mixture of residents and displaced) ranging in population from 1,969 to 90,547, and compared these to "gold standard" reference population figures from census or other robust methods. Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of <10% in four sites and 10-30% in three sites, but severely over-estimated the population in an Ethiopian camp with implausible occupancy data and two post-earthquake Haiti sites featuring dense and complex residential layout. For each site, estimates were produced in 2-5 working person-days. In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current application. 
However, it may have insurmountable limitations in settings featuring connected buildings or shelters, a complex pattern of roofs and multi-level buildings. Based on these results, we discuss possible ways forward for the method's development.
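The core arithmetic of the imagery-based method (structure count times mean occupancy, with a crude range from the spread of occupancy reports) can be sketched as follows; the function name and interval logic are illustrative assumptions, not the study's exact procedure:

```python
def imagery_population_estimate(structure_count, occupancy_reports):
    """Point estimate and crude range for a displaced population:
    counted residential structures times mean people-per-structure,
    bracketed by the minimum and maximum reported occupancies."""
    mean_occ = sum(occupancy_reports) / len(occupancy_reports)
    est = structure_count * mean_occ
    low = structure_count * min(occupancy_reports)
    high = structure_count * max(occupancy_reports)
    return est, low, high
```

For example, 1,000 counted structures with occupancy reports of 4, 5 and 6 people per structure yield a point estimate of 5,000 with a range of 4,000-6,000, which makes clear why implausible occupancy data (as in the Ethiopian camp above) dominate the error.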
NASA Technical Reports Server (NTRS)
Smith, Eric A.
2004-01-01
This study presents results from a multi-satellite/multi-sensor retrieval system designed to obtain the atmospheric water budget over the open ocean. A combination of hourly-sampled monthly datasets derived from the GOES-8 5-channel Imager, the TRMM TMI radiometer, and the DMSP 7-channel passive microwave radiometers (SSM/I) have been acquired for the combined Gulf of Mexico-Caribbean Sea basin. Whereas the methodology has been tested over this basin, the retrieval system is designed for portability to any open-ocean region. Algorithm modules using the different datasets to retrieve individual geophysical parameters needed in the water budget equation are designed in a manner that takes advantage of the high temporal resolution of the GOES-8 measurements, as well as the physical relationships inherent to the TRMM and SSM/I passive microwave measurements in conjunction with water vapor, cloud liquid water, and rainfall. The methodology consists of retrieving the precipitation, surface evaporation, and vapor-cloud water storage terms in the atmospheric water balance equation from satellite techniques, with the water vapor advection term being obtained as the residue needed for balance. Thus, the intent is to develop a purely satellite-based method for obtaining the full set of terms in the atmospheric water budget equation without requiring in situ sounding information on the wind profile. The algorithm is validated by cross-checking all the algorithm components through multiple-algorithm retrieval intercomparisons. 
A further check on the validation is obtained by directly comparing water vapor transports into the targeted basin diagnosed from the satellite algorithms to those obtained observationally from a network of land-based upper air stations that nearly uniformly surround the basin, although it is fair to say that these checks are more effective in identifying problems in estimating vapor transports from a leaky operational radiosonde network than in verifying the transport estimates determined from the satellite algorithm system. Total columnar atmospheric water budget results are presented for an extended annual cycle consisting of the months of October-97, January-98, April-98, July-98, October-98, and January 1999. These results are used to emphasize the changing relationship in E-P, as well as the varying roles of storage and advection in balancing E-P, both on daily and monthly time scales and on localized and basin space scales. Results from the algorithm-to-algorithm intercomparisons are also presented in the context of sensitivity testing to help understand the intrinsic uncertainties in evaluating the water budget terms by an all-satellite algorithm approach.
Olsson, Martin A; Söderhjelm, Pär; Ryde, Ulf
2016-06-30
In this article, the convergence of quantum mechanical (QM) free-energy simulations based on molecular dynamics simulations at the molecular mechanics (MM) level has been investigated. We have estimated relative free energies for the binding of nine cyclic carboxylate ligands to the octa-acid deep-cavity host, including the host, the ligand, and all water molecules within 4.5 Å of the ligand in the QM calculations (158-224 atoms). We use single-step exponential averaging (ssEA) and the non-Boltzmann Bennett acceptance ratio (NBB) methods to estimate QM/MM free energy with the semi-empirical PM6-DH2X method, both based on interaction energies. We show that ssEA with cumulant expansion gives a better convergence and uses half as many QM calculations as NBB, although the two methods give consistent results. With 720,000 QM calculations per transformation, QM/MM free-energy estimates with a precision of 1 kJ/mol can be obtained for all eight relative energies with ssEA, showing that this approach can be used to calculate converged QM/MM binding free energies for realistic systems and large QM partitions. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
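Single-step exponential averaging and its second-order cumulant approximation, as compared above, can be sketched for a set of MM-to-QM energy differences ΔE: the free-energy correction is ΔA = -kT·ln⟨exp(-βΔE)⟩, approximated by ⟨ΔE⟩ - (β/2)·Var(ΔE). This is a generic estimator sketch, not the authors' implementation, and the default kT value is an assumption (about 300 K, in kJ/mol):

```python
import numpy as np

def ssea(dE, kT=2.494):
    """Single-step exponential averaging of MM->QM energy differences
    (in kJ/mol) and its second-order cumulant approximation."""
    beta = 1.0 / kT
    # Zwanzig exponential average: dA = -kT ln < exp(-beta dE) >
    exp_avg = -kT * np.log(np.mean(np.exp(-beta * dE)))
    # second-order cumulant expansion: <dE> - (beta/2) Var(dE)
    cumulant = dE.mean() - 0.5 * beta * dE.var()
    return exp_avg, cumulant
```

For Gaussian-distributed ΔE the two expressions agree exactly in the large-sample limit; agreement between them in practice is one diagnostic of convergence.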
The finite body triangulation: algorithms, subgraphs, homogeneity estimation and application.
Carson, Cantwell G; Levine, Jonathan S
2016-09-01
The concept of a finite body Dirichlet tessellation has been extended to that of a finite body Delaunay 'triangulation' to provide a more meaningful description of the spatial distribution of nonspherical secondary phase bodies in 2- and 3-dimensional images. A finite body triangulation (FBT) consists of a network of minimum edge-to-edge distances between adjacent objects in a microstructure. From this is also obtained the characteristic object chords formed by the intersection of the object boundary with the finite body tessellation. These two sets of distances form the basis of a parsimonious homogeneity estimation. The characteristics of the spatial distribution are then evaluated with respect to the distances between objects and the distances within them. Quantitative analysis shows that more physically representative distributions can be obtained by selecting subgraphs, such as the relative neighbourhood graph and the minimum spanning tree, from the finite body tessellation. To demonstrate their potential, we apply these methods to 3-dimensional X-ray computed tomographic images of foamed cement and their 2-dimensional cross sections. The Python computer code used to estimate the FBT is made available. Other applications for the algorithm - such as porous media transport and crack-tip propagation - are also discussed. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
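The elementary quantity behind the finite body triangulation is the minimum edge-to-edge distance between two adjacent bodies. The paper's released Python code is far more general; the following is a minimal brute-force sketch assuming each body is given as an array of boundary coordinates:

```python
import numpy as np

def min_edge_to_edge(a, b):
    """Minimum Euclidean distance between two boundary point sets of
    shape (n, 2) and (m, 2). Brute force; adequate for small bodies."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min()

# Two unit-square outlines with 2 units of clear space between facing edges
sq = np.array([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])
gap = min_edge_to_edge(sq, sq + np.array([3.0, 0.0]))
```

Repeating this over all adjacent pairs in the tessellation yields the FBT edge set, from which subgraphs such as the relative neighbourhood graph or minimum spanning tree can be selected.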
Calibrated Tully-fisher Relations For Improved Photometric Estimates Of Disk Rotation Velocities
NASA Astrophysics Data System (ADS)
Reyes, Reinabelle; Mandelbaum, R.; Gunn, J. E.; Pizagno, J.
2011-01-01
We present calibrated scaling relations (also referred to as Tully-Fisher relations or TFRs) between rotation velocity and photometric quantities (absolute magnitude, stellar mass, and synthetic magnitude, a linear combination of absolute magnitude and color) of disk galaxies at z ≈ 0.1. First, we selected a parent disk sample of 170,000 galaxies from SDSS DR7, with redshifts between 0.02 and 0.10 and r-band absolute magnitudes between -18.0 and -22.5. Then, we constructed a child disk sample of 189 galaxies that span the parameter space in absolute magnitude, color, and disk size covered by the parent sample, and for which we have obtained kinematic data. Long-slit spectroscopy was obtained from the Dual Imaging Spectrograph (DIS) at the Apache Point Observatory 3.5 m telescope for 99 galaxies, and from Pizagno et al. (2007) for 95 galaxies (five have repeat observations). We find the best photometric estimator of disk rotation velocity to be a synthetic magnitude with a color correction that is consistent with the Bell et al. (2003) color-based stellar mass ratio. The improved rotation velocity estimates have a wide range of scientific applications; in particular, in combination with weak lensing measurements, they enable us to constrain the ratio of optical-to-virial velocity in disk galaxies.
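A synthetic-magnitude TFR of the kind described amounts to a linear least-squares fit of log rotation velocity against absolute magnitude and color. The sketch below uses illustrative variable names and coefficient values, not the paper's calibration:

```python
import numpy as np

def fit_tfr(log_v, mag, color):
    """Least-squares fit of log10(V) = a + b*M + c*(color), i.e. a TFR
    against a synthetic magnitude combining M and a color correction."""
    X = np.column_stack([np.ones_like(mag), mag, color])
    coef, *_ = np.linalg.lstsq(X, log_v, rcond=None)
    return coef

# Noise-free synthetic check: known coefficients should be recovered
rng = np.random.default_rng(1)
mag = rng.uniform(-22.5, -18.0, 200)       # r-band absolute magnitude
color = rng.uniform(0.3, 0.9, 200)         # e.g. g-r color
true = np.array([0.1, -0.13, 0.3])         # hypothetical a, b, c
log_v = true[0] + true[1] * mag + true[2] * color
coef = fit_tfr(log_v, mag, color)
```

In practice the fit would also model intrinsic scatter and measurement errors in both axes, which a plain least-squares solve ignores.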
Passive characterization of hydrofracture properties using signals from the hydraulic pumps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rector, J.W. III; Dong, Qichen
1995-12-31
In this study we utilize conical shear wave arrivals recorded in geophone observation wells to characterize a hydrofracture performed in the South Belridge Diatomite oil field. The conical wave arrivals are initially created by the hydraulic pumps on the surface, which send tube waves down the treatment borehole. Since the tube wave velocity in the Diatomite is greater than the shear formation velocity (the shear velocity in the diatomite is about 2,200 ft/s), conical shear waves are radiated into the formation by the tube waves traveling down the treatment borehole. We use the decrease in amplitude of the tube wave as it passes through the fracture zone to image changes in hydraulic conductivity of the fracture. By combining this information with estimates of the fracture height we obtain estimates of fracture width changes over time using the model of Tang and Cheng (1993). We find an excellent qualitative agreement between tube wave attenuation and pump pressure over time. Fracture widths estimated from the Tang and Cheng model appear to be consistent with the volume of injected fluid and the known length of the hydrofracture. Provided a monitor well can be instrumented, this technique holds potential for obtaining a relatively inexpensive real-time characterization of hydrofracs.
Kawaguchi, A; Linde, L M; Imachi, T; Mizuno, H; Akutsu, H
1983-12-01
To estimate the left atrial volume (LAV) and pulmonary blood flow in patients with congenital heart disease (CHD), we employed two-dimensional echocardiography (TDE). The LAV was measured in dimensions other than those obtained in conventional M-mode echocardiography (M-mode echo). Mathematical and geometrical models for LAV calculation using the standard long-axis, short-axis and apical four-chamber planes were devised and found to be reliable in a preliminary study using porcine heart preparations, although length (10%), area (20%) and volume (38%) were significantly and consistently underestimated with echocardiography. These models were then applied and correlated with angiocardiograms (ACG) in 25 consecutive patients with suspected CHD. In terms of the estimation of the absolute LAV, accuracy seemed commensurate with the number of dimensions measured. The correlation between data obtained by TDE and ACG varied with changing hemodynamics such as cardiac cycle, absolute LAV and the presence or absence of volume load. The left atrium was found to become spherical and progressively underestimated with TDE at ventricular end-systole, at larger LAV and with increased volume load. Since this tendency became less pronounced when additional dimensions were measured, reliable estimation of the absolute LAV and volume load was possible when 2 or 3 dimensions were measured. Among the calculation models depending on 2- or 3-dimensional measurements, there was only a small difference in accuracy and predictability, although the algorithm used varied from one model to another. This suggests that accurate cross-sectional area measurement, rather than any particular algorithm, is critically important for volume estimation. Cross-sectional area measurement by TDE integrated into a three-dimensional equivalent allowed a reliable estimate of the LAV or volume load in a variety of hemodynamic situations where M-mode echo was not reliable.
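One standard geometrical model of the kind described, combining two orthogonal cross-sectional areas and a long-axis length into a chamber volume, is the biplane area-length formula. A hedged sketch (the paper's own models may differ in detail):

```python
import math

def biplane_area_length_volume(a1, a2, length):
    """Biplane area-length ellipsoid model: V = 8*A1*A2 / (3*pi*L).
    a1, a2 are areas (cm^2) in two orthogonal planes, length is the
    long-axis length (cm); the result is a volume in mL."""
    return 8.0 * a1 * a2 / (3.0 * math.pi * length)

# Consistency check against a sphere of radius 2 cm:
# A1 = A2 = pi*r^2 and L = 2r should give exactly (4/3)*pi*r^3
r = 2.0
v = biplane_area_length_volume(math.pi * r**2, math.pi * r**2, 2 * r)
```

The sphere check illustrates the abstract's point that a spherical atrium is the geometry these area-based models handle best; deviations from it drive the underestimation reported.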
Estimating the Value of Life, Injury, and Travel Time Saved Using a Stated Preference Framework.
Niroomand, Naghmeh; Jenkins, Glenn P
2016-06-01
The incidence of fatality from automobile accidents in North Cyprus over the period 2010-2014 is 2.75 times greater than the average for the EU. With the prospect of North Cyprus entering the EU, many investments will need to be undertaken to improve road safety in order to reach EU benchmarks. The objective of this study is to provide local estimates of the value of a statistical life and injury along with the value of time savings. These are among the parameter values needed for the evaluation of the change in the expected incidence of automotive accidents and time savings brought about by such projects. In this study we conducted a stated choice experiment to identify the preferences and tradeoffs of automobile drivers in North Cyprus for improved travel times, travel costs, and safety. The choice of route was examined using mixed logit models to obtain the marginal utilities associated with each attribute of the routes that consumers choose. These estimates were used to assess the individuals' willingness to pay (WTP) to avoid fatalities and injuries and to save travel time. We then used the results to obtain community-wide estimates of the value of a statistical life (VSL) saved, the value of injury (VI) prevented, and the value per hour of travel time saved. The estimates for the VSL range from €315,293 to €1,117,856 and the estimates of VI from €5,603 to €28,186. These values are consistent, after adjusting for differences in incomes, with the median results of similar studies done for EU countries. Copyright © 2016 Elsevier Ltd. All rights reserved.
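In choice models of this kind, WTP measures are ratios of marginal utilities, and the VSL scales the per-trip WTP for a small risk reduction up to one statistical life. A minimal sketch with hypothetical coefficients (not the study's estimates):

```python
def marginal_wtp(beta_attr, beta_cost):
    """Willingness to pay for a one-unit attribute change: the marginal
    rate of substitution beta_attr / beta_cost (euros per unit)."""
    return beta_attr / beta_cost

# Hypothetical mixed-logit coefficients (illustration only)
beta_cost = -0.10    # utility per euro of route cost
beta_time = -0.02    # utility per minute of travel time
beta_risk = -0.05    # utility per fatality per 100,000 trips

vot_per_hour = marginal_wtp(beta_time, beta_cost) * 60     # euros/hour
vsl = marginal_wtp(beta_risk, beta_cost) * 100_000         # euros/statistical life
```

Because both coefficients are negative (cost and risk reduce utility), the ratios are positive euro amounts; the factor 100,000 converts WTP per unit of the risk attribute into a value per whole statistical life.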
Cancer incidence attributable to inadequate physical activity in Alberta in 2012
Brenner, Darren R.; Poirier, Abbey E.; Grundy, Anne; Khandwala, Farah; McFadden, Alison; Friedenreich, Christine M.
2017-01-01
Background: Physical inactivity has been consistently associated with increased risk of colorectal, endometrial, breast (in postmenopausal women), prostate, lung and ovarian cancers. The objective of the current analysis was to estimate the proportion and absolute number of site-specific cancer cases attributable to inadequate physical activity in Alberta in 2012. Methods: We used population attributable risks to estimate the proportion of each site-specific cancer attributable to inactivity. Relative risk estimates were obtained from the epidemiological literature, and prevalence estimates were calculated with the use of data from the Canadian Community Health Survey cycle 2.1 (2003). Respondents who acquired 1.5-2.9 kcal/kg per day and less than 1.5 kcal/kg per day of physical activity were classified as moderately active and inactive, respectively, and both levels were considered inadequate for mitigating cancer risks. We obtained age-, sex- and site-specific cancer incidence data from the Alberta Cancer Registry for 2012. Results: About 59%-75% of men and 69%-78% of women did not engage in adequate physical activity. Overall, 13.8% of cancers across all associated cancers were estimated to be attributable to inadequate physical activity, representing 7.2% of all cancers diagnosed in Alberta in 2012. Suboptimal levels of physical activity had a greater impact among women: the proportion of all associated cancers attributable to inadequate physical activity was 18.3% for women and 9.9% for men. Interpretation: A substantial proportion of cancer cases diagnosed in Alberta were estimated to be attributable to inadequate physical activity. With the high prevalence of physical inactivity among adults in the province, developing strategies to increase physical activity levels could have a notable impact on reducing future cancer burden in Alberta. PMID:28468830
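The population attributable risk calculation described is Levin's formula extended to multiple mutually exclusive exposure levels. A sketch with illustrative prevalences and relative risks (not the Alberta figures):

```python
def paf(prevalences, relative_risks):
    """Population attributable fraction over mutually exclusive exposure
    levels: sum p_i*(RR_i - 1) / (1 + sum p_i*(RR_i - 1))."""
    excess = sum(p * (rr - 1.0) for p, rr in zip(prevalences, relative_risks))
    return excess / (1.0 + excess)

# Two inadequate-activity levels (moderately active, inactive) vs. active,
# with hypothetical prevalences and relative risks
frac = paf([0.30, 0.40], [1.1, 1.25])
attributable_cases = frac * 1000   # of 1,000 hypothetical incident cases
```

Multiplying the fraction by the observed site-specific case counts gives the absolute number of attributable cases, as in the abstract.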
NASA Astrophysics Data System (ADS)
Lumme, E.; Pomoell, J.; Kilpua, E. K. J.
2017-12-01
Estimates of the photospheric magnetic, electric, and plasma velocity fields are essential for studying the dynamics of the solar atmosphere, for example through the derivative quantities of Poynting and relative helicity flux and using the fields to obtain the lower boundary condition for data-driven coronal simulations. In this paper we study the performance of a data processing and electric field inversion approach that requires only high-resolution and high-cadence line-of-sight or vector magnetograms, which we obtain from the Helioseismic and Magnetic Imager (HMI) onboard Solar Dynamics Observatory (SDO). The approach does not require any photospheric velocity estimates, and the lacking velocity information is compensated for using ad hoc assumptions. We show that the free parameters of these assumptions can be optimized to reproduce the time evolution of the total magnetic energy injection through the photosphere in NOAA AR 11158, when compared to recent state-of-the-art estimates for this active region. However, we find that the relative magnetic helicity injection is reproduced poorly, reaching at best a modest underestimation. We also discuss the effect of some of the data processing details on the results, including the masking of the noise-dominated pixels and the tracking method of the active region, neither of which has received much attention in the literature so far. In most cases the effect of these details is small, but when the optimization of the free parameters of the ad hoc assumptions is considered, a consistent use of the noise mask is required. The results found in this paper imply that the data processing and electric field inversion approach that uses only the photospheric magnetic field information offers a flexible and straightforward way to obtain photospheric magnetic and electric field estimates suitable for practical applications such as coronal modeling studies.
Linden, Ariel; Yarnold, Paul R
2016-12-01
Program evaluations often utilize various matching approaches to emulate the randomization process for group assignment in experimental studies. Typically, the matching strategy is implemented, and then covariate balance is assessed before estimating treatment effects. This paper introduces a novel analytic framework utilizing a machine learning algorithm called optimal discriminant analysis (ODA) for assessing covariate balance and estimating treatment effects, once the matching strategy has been implemented. This framework holds several key advantages over the conventional approach: application to any variable metric and number of groups; insensitivity to skewed data or outliers; and use of accuracy measures applicable to all prognostic analyses. Moreover, ODA accepts analytic weights, thereby extending the methodology to any study design where weights are used for covariate adjustment or more precise (differential) outcome measurement. One-to-one matching on the propensity score was used as the matching strategy. Covariate balance was assessed using standardized difference in means (conventional approach) and measures of classification accuracy (ODA). Treatment effects were estimated using ordinary least squares regression and ODA. Using empirical data, ODA produced results highly consistent with those obtained via the conventional methodology for assessing covariate balance and estimating treatment effects. When ODA is combined with matching techniques within a treatment effects framework, the results are consistent with conventional approaches. However, given that it provides additional dimensions and robustness to the analysis versus what can currently be achieved using conventional approaches, ODA offers an appealing alternative. © 2016 John Wiley & Sons, Ltd.
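The conventional balance diagnostic the abstract contrasts with ODA, the standardized difference in means, is straightforward to compute; a pure-Python sketch (the ODA side requires the dedicated software):

```python
import math

def standardized_difference(x_treat, x_ctrl):
    """Standardized difference in means for one covariate after matching:
    (m_t - m_c) / sqrt((s_t^2 + s_c^2) / 2). An absolute value below
    about 0.1 is a common rule of thumb for acceptable balance."""
    n_t, n_c = len(x_treat), len(x_ctrl)
    m_t = sum(x_treat) / n_t
    m_c = sum(x_ctrl) / n_c
    v_t = sum((x - m_t) ** 2 for x in x_treat) / (n_t - 1)
    v_c = sum((x - m_c) ** 2 for x in x_ctrl) / (n_c - 1)
    return (m_t - m_c) / math.sqrt((v_t + v_c) / 2.0)

# Hypothetical matched samples of one covariate (e.g. age)
d = standardized_difference([50, 52, 54, 56], [49, 51, 53, 55])
```

Unlike a t-test, the standardized difference does not depend on sample size, which is why it is preferred for balance checking after matching.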
Temperature and emissivity measurements at the sapphire single crystal fiber growth process
NASA Astrophysics Data System (ADS)
Bufetova, G. A.; Rusanov, S. Ya.; Seregin, V. F.; Pyrkov, Yu. N.; Tsvetkov, V. B.
2017-12-01
We present a new method for evaluating the absorption coefficient of the crystal melt around the phase transition zone in the spectral range of semitransparency. The emissivity distribution across the crystallization front of the sapphire crystal fiber was measured during the quasi-stationary laser-heated pedestal growth (LHPG) process (Fejer et al., 1984; Feigelson, 1986), and data for the solid state, the melt and the phase transition zone (melt-solid interface) were obtained. The sapphire melt absorption coefficient was estimated to be 14 ± 2 cm-1 in the spectral range 1-1.4 μm around the melting point. This is consistent with data obtained by other methods. The method can be applied to determine the absorption coefficient of other materials.
NASA Astrophysics Data System (ADS)
Tsventoukh, M. M.
2018-05-01
A model has been developed for the explosive electron emission cell pulse of a vacuum discharge cathode spot that describes the ignition and extinction of the explosive pulse. The pulse is initiated due to hydrodynamic tearing of a liquid-metal jet which propagates from the preceding cell crater boundary and draws the ion current from the plasma produced by the preceding explosion. Once the jet neck has been resistively heated to a critical temperature (˜1 eV), the plasma starts expanding and decreasing in density, which corresponds to the extinction phase. Numerical and analytical solutions have been obtained that describe both the time behavior of the pulse plasma parameters and their average values. For the cell plasma, the momentum per transferred charge has been estimated to be some tens of g cm/(s C), which is consistent with the known measurements of ion velocity, ion erosion rate, and specific recoil force. This supports the model of the pressure-gradient-driven plasma acceleration mechanism for the explosive cathode spot cells. The ohmic electric field within the explosive current-carrying plasma has been estimated to be some tens of kV/cm, which is consistent with the known experimental data on cathode potential fall and explosive cell plasma size. This supports the model that assumes the ohmic nature of the cathode potential fall in a vacuum discharge.
Estimation of surface water storage in the Congo Basin
NASA Astrophysics Data System (ADS)
O'Loughlin, F.; Neal, J. C.; Schumann, G.; Beighley, E.; Bates, P. D.
2015-12-01
For many large river basins, especially in Africa, the lack of access to in-situ measurements and the large areas involved make modelling of water storage and runoff difficult. However, remote sensing datasets are useful alternative sources of information which overcome these issues. In this study, we focus on the Congo Basin and, in particular, the cuvette centrale. Despite being the second largest river basin on Earth and containing a large percentage of the world's tropical wetlands and forest, little is known about this basin's hydrology. Combining discharge estimates from in-situ measurements and outputs from a hydrological model, we build the first large-scale hydrodynamic model for this region to estimate the volume of water stored in the corresponding floodplains and to investigate how important these floodplains are to the behaviour of the overall system. This hydrodynamic model covers an area of over 1.6 million square kilometres and 13 thousand kilometres of rivers, and is calibrated to water surface heights at 33 virtual gauging stations obtained from ESA's Envisat satellite. Our results show that the use of different sources of discharge estimates and calibration via Envisat observations can produce accurate water levels and downstream discharges. Our model produced unbiased (bias = -0.08 m), sub-metre root mean square error (RMSE = 0.862 m) water levels with a Nash-Sutcliffe efficiency greater than 80% (NSE = 0.81). The spatial-temporal variations in our simulated inundated areas are consistent with the pattern obtained from satellites. Overall, we find a high correlation coefficient (R = 0.88) between our modelled inundated areas and those estimated from satellites.
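The skill scores quoted (bias, RMSE, Nash-Sutcliffe efficiency and correlation) are standard and can be computed as follows; a generic sketch, not the authors' code:

```python
import numpy as np

def skill_scores(sim, obs):
    """Bias, RMSE, Nash-Sutcliffe efficiency and Pearson correlation
    between simulated and observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    err = sim - obs
    bias = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    nse = 1.0 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
    r = np.corrcoef(sim, obs)[0, 1]
    return bias, rmse, nse, r

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores_perfect = skill_scores(obs, obs)        # ideal: 0, 0, 1, 1
scores_offset = skill_scores(obs + 0.5, obs)   # constant +0.5 m offset
```

Note that a constant offset degrades bias, RMSE and NSE but leaves the correlation at 1, which is why the abstract reports all four.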
NASA Astrophysics Data System (ADS)
Haddag, B.; Kagnaya, T.; Nouari, M.; Cutard, T.
2013-01-01
Modelling machining operations allows estimating cutting parameters which are difficult to obtain experimentally, in particular quantities characterizing the tool-workpiece interface. Temperature is one of these quantities; it affects tool wear, so its estimation is important. This study deals with a new modelling strategy, based on two steps of calculation, for analysis of the heat transfer into the cutting tool. Unlike classical methods, which consider only the cutting tool and apply an approximate heat flux at the cutting face estimated from experimental data (e.g. measured cutting force, cutting power), the proposed approach consists of two successive 3D finite element calculations and is fully independent of experimental measurements; only the definition of the behaviour of the tool-workpiece couple is necessary. The first is a 3D thermomechanical modelling of the chip formation process, which allows estimating cutting forces, chip morphology and its flow direction. The second calculation is a 3D thermal modelling of the heat diffusion into the cutting tool, using an adequate thermal loading (applied uniform or non-uniform heat flux). This loading is estimated using quantities obtained from the first-step calculation, such as contact pressure, sliding velocity distributions and contact area. Comparisons, on the one hand, between experimental data and the first calculation and, on the other hand, between temperatures measured with embedded thermocouples and the second calculation show good agreement in terms of cutting forces, chip morphology and cutting temperature.
Gyrokinetic modelling of the quasilinear particle flux for plasmas with neutral-beam fuelling
NASA Astrophysics Data System (ADS)
Narita, E.; Honda, M.; Nakata, M.; Yoshida, M.; Takenaga, H.; Hayashi, N.
2018-02-01
A quasilinear particle flux is modelled based on gyrokinetic calculations. The particle flux is estimated by determining factors, namely, coefficients of off-diagonal terms and a particle diffusivity. In this paper, the methodology to estimate the factors is presented using a subset of JT-60U plasmas. First, the coefficients of off-diagonal terms are estimated by linear gyrokinetic calculations. Next, to obtain the particle diffusivity, a semi-empirical approach is taken. Most experimental analyses of particle transport have assumed that turbulent particle fluxes are zero in the core region. On the other hand, even in the stationary state, the plasmas in question have a finite turbulent particle flux due to neutral-beam fuelling. By combining estimates of the experimental turbulent particle flux and the coefficients of off-diagonal terms calculated earlier, the particle diffusivity is obtained. The particle diffusivity should reflect the saturation amplitude of instabilities. The particle diffusivity is investigated in terms of the effects of the linear instability and the linear zonal-flow response, and it is found that a formula including these effects roughly reproduces the particle diffusivity. The developed framework for prediction of the particle flux is flexible enough to add terms neglected in the current model. The methodology to estimate the quasilinear particle flux has a computational cost low enough that a database consisting of the resultant coefficients of off-diagonal terms and the particle diffusivity can be constructed to train a neural network. The development of the methodology is the first step towards a neural-network-based particle transport model for fast prediction of the particle flux.
NASA Astrophysics Data System (ADS)
Chelle-Michou, Cyril; Chiaradia, Massimo
2017-12-01
Chlorine and sulfur are of paramount importance for supporting the transport and deposition of ore metals at magmatic-hydrothermal systems such as the Coroccohuayco Fe-Cu-Au porphyry-skarn deposit, Peru. Here, we used recent partitioning models to determine the Cl and S concentration of the melts from the Coroccohuayco magmatic suite using apatite and amphibole chemical analyses. The pre-mineralization gabbrodiorite complex hosts S-poor apatite, while the syn- and post-ore dacitic porphyries host S-rich apatite. Our apatite data on the Coroccohuayco magmatic suite are consistent with an increasing oxygen fugacity (from the gabbrodiorite complex to the porphyries) causing the dominant sulfur species to shift from S2- to S6+ at the upper crustal pressure where the magmas were emplaced. We suggest that this change in sulfur speciation could have favored S degassing, rather than its sequestration in magmatic sulfides. Using available partitioning models for apatite from the porphyries, pre-degassing S melt concentration was 20-200 ppm. Estimates of absolute magmatic Cl concentrations using amphibole and apatite gave highly contrasting results. Cl melt concentrations obtained from apatite (0.60 wt% for the gabbrodiorite complex; 0.2-0.3 wt% for the porphyries) seem much more reasonable than those obtained from amphibole, which are very low (0.37 wt% for the gabbrodiorite complex; 0.10 wt% for the porphyries). In turn, relative variations of the Cl melt concentrations obtained from amphibole during magma cooling are compatible with previous petrological constraints on the Coroccohuayco magmatic suite. This confirms that the gabbrodioritic magma was initially fluid-undersaturated upon emplacement, and that the gabbrodiorite and the pluton rooting the porphyry stocks and dikes were emplaced and degassed at 100-200 MPa.
Finally, mass balance constraints on S, Cu and Cl were used to estimate the minimum volume of magma required to form the Coroccohuayco deposit. These three estimates are remarkably consistent among each other (ca. 100 km3) and suggest that the Cl melt concentration is at least as critical as that of Cu and S to form an economic mineralization.
Almalik, Osama; Nijhuis, Michiel B; van den Heuvel, Edwin R
2014-01-01
Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear regression analysis combining all data from multiple storage conditions was recently proposed in the literature when variances are homogeneous across storage conditions. The combined analysis is expected to perform better than the separate analysis per storage condition, since pooling data would lead to an improved estimate of the variation and higher numbers of degrees of freedom, but this is not evident for shelf-life estimation. Indeed, the two approaches treat the observed initial batch results, the intercepts in the model, and poolability of batches differently, which may eliminate or reduce the expected advantage of the combined approach with respect to the separate approach. Therefore, a simulation study was performed to compare the distribution of simulated shelf-life estimates on several characteristics between the two approaches and to quantify the difference in shelf-life estimates. In general, the combined statistical analysis does estimate the true shelf life more consistently and precisely than the analysis per storage condition, but it did not outperform the separate analysis in all circumstances.
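The per-condition shelf-life estimate referred to is the time at which the one-sided 95% confidence bound on the fitted mean regression line crosses the specification limit. The sketch below is a simplified single-batch illustration that uses a normal quantile in place of the t quantile an ICH Q1E analysis would use; the data and the specification limit are hypothetical:

```python
import numpy as np
from statistics import NormalDist

def shelf_life(t, y, spec, conf=0.95):
    """Earliest time at which the one-sided lower confidence bound of the
    fitted mean y(t) = a + b*t falls below `spec` (degrading attribute).
    Normal quantile used as an approximation to the t quantile."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t)
    b, a = np.polyfit(t, y, 1)                    # slope, intercept
    resid = y - (a + b * t)
    s2 = (resid ** 2).sum() / (n - 2)             # residual variance
    z = NormalDist().inv_cdf(conf)
    sxx = ((t - t.mean()) ** 2).sum()
    grid = np.linspace(t.min(), 10 * t.max(), 2000)
    se = np.sqrt(s2 * (1.0 / n + (grid - t.mean()) ** 2 / sxx))
    lower = a + b * grid - z * se                 # lower bound on the mean line
    below = grid[lower < spec]
    return below[0] if below.size else grid[-1]

# Hypothetical assay (%) over months, specification limit 95%
months = [0, 3, 6, 9, 12, 18]
assay = [100.1, 99.6, 99.2, 98.5, 98.1, 97.0]
life = shelf_life(months, assay, spec=95.0)
```

The combined analysis discussed in the abstract would instead pool data across storage conditions (and batches, where poolable) before computing the same kind of crossing time, gaining degrees of freedom in the variance estimate.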
Trajectory prediction for ballistic missiles based on boost-phase LOS measurements
NASA Astrophysics Data System (ADS)
Yeddanapudi, Murali; Bar-Shalom, Yaakov
1997-10-01
This paper addresses the problem of the estimation of the trajectory of a tactical ballistic missile using line of sight (LOS) measurements from one or more passive sensors (typically satellites). The major difficulties of this problem include: the estimation of the unknown time of launch, incorporation of (inaccurate) target thrust profiles to model the target dynamics during the boost phase and an overall ill-conditioning of the estimation problem due to poor observability of the target motion via the LOS measurements. We present a robust estimation procedure based on the Levenberg-Marquardt algorithm that provides both the target state estimate and error covariance taking into consideration the complications mentioned above. An important consideration in the defense against tactical ballistic missiles is the determination of the target position and error covariance at the acquisition range of a surveillance radar in the vicinity of the impact point. We present a systematic procedure to propagate the target state and covariance to a nominal time, when it is within the detection range of a surveillance radar, to obtain a cueing volume. Monte Carlo simulation studies on typical single and two sensor scenarios indicate that the proposed algorithms are accurate in terms of the estimates and the estimator calculated covariances are consistent with the errors.
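The core of such a batch estimator, a damped Gauss-Newton (Levenberg-Marquardt) iteration over angle residuals, can be sketched in two dimensions with bearings-only measurements; this is a toy stand-in for the satellite LOS geometry and boost-phase dynamics, not the paper's algorithm:

```python
import numpy as np

def lm_bearings(sensors, bearings, x0, lam=1e-3, iters=100):
    """Levenberg-Marquardt estimate of a 2-D emitter position from
    bearing (azimuth) measurements taken at known sensor positions."""
    x = np.asarray(x0, float).copy()
    for _ in range(iters):
        dx = x[0] - sensors[:, 0]
        dy = x[1] - sensors[:, 1]
        pred = np.arctan2(dy, dx)
        r = pred - bearings
        r = np.arctan2(np.sin(r), np.cos(r))        # wrap residuals to [-pi, pi]
        d2 = dx**2 + dy**2
        J = np.column_stack([-dy / d2, dx / d2])    # d(atan2)/d(x, y)
        H = J.T @ J + lam * np.eye(2)               # damped normal equations
        x -= np.linalg.solve(H, J.T @ r)
    return x

sensors = np.array([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)])
truth = np.array([4.0, 7.0])
bearings = np.arctan2(truth[1] - sensors[:, 1], truth[0] - sensors[:, 0])
est = lm_bearings(sensors, bearings, x0=(1.0, 1.0))
```

The damping term `lam` regularizes the ill-conditioned steps the abstract warns about; the inverse of the final `J.T @ J` would serve as an approximate error covariance.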
A large-aperture low-cost hydrophone array for tracking whales from small boats.
Miller, B; Dawson, S
2009-11-01
A passive sonar array designed for tracking diving sperm whales in three dimensions from a single small vessel is presented, and the advantages and limitations of operating this array from a 6 m boat are described. The system consists of four free floating buoys, each with a hydrophone, built-in recorder, and global positioning system receiver (GPS), and one vertical stereo hydrophone array deployed from the boat. Array recordings are post-processed onshore to obtain diving profiles of vocalizing sperm whales. Recordings are synchronized using a GPS timing pulse recorded onto each track. Sensitivity analysis based on hyperbolic localization methods is used to obtain probability distributions for the whale's three-dimensional location for vocalizations received by at least four hydrophones. These localizations are compared to those obtained via isodiachronic sequential bound estimation. Results from deployment of the system around a sperm whale in the Kaikoura Canyon in New Zealand are shown.
Oceanographic and meteorological research based on the data products of SEASAT
NASA Technical Reports Server (NTRS)
Pierson, W. J. (Principal Investigator)
1983-01-01
De-aliased SEASAT SASS vector winds obtained during the GOASEX (Gulf of Alaska SEASAT Experiment) program were processed to obtain superobservations centered on a one degree by one degree grid. The results provide values for the combined effects of mesoscale variability and communication noise on the individual SASS winds. Each grid point of the synoptic field provides the mean synoptic east-west and north-south wind components plus estimates of the standard deviations of these means. These superobservation winds are then processed further to obtain synoptic-scale vector wind stress fields, the horizontal divergence of the wind, the curl of the wind stress and the vertical velocity at 200 m above the sea surface, each with appropriate standard deviations for each grid point value. The resulting fields appear to be consistent over large distances and to agree with, for example, geostationary cloud images obtained concurrently. Their quality is far superior to that of analyses based on conventional data.
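The derived fields mentioned (horizontal divergence of the wind and curl of the wind stress) reduce to finite-difference estimates on the gridded components. A minimal numpy sketch with a solid-body-rotation test field, not the SEASAT processing code:

```python
import numpy as np

def div_curl(u, v, dx, dy):
    """Horizontal divergence and curl (vertical vorticity component) of a
    vector field sampled on a regular grid, fields indexed [y, x],
    using central differences in the interior."""
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    return dudx + dvdy, dvdx - dudy

# Solid-body rotation u = -y, v = x: divergence 0, curl (vorticity) 2
y, x = np.mgrid[0:5, 0:5].astype(float)
div, curl = div_curl(-y, x, dx=1.0, dy=1.0)
```

On a latitude-longitude grid the spacings `dx`, `dy` would be metric distances per degree (with `dx` shrinking as the cosine of latitude), and error propagation from the component standard deviations would give the per-gridpoint uncertainties the abstract describes.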
NASA Astrophysics Data System (ADS)
Holmgren, J.; Tulldahl, H. M.; Nordlöf, J.; Nyström, M.; Olofsson, K.; Rydell, J.; Willén, E.
2017-10-01
A system was developed for automatic estimation of tree positions and stem diameters. The sensor trajectory was first estimated using a positioning system that consists of a low precision inertial measurement unit supported by image matching with data from a stereo-camera. The initial estimate of the sensor trajectory was then calibrated by adjustments of the sensor pose using the laser scanner data. Special features suitable for forest environments were used to solve the correspondence and matching problems. Tree stem diameters were estimated for stem sections using laser data from individual scanner rotations and were then used for calibration of the sensor pose. A segmentation algorithm was used to associate stem sections with individual tree stems. The stem diameter estimates of all stem sections associated with the same tree stem were then combined for estimation of stem diameter at breast height (DBH). The system was validated on four 20 m radius circular plots, and manually measured trees were automatically linked to trees detected in laser data. The DBH could be estimated with a RMSE of 19 mm (6%) and a bias of 8 mm (3%). The calibrated sensor trajectory and the combined use of circle fits from individual scanner rotations made it possible to obtain reliable DBH estimates also with a low precision positioning system.
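The per-rotation stem-diameter step rests on fitting a circle to the laser returns from a stem section. A common linear (Kåsa) least-squares circle fit can be sketched as follows; this is an illustration of the technique, not the authors' algorithm:

```python
import numpy as np

def fit_circle(x, y):
    """Kasa circle fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c linearly,
    giving centre (a, b) and radius sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return a, b, np.sqrt(c + a**2 + b**2)

# Noise-free arc of a stem cross section: centre (2, 3) m, radius 0.14 m
theta = np.linspace(0.2, 2.8, 40)
cx, cy, r = fit_circle(2 + 0.14 * np.cos(theta), 3 + 0.14 * np.sin(theta))
dbh_mm = 2 * r * 1000.0   # stem diameter in millimetres
```

A scanner only sees part of the stem circumference, which is why the example fits an arc rather than a full circle; combining fits from many rotations, as the abstract describes, averages down the per-arc error.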
NASA Astrophysics Data System (ADS)
Manzoni, Francesco; Ryde, Ulf
2018-03-01
We have calculated relative binding affinities for eight tetrafluorophenyl-triazole-thiogalactoside inhibitors of galectin-3 with the alchemical free-energy perturbation approach. We obtain a mean absolute deviation from experimental estimates of only 2-3 kJ/mol and a correlation coefficient (R²) of 0.5-0.8 for seven relative affinities spanning a range of up to 11 kJ/mol. We also studied the effect of using different methods to calculate the charges of the inhibitor and different sizes of the perturbed group (the atoms that are described by soft-core potentials and are allowed to have differing coordinates). However, the various approaches gave rather similar results and it is not possible to point out one approach as consistently and significantly better than the others. Instead, we suggest that such small and reasonable variations in the computational method can be used to check how stable the calculated results are and to obtain a more accurate estimate of the uncertainty than if performing only one calculation with a single computational setup.
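The reported accuracy metrics (mean absolute deviation and R²) can be computed as follows; this is a generic sketch with made-up affinity values, not the authors' analysis pipeline:

```python
def mad_and_r2(computed, experimental):
    """Mean absolute deviation and squared Pearson correlation (R^2)
    between computed and experimental relative binding affinities (kJ/mol)."""
    n = len(computed)
    mad = sum(abs(c - e) for c, e in zip(computed, experimental)) / n
    mc = sum(computed) / n
    me = sum(experimental) / n
    cov = sum((c - mc) * (e - me) for c, e in zip(computed, experimental))
    vc = sum((c - mc) ** 2 for c in computed)
    ve = sum((e - me) ** 2 for e in experimental)
    return mad, cov * cov / (vc * ve)
```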
Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.
Herzallah, Randa
2015-03-01
Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper, a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf is presented, emphasising how uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm, and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.
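For intuition, the Kullback-Leibler divergence being minimised has a closed form in the simplest (univariate Gaussian) case; this sketch only illustrates the criterion, not the MDN-based controller itself:

```python
import math

def kl_gaussian(mu1, s1, mu2, s2):
    """Closed-form Kullback-Leibler divergence KL(p || q) between two
    univariate Gaussian pdfs p = N(mu1, s1^2) and q = N(mu2, s2^2)."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5
```

When the actual pdf matches the ideal one the divergence vanishes; shifting the mean of the actual pdf away from the ideal one increases it, which is what the controller penalises.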
Perceptual color difference metric including a CSF based on the perception threshold
NASA Astrophysics Data System (ADS)
Rosselli, Vincent; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine
2008-01-01
The study of the Human Visual System (HVS) is very useful for quantifying the quality of a picture, predicting which information will be perceived in it, and applying suitably adapted tools. The Contrast Sensitivity Function (CSF) is one of the major ways to integrate HVS properties into an imaging system. It characterizes the sensitivity of the visual system to spatial and temporal frequencies and predicts the behavior of the three channels. CSFs have commonly been constructed by estimating the detection threshold beyond which it is possible to perceive a stimulus. In this work, we developed a novel approach to spatio-chromatic CSF construction based on matching experiments to estimate the perception threshold. It consists of matching the contrast of a test stimulus with that of a reference one. The results obtained differ markedly from those of the standard approaches, as the chromatic CSFs show band-pass rather than low-pass behavior. The obtained model has been integrated into a perceptual color difference metric inspired by s-CIELAB. The metric is then evaluated with both objective and subjective procedures.
McKendrick, Jan; Gijsen, Merel; Quinn, Casey; Barber, Beth; Zhao, Zhongyun
2016-06-01
Objectives: Studies reporting healthcare resource use (HRU) for melanoma, one of the most costly cancers to treat, are limited. Using consistent, robust methodology, this study estimated HRU associated with the treatment of metastatic melanoma in eight countries. Methods: Using published literature and clinician input, treatment phases were identified: active systemic treatment (pre-progression); disease progression; best supportive care (BSC)/palliative care; and terminal care. HRU elements were identified for each phase, and estimates of the magnitude and frequency of use in clinical practice were obtained through country-specific Delphi panels comprising healthcare professionals with experience in oncology (n = 8). Results: Medical oncologists are the key care providers for patients with metastatic melanoma, although in Germany dermato-oncologists also lead care. During the active systemic treatment phase, each patient was estimated to require 0.83-2 consultations with a medical oncologist/month across countries; the median number of such assessments in 3 months was highest in Canada (range = 3.5-5) and lowest in France, the Netherlands and Spain (1). Resource use during the disease progression phase was intensive and similar across countries: all patients were estimated to consult with medical oncologists and 10-40% with a radiation oncologist; up to 40% were estimated to require a brain MRI scan. During the BSC/palliative care phase, all patients were estimated to consult with medical oncologists, and most to consult with a primary care physician (40-100%). Limitations: Panelists were from centers of excellence, thus results may not reflect care within smaller hospitals; data obtained from experts may be less variable than data from broader clinical practice. Treatments for metastatic melanoma are continually emerging, thus some elements of our work could be superseded. Conclusions: HRU estimates were substantial and varied across countries for some resources. 
These data could be used with country-specific costs to elucidate costs for the management of metastatic melanoma.
Particulate air pollution and panel studies in children: a systematic review
Ward, D; Ayres, J
2004-01-01
Aims: To systematically review the results of such studies in children, estimate summary measures of effect, and investigate potential sources of heterogeneity. Methods: Studies were identified by searching electronic databases to June 2002, including those where outcomes and particulate level measurements were made at least daily for ⩾8 weeks, and analysed using an appropriate regression model. Study results were compared using forest plots, and fixed and random effects summary effect estimates obtained. Publication bias was considered using a funnel plot. Results: Twenty two studies were identified, all except two reporting PM10 (24 hour mean) >50 µg·m⁻³. Reported effects of PM10 on PEF were widely spread and smaller than those for PM2.5 (fixed effects summary: -0.012 v -0.063 l·min⁻¹ per µg·m⁻³ rise). A similar pattern was evident for symptoms. Random effects models produced larger estimates. Overall, in between-study comparisons, panels of children with diagnosed asthma or pre-existing respiratory symptoms appeared less affected by PM10 levels than those without, and effect estimates were larger where studies were conducted in higher ozone conditions. Larger PM10 effect estimates were obtained from studies using generalised estimating equations to model autocorrelation and where results were derived by pooling subject specific regression coefficients. A funnel plot of PM10 results for PEF was markedly asymmetrical. Conclusions: The majority of identified studies indicate an adverse effect of particulate air pollution that is greater for PM2.5 than PM10. However, results show considerable heterogeneity and there is evidence consistent with publication bias, so limited confidence may be placed on summary estimates of effect. The possibility of interaction between particle and ozone effects merits further investigation, as does variability due to analytical differences that alter the interpretation of final estimates. PMID:15031404
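A fixed-effects summary of per-study effect estimates is the usual inverse-variance weighted mean; a minimal sketch with hypothetical study estimates and standard errors:

```python
import math

def fixed_effect_summary(estimates, std_errors):
    """Inverse-variance weighted fixed-effects summary estimate and its
    standard error, as used to pool per-study effect estimates."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se
```

Random-effects models add a between-study variance component to each weight, which widens the summary interval and (as noted above) tends to produce larger pooled estimates when small studies report larger effects.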
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and the estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive, but the sequence of rudder input followed by aileron input, or aileron followed by rudder, gave more consistent estimates than did rudder or aileron inputs individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
Leal Neto, Viriato; Vieira, José Wilson; Lima, Fernando Roberto de Andrade
2014-01-01
Objective This article presents a way to obtain estimates of dose in patients submitted to radiotherapy with basis on the analysis of regions of interest on nuclear medicine images. Materials and Methods A software called DoRadIo (Dosimetria das Radiações Ionizantes [Ionizing Radiation Dosimetry]) was developed to receive information about source organs and target organs, generating graphical and numerical results. The nuclear medicine images utilized in the present study were obtained from catalogs provided by medical physicists. The simulations were performed with computational exposure models consisting of voxel phantoms coupled with the Monte Carlo EGSnrc code. The software was developed with the Microsoft Visual Studio 2010 Service Pack and the project template Windows Presentation Foundation for C# programming language. Results With the mentioned tools, the authors obtained the file for optimization of Monte Carlo simulations using the EGSnrc; organization and compaction of dosimetry results with all radioactive sources; selection of regions of interest; evaluation of grayscale intensity in regions of interest; the file of weighted sources; and, finally, all the charts and numerical results. Conclusion The user interface may be adapted for use in clinical nuclear medicine as a computer-aided tool to estimate the administered activity. PMID:25741101
Use of MODIS Sensor Images Combined with Reanalysis Products to Retrieve Net Radiation in Amazonia
de Oliveira, Gabriel; Brunsell, Nathaniel A.; Moraes, Elisabete C.; Bertani, Gabriel; dos Santos, Thiago V.; Shimabukuro, Yosio E.; Aragão, Luiz E. O. C.
2016-01-01
In the Amazon region, the estimation of radiation fluxes through remote sensing techniques is hindered by the lack of ground measurements required as input to the models, as well as the difficulty of obtaining cloud-free images. Here, we assess an approach to estimate net radiation (Rn) and its components under all-sky conditions for the Amazon region through the Surface Energy Balance Algorithm for Land (SEBAL) model utilizing only remote sensing and reanalysis data. The study period comprised six years, between January 2001 and December 2006, and images from the MODIS sensor aboard the Terra satellite and GLDAS reanalysis products were utilized. The estimates were evaluated with flux tower measurements within the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) project. Comparison between estimates obtained by the proposed method and observations from LBA towers showed errors between 12.5% and 16.4% for instantaneous Rn and between 11.3% and 15.9% for daily Rn. Our approach was adequate to minimize the problem related to strong cloudiness over the region and allowed us to consistently map the spatial distribution of net radiation components in Amazonia. We conclude that the integration of reanalysis products and satellite data, eliminating the need for surface measurements as model input, was a useful proposition for the spatialization of the radiation fluxes in the Amazon region, which may serve as input information needed by algorithms that aim to determine evapotranspiration, the most important component of the Amazon hydrological balance. PMID:27347957
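A common SEBAL-style formulation of instantaneous net radiation, sketched with hypothetical flux values (in the study the inputs come from the remote sensing and reanalysis products described above):

```python
def net_radiation(albedo, rs_down, rl_down, rl_up, emissivity):
    """SEBAL-style instantaneous net radiation (W/m^2):
    Rn = (1 - albedo)*RS_down + RL_down - RL_up - (1 - emissivity)*RL_down,
    where the last term is incoming longwave reflected by the surface."""
    return (1 - albedo) * rs_down + rl_down - rl_up - (1 - emissivity) * rl_down

# Hypothetical midday fluxes for a forested pixel:
rn = net_radiation(albedo=0.15, rs_down=800.0, rl_down=400.0,
                   rl_up=450.0, emissivity=0.96)
```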
Uptake and storage of anthropogenic CO2 in the pacific ocean estimated using two modeling approaches
NASA Astrophysics Data System (ADS)
Li, Yangchun; Xu, Yongfu
2012-07-01
A basin-wide ocean general circulation model (OGCM) of the Pacific Ocean is employed to estimate the uptake and storage of anthropogenic CO2 using two different simulation approaches. One simulation (named BIO) makes use of a carbon model with biological processes and full thermodynamic equations to calculate surface water partial pressure of CO2, whereas the other simulation (named PTB) makes use of a perturbation approach to calculate surface water partial pressure of anthropogenic CO2. The results from the two simulations agree well with the estimates based on observation data in most important aspects of the vertical distribution as well as the total inventory of anthropogenic carbon. The storage of anthropogenic carbon from BIO is closer to the observation-based estimate than that from PTB. The Revelle factor in 1994 obtained in BIO is generally larger than that obtained in PTB in the whole Pacific, except for the subtropical South Pacific. This, to a large extent, leads to the difference in the surface anthropogenic CO2 concentration between the two runs. The relative difference in the annual uptake between the two runs is almost constant during the integration processes after 1850. This is probably not caused by dissolved inorganic carbon (DIC), but rather by a factor independent of time. In both runs, the rate of change in anthropogenic CO2 fluxes with time is consistent with the rate of change in the growth rate of atmospheric partial pressure of CO2.
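The Revelle factor compared between the two runs is the fractional change in surface-water pCO2 per fractional change in DIC; a minimal sketch with illustrative perturbation values:

```python
def revelle_factor(pco2, dpco2, dic, ddic):
    """Revelle (buffer) factor: fractional change in surface-water pCO2
    per fractional change in dissolved inorganic carbon (DIC).
    A larger factor means the ocean takes up CO2 less readily."""
    return (dpco2 / pco2) / (ddic / dic)
```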
NASA Astrophysics Data System (ADS)
Moon, Byung-Young
2005-12-01
A hybrid neural-genetic multi-model parameter estimation algorithm is demonstrated and applied to structured system identification of an electro-hydraulic servo system. The algorithm consists of a recurrent incremental credit assignment (ICRA) neural network and a genetic algorithm: the ICRA neural network evaluates each member of a generation of models, and the genetic algorithm produces the next generation of models. To evaluate the proposed method, an electro-hydraulic servo system was designed and manufactured, and an experiment was carried out to assess the hybrid neural-genetic multi-model parameter estimation algorithm. As a result, the dynamic characteristics were obtained, i.e., the parameters (mass, damping coefficient, bulk modulus, spring coefficient) that minimize the total squared error. The results of this study can be applied to hydraulic systems in industrial fields.
Quark degrees of freedom in the production of soft pion jets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okorokov, V. A., E-mail: VAOkorokov@mephi.ru, E-mail: Okorokov@bnl.gov
2015-05-15
Experimental results obtained by studying the properties of soft jets in the 4-velocity space at √s ∼ 2 to 20 GeV are presented. The changes in the mean distance from the jet axis to the jet particles, the mean kinetic energy of these particles, and the cluster dimension in response to the growth of the collision energy are consistent with the assumption that quark degrees of freedom manifest themselves in processes of pion-jet production at intermediate energies. The energy at which quark degrees of freedom begin to manifest themselves experimentally in the production of soft pion jets is estimated for the first time. The estimated value of this energy is 2.8 ± 0.6 GeV.
The application of the statistical theory of extreme values to gust-load problems
NASA Technical Reports Server (NTRS)
Press, Harry
1950-01-01
An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problems of predicting the frequency of encountering the larger gust loads and gust velocities for both specific test conditions as well as commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses. (author)
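A simple way to apply extreme-value theory as described: fit a Gumbel (type I extreme-value) distribution to a sample of maxima by the method of moments, then evaluate exceedance probabilities for larger loads. This is a generic sketch, not the original fitting procedure of the report:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def fit_gumbel(maxima):
    """Method-of-moments fit of a Gumbel distribution to a sample of maxima:
    scale beta from the sample variance (var = pi^2 beta^2 / 6),
    location mu from the sample mean (mean = mu + gamma*beta)."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def exceedance(x, mu, beta):
    """Probability that a future maximum exceeds x under the fitted Gumbel."""
    return 1.0 - math.exp(-math.exp(-(x - mu) / beta))
```

Given, say, yearly maximum gust loads, `exceedance` estimates how often a specified larger load will be encountered, together with the fitted parameters' sampling variability for a reliability statement.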
Risk assessment for juvenile justice: a meta-analysis.
Schwalbe, Craig S
2007-10-01
Risk assessment instruments are increasingly employed by juvenile justice settings to estimate the likelihood of recidivism among delinquent juveniles. In concert with their increased use, validation studies documenting their predictive validity have increased in number. The purpose of this study was to assess the average predictive validity of juvenile justice risk assessment instruments and to identify risk assessment characteristics that are associated with higher predictive validity. A search of the published and grey literature yielded 28 studies that estimated the predictive validity of 28 risk assessment instruments. Findings of the meta-analysis were consistent with effect sizes obtained in larger meta-analyses of criminal justice risk assessment instruments and showed that brief risk assessment instruments had smaller effect sizes than other types of instruments. However, this finding is tentative owing to limitations of the literature.
Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud
2017-01-01
In this paper, we present a real-time approach for tracking deformable structures in 3D ultrasound sequences. Our method consists of obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation in the presence of various ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.
Sieradzka, Dominika; Power, Robert A; Freeman, Daniel; Cardno, Alastair G; Dudbridge, Frank; Ronald, Angelica
2015-09-01
Occurrence of psychotic experiences is common amongst adolescents in the general population. Twin studies suggest that a third to a half of variance in adolescent psychotic experiences is explained by genetic influences. Here we test the extent to which common genetic variants account for some of the twin-based heritability. Psychotic experiences were assessed with the Specific Psychotic Experiences Questionnaire in a community sample of 2152 16-year-olds. Self-reported measures of Paranoia, Hallucinations, Cognitive Disorganization, Grandiosity, Anhedonia, and Parent-rated Negative Symptoms were obtained. Estimates of SNP heritability were derived and compared to the twin heritability estimates from the same sample. Three approaches to genome-wide restricted maximum likelihood (GREML) analyses were compared: (1) standard GREML performed on full genome-wide data; (2) GREML stratified by minor allele frequency (MAF); and (3) GREML performed on pruned data. The standard GREML revealed a significant SNP heritability of 20 % for Anhedonia (SE = 0.12; p < 0.046) and an estimate of 19 % for Cognitive Disorganization, which was close to significant (SE = 0.13; p < 0.059). Grandiosity and Paranoia showed modest SNP heritability estimates (17 %; SE = 0.13 and 14 %; SE = 0.13, respectively, both n.s.), and zero estimates were found for Hallucinations and Negative Symptoms. The estimates for Anhedonia, Cognitive Disorganization and Grandiosity accounted for approximately half the previously reported twin heritability. SNP heritability estimates from the MAF-stratified approach were mostly consistent with the standard estimates and offered additional information about the distribution of heritability across the MAF range of the SNPs. In contrast, the estimates derived from the pruned data were for the most part not consistent with the other two approaches. 
It is likely that the difference seen in the pruned estimates was driven by the loss of tagged causal variants, an issue fundamental to this approach. The current results suggest that common genetic variants play a role in the etiology of some adolescent psychotic experiences, however further research on larger samples is desired and the use of MAF-stratified approach recommended.
How Much Can Remotely-Sensed Natural Resource Inventories Benefit from Finer Spatial Resolutions?
NASA Astrophysics Data System (ADS)
Hou, Z.; Xu, Q.; McRoberts, R. E.; Ståhl, G.; Greenberg, J. A.
2017-12-01
For remote sensing facilitated natural resource inventories, the effects of spatial resolution in the form of pixel size and the effects of subpixel information on estimates of population parameters were evaluated by comparing results obtained using Landsat 8 and RapidEye auxiliary imagery. The study area was in Burkina Faso, and the variable of interest was the stem volume (m3/ha), convertible to woodland aboveground biomass. A sample consisting of 160 field plots was selected and measured from the population following a two-stage sampling design. Models were fit using weighted least squares; the population mean, μ, and the variance of the estimator of the population mean, Var(μ̂), were estimated and compared in two inferential frameworks, model-based and model-assisted; for each framework, Var(μ̂) was estimated both analytically and empirically. Empirical variances were estimated with bootstrapping that takes clustering effects into account in resampling. The primary results were twofold. First, for the effects of spatial resolution and subpixel information, four conclusions are relevant: (1) finer spatial resolution imagery indeed contributes to greater precision for estimators of population parameters, but the increase is slight, at a maximum rate of 20%, considering that RapidEye data are 36 times finer in resolution than Landsat 8 data; (2) subpixel information on texture is marginally beneficial when it comes to making inference for populations of large areas; (3) cost-effectiveness is more favorable for the free-of-charge Landsat 8 imagery than for RapidEye imagery; and (4) for a given plot size, candidate remote sensing auxiliary datasets are more cost-effective when their spatial resolutions are similar to the plot size than are much finer alternatives. 
Second, for the comparison between estimators, three conclusions are relevant: (1) model-based variance estimates are consistent with each other and about half as large as stabilized model-assisted estimates, suggesting superior effectiveness of model-based inference to model-assisted inference; (2) bootstrapping is an effective alternative to analytical variance estimators; and (3) prediction accuracy expressed by RMSE is useful for screening candidate models to be used for population inferences.
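The empirical (bootstrap) variance estimator that resamples whole clusters, as used above, can be sketched as follows; the plot values and replication count are illustrative:

```python
import random

def bootstrap_variance(cluster_means, n_boot=2000, seed=1):
    """Empirical (bootstrap) variance of the estimator of the population mean,
    resampling whole clusters (plots) with replacement so that within-cluster
    correlation is preserved in each bootstrap replicate."""
    rng = random.Random(seed)
    n = len(cluster_means)
    reps = []
    for _ in range(n_boot):
        sample = [cluster_means[rng.randrange(n)] for _ in range(n)]
        reps.append(sum(sample) / n)
    mean_rep = sum(reps) / n_boot
    return sum((r - mean_rep) ** 2 for r in reps) / (n_boot - 1)
```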
Thermoelectric Properties for a Suspended Microribbon of Quasi-One-Dimensional TiS3
NASA Astrophysics Data System (ADS)
Sakuma, Tasuku; Nishino, Shunsuke; Miyata, Masanobu; Koyano, Mikio
2018-06-01
Transition-metal trichalcogenides MX3 (M = Ti, Zr, Nb, Ta; X = S, Se) are well-known inorganic quasi-one-dimensional conductors. Among them, we have investigated the thermoelectric properties of titanium trisulfide TiS3 microribbon. The electrical resistivity ρ, thermal conductivity κ, and thermoelectric power S were measured using the 3ω method. The weighted mean values were found to be ρ = 5 mΩ m and κ = 10 W K⁻¹ m⁻¹ along the one-dimensional direction (b-axis) of the TiS3 microribbon. Combined with the thermoelectric power S = -530 μV K⁻¹, the figure of merit was calculated as ZT = 0.0023. This efficiency is the same as that of randomly oriented bulk TiS3. We also estimated the anisotropy of σ and κ using the present results and those for randomly oriented bulk material. The obtained weak anisotropy for TiS3 is attributable to strong coupling between triangular columns consisting of TiS3 units. These experimental results are consistent with theoretical results obtained using density functional theory (DFT) calculations.
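The quoted figure of merit follows ZT = S²T/(ρκ); with the reported ρ, κ, and S and an assumed measurement temperature of 300 K (the abstract does not state T), the formula reproduces the same order of magnitude:

```python
def figure_of_merit(seebeck_v_per_k, resistivity_ohm_m, kappa_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit ZT = S^2 T / (rho * kappa)."""
    return seebeck_v_per_k ** 2 * temp_k / (resistivity_ohm_m * kappa_w_per_mk)

# Reported values: S = -530 uV/K, rho = 5 mOhm m, kappa = 10 W/(K m);
# T = 300 K is an assumption for illustration.
zt = figure_of_merit(-530e-6, 5e-3, 10.0, 300.0)
```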
Gallium diffusion in zinc oxide via the paired dopant-vacancy mechanism
NASA Astrophysics Data System (ADS)
Sky, T. N.; Johansen, K. M.; Riise, H. N.; Svensson, B. G.; Vines, L.
2018-02-01
Isochronal and isothermal diffusion experiments of gallium (Ga) in zinc oxide (ZnO) have been performed in the temperature range of 900-1050 °C. The samples used consisted of a sputter-deposited and highly Ga-doped ZnO film at the surface of a single-crystal bulk material. We use a novel reaction diffusion (RD) approach to demonstrate that the diffusion behavior of Ga in ZnO is consistent with zinc vacancy (VZn) mediation via the formation and dissociation of GaZnVZn complexes. In the RD modeling, experimental diffusion data are fitted utilizing recent density-functional-theory estimates of the VZn formation energy and the binding energy of GaZnVZn. From the RD modeling, a migration energy of 2.3 eV is deduced for GaZnVZn, and a total/effective activation energy of 3.0 eV is obtained for the Ga diffusion. Furthermore, and for comparison, employing the so-called Fair model, a total/effective activation energy of 2.7 eV is obtained for the Ga diffusion, reasonably close to the total value extracted from the RD-modeling.
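Extracting a total activation energy from temperature-dependent diffusion data amounts to an Arrhenius fit; a sketch using synthetic diffusivities over the paper's 900-1050 °C range (the prefactor and data values are hypothetical):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def activation_energy(temps_k, diffusivities):
    """Activation energy (eV) from an Arrhenius fit D = D0 * exp(-Ea / kB T):
    the least-squares slope of ln(D) versus 1/(kB T) equals -Ea."""
    x = [1.0 / (K_B * t) for t in temps_k]
    y = [math.log(d) for d in diffusivities]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return -slope
```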
NASA Astrophysics Data System (ADS)
Arab, M.; Khodam-Mohammadi, A.
2018-03-01
As a deformed matter bounce scenario with a dark energy component, we propose one with a running vacuum model (RVM) in which the dark energy density ρ_Λ is written as a power series of H² and Ḣ, with a constant equation-of-state parameter, the same as the cosmological constant, w = -1. Our analytical and numerical results show that in some cases, as in the ΛCDM bounce scenario, although the spectral index may achieve good consistency with observations, a positive value of the running of the spectral index (α_s) is obtained, which is not compatible with the inflationary paradigm, where a small negative value of α_s is predicted. However, by extending the power series up to H⁴, ρ_Λ = n₀ + n₂H² + n₄H⁴, and estimating a set of consistent parameters, we obtain the spectral index n_s, a small negative running α_s, and a tensor-to-scalar ratio r, which reveal a degeneracy between the deformed matter bounce scenario with RVM-DE and inflationary cosmology.
Intelligent vehicle safety control strategy in various driving situations
NASA Astrophysics Data System (ADS)
Moon, Seungwuk; Cho, Wanki; Yi, Kyongsu
2010-12-01
This paper describes a safety control strategy for intelligent vehicles with the objective of optimally coordinating the throttle, brake, and active front steering actuator inputs to obtain both lateral stability and longitudinal safety. The control system consists of a supervisor, control algorithms, and a coordinator. From the measurement and estimation signals, the supervisor determines the active control modes among normal driving, longitudinal safety, lateral stability, and integrated safety control mode. The control algorithms consist of longitudinal and lateral stability controllers. The longitudinal controller is designed to improve the driver's comfort during normal, safe-driving situations, and to avoid rear-end collision in vehicle-following situations. The lateral stability controller is designed to obtain the required manoeuvrability and to limit the vehicle body's side-slip angle. To obtain both longitudinal safety and lateral stability control in various driving situations, the coordinator optimally determines the throttle, brake, and active front steering inputs based on the current status of the subject vehicle. Closed-loop simulations with the driver-vehicle-controller system are conducted to investigate the performance of the proposed control strategy. From these simulation results, it is shown that the proposed control algorithm assists the driver in combined severe braking/large steering manoeuvring so that the driver can maintain good manoeuvrability and prevent the vehicle from crashing in vehicle-following situations.
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
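The bias-corrected, transformed-linear approach can be illustrated on synthetic data: fit log C against log Q by ordinary least squares, then correct the retransformation bias. The sketch below uses Duan's smearing estimator as one common bias correction; the study's specific correction is not stated here, and all data are simulated:

```python
import math
import random
import statistics

# Synthetic data from a known log-linear rating curve C = a * Q^b with
# lognormal scatter (all values here are made up for illustration).
random.seed(1)
a_true, b_true, sigma = 0.5, 1.3, 0.4
flows = [random.lognormvariate(3.0, 1.0) for _ in range(500)]
concs = [a_true * q ** b_true * math.exp(random.gauss(0.0, sigma)) for q in flows]

# Ordinary least squares fit of log C on log Q (the transformed-linear model).
x = [math.log(q) for q in flows]
y = [math.log(c) for c in concs]
xbar, ybar = statistics.mean(x), statistics.mean(y)
b_hat = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
log_a_hat = ybar - b_hat * xbar

# Duan's smearing estimator corrects the bias introduced when fitted
# log-values are transformed back to concentration units.
residuals = [yi - (log_a_hat + b_hat * xi) for xi, yi in zip(x, y)]
smear = statistics.mean([math.exp(r) for r in residuals])

def predict_conc(q):
    """Bias-corrected rating-curve prediction of concentration at flow q."""
    return math.exp(log_a_hat + b_hat * math.log(q)) * smear

print(f"fitted exponent b = {b_hat:.2f} (true 1.3), smearing factor = {smear:.3f}")
```

A mean load estimate then follows by integrating predicted concentration times flow over the flow-duration curve.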
Genetic parameter estimation for long endurance trials in the Uruguayan Criollo horse.
López-Correa, R D; Peñagaricano, F; Rovere, G; Urioste, J I
2018-06-01
The aim of this study was to estimate the genetic parameters of performance in a 750-km, 15-day ride in Criollo horses. Heritability (h2) and maternal lineage effects (mt2) were obtained for rank, a relative placing measure of performance. Additive genetic and maternal lineage (rmt) correlations among five medium-to-high intensity phase ranks (pRK) and final rank (RK) were also estimated. Individual records from 1,236 Criollo horses from 1979 to 2012 were used. A multivariate threshold animal model was applied to the pRK and RK. Heritability was moderate to low (0.156-0.275). Estimates of mt2 were consistently low (0.04-0.06). Additive genetic correlations between individual pRK and RK were high (0.801-0.924), and the genetic correlations between individual pRKs ranged from 0.763 to 0.847. The pRK heritabilities revealed that some phases were explained by a greater additive component, whereas others showed stronger genetic relationships with RK. Thus, not all pRK may be considered as similar measures of performance in competition. © 2018 Blackwell Verlag GmbH.
A new mode of fear expression: perceptual bias in height fear.
Teachman, Bethany A; Stefanucci, Jeanine K; Clerkin, Elise M; Cody, Meghan W; Proffitt, Dennis R
2008-04-01
Emotion and psychopathology researchers have described the fear response as consisting of four main components--subjective affect, physiology, cognition, and behavior. The current study provides evidence for an additional component in the domain of height fear (perception) and shows that it is distinct from measures of cognitive processing. Individuals High (N = 35) and Low (N = 36) in acrophobic symptoms looked over a two-story balcony ledge and estimated its vertical extent using a direct height estimation task (visual matching), and an indirect task (size estimation); the latter task seems to exhibit little influence from cognitive factors. In addition, implicit and explicit measures of cognitive processing were obtained. Results indicated that, as expected, the High Fear group showed greater relative, implicit height fear associations and explicit threat cognitions. Of primary interest, the High (compared to Low) Fear group estimated the vertical extent to be higher, and judged target sizes to be greater, even when controlling for the cognitive bias measures. These results suggest that emotional factors such as fear are related to perception. © 2008 APA.
Estimate of precession and polar motion errors from planetary encounter station location solutions
NASA Technical Reports Server (NTRS)
Pease, G. E.
1978-01-01
Jet Propulsion Laboratory Deep Space Station (DSS) location solutions based on two JPL planetary ephemerides, DE 84 and DE 96, at eight planetary encounters were used to obtain weighted least squares estimates of precession and polar motion errors. The solution for precession error in right ascension yields a value of 0.3 × 10^-5 ± 0.8 × 10^-6 deg/year. This maps to a right ascension error of 1.3 × 10^-5 ± 0.4 × 10^-5 deg at the first Voyager 1979 Jupiter encounter if the current JPL DSS location set is used. Solutions for precession and polar motion using station locations based on DE 84 agree well with the solution using station locations referenced to DE 96. The precession solution removes the apparent drift in station longitude and spin axis distance estimates, while the encounter polar motion solutions consistently decrease the scatter in station spin axis distance estimates.
Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.
Han, Hu; Jain, Anil K; Shan, Shiguang; Chen, Xilin
2017-08-10
Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of the public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to the state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.
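The shared-trunk, category-specific-head structure can be illustrated schematically. The sketch below is a toy stand-in (hand-written features and fixed placeholder weights, not learned CNNs); it only shows how one shared representation feeds an ordinal head and a nominal head:

```python
import math

# Toy stand-in for the shared-trunk / per-category-head structure.  The
# "trunk" and both "heads" use fixed, hypothetical weights; in the paper
# all parts are CNNs learned jointly.

def shared_features(pixels):
    """Stand-in for the shared feature-learning trunk: a crude 2-number summary."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return [mean, var]

def ordinal_head(feat):
    """Head for an ordinal attribute (e.g. age): a scalar regression."""
    w, b = [0.5, 0.1], 10.0  # hypothetical weights
    return sum(wi * fi for wi, fi in zip(w, feat)) + b

def nominal_head(feat):
    """Head for a nominal attribute (e.g. gender): a 2-way softmax."""
    w = [[0.3, -0.2], [-0.3, 0.2]]  # hypothetical weights
    logits = [sum(wi * fi for wi, fi in zip(row, feat)) for row in w]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

face = [0.2, 0.5, 0.7, 0.4]    # fake "image"
feat = shared_features(face)   # computed once, reused by every head
print(ordinal_head(feat), nominal_head(feat))
```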
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1990-01-01
The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation, for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.
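As an example of the kind of passive range estimation such a data base supports, one textbook motion-parallax relation gives range directly from the rotorcraft's speed and a feature's angular drift rate; this is an illustrative formula, not an algorithm from the work itself:

```python
import math

def range_from_parallax(speed_m_s, bearing_rad, angular_rate_rad_s):
    """Range to a static feature seen from a translating camera.

    For translation at speed V, a static feature at bearing theta off the
    velocity vector sweeping at angular rate omega lies at range
    R = V * sin(theta) / omega.
    """
    return speed_m_s * math.sin(bearing_rad) / angular_rate_rad_s

# A rotorcraft at 20 m/s sees an obstacle 30 degrees off its flightpath
# drifting across the image at 0.05 rad/s:
print(f"range ≈ {range_from_parallax(20.0, math.radians(30), 0.05):.0f} m")
```

Validating such estimates against the data base's ground-truth range measurements is exactly what the benchmark is intended for.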
Taking error into account when fitting models using Approximate Bayesian Computation.
van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M
2018-03-01
Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
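The basic ABC idea the method builds on can be illustrated with plain rejection sampling on repeated measures with normal error. This sketch is not the authors' probabilistic, error-calibrated algorithm; it only shows the accept/reject loop they refine:

```python
import random
import statistics

random.seed(0)

# Observed data: repeated measures of one quantity with independent normal
# measurement error -- the setting the paper addresses.  All values invented.
theta_true, noise_sd = 3.0, 0.5
observed = [theta_true + random.gauss(0.0, noise_sd) for _ in range(20)]

def simulate(theta, n=20, sd=noise_sd):
    return [theta + random.gauss(0.0, sd) for _ in range(n)]

# Plain rejection ABC: draw from a flat prior, simulate, accept when the
# summary statistic (sample mean) is close enough to the observed one.
accepted = []
for _ in range(20000):
    theta = random.uniform(0.0, 6.0)
    if abs(statistics.mean(simulate(theta)) - statistics.mean(observed)) < 0.05:
        accepted.append(theta)

print(f"posterior mean ≈ {statistics.mean(accepted):.2f} from {len(accepted)} accepted draws")
```

The paper's contribution is to replace the hard tolerance with acceptance probabilities derived from the assumed error model, and to adjust the coverage test accordingly.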
Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities
NASA Astrophysics Data System (ADS)
Eyuboglu, B. Murat; Pilkington, Theo C.
1993-08-01
In order to determine in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high resolution images, (2) a priori knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two-dimensional boundary element model of the human thorax, which consists of four different conductivity regions (the skeletal muscle, the heart, the right lung, and the left lung), was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a priori knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to the instrumentation noise.
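In the linear scalar case, a statistically constrained MMSE estimator reduces to the standard form x_hat = x_mean + k*(y - h*x_mean) with gain k = x_var*h / (h^2*x_var + n_var). The numbers below are hypothetical, chosen only to show how the prior resistivity range and noise variance shape the estimate:

```python
def mimsee_scalar(y, h, x_mean, x_var, n_var):
    """Statistically constrained MMSE estimate of x from scalar y = h*x + n,
    given the prior mean/variance of x and the noise variance."""
    gain = x_var * h / (h * h * x_var + n_var)
    return x_mean + gain * (y - h * x_mean)

# Hypothetical numbers: prior resistivity 500 +/- 100 ohm*cm (the
# "physiological range" constraint), forward gain h = 0.01, and a small
# instrumentation-noise variance.
x_hat = mimsee_scalar(y=5.2, h=0.01, x_mean=500.0, x_var=100.0 ** 2, n_var=1e-4)
print(f"estimated resistivity ≈ {x_hat:.0f} ohm*cm")
```

As the noise variance grows, the gain shrinks and the estimate falls back toward the prior mean, which is how the statistical constraint stabilizes the inversion.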
Modeling misidentification errors that result from use of genetic tags in capture-recapture studies
Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.
2011-01-01
Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.
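The overestimation mechanism is easy to reproduce: a misread genetic tag creates a "ghost" individual that can never be matched, so observed recaptures fall and the conventional estimator inflates. A minimal two-occasion simulation with hypothetical parameter values:

```python
import random

random.seed(42)

def lincoln_petersen(n1, n2, m2):
    """Conventional two-sample closed-population estimator: N_hat = n1*n2/m2."""
    return n1 * n2 / m2

def simulate(n_animals=500, p=0.4, misid_rate=0.0):
    """Two capture occasions; each sample is misread with probability
    misid_rate, creating a ghost identity that can never be matched."""
    ghost = [n_animals]  # ghost ids start beyond the real ones
    def observed_id(true_id):
        if random.random() < misid_rate:
            ghost[0] += 1          # a new, spurious individual
            return ghost[0]
        return true_id
    ids1, ids2 = set(), set()
    for animal in range(n_animals):
        if random.random() < p:
            ids1.add(observed_id(animal))
        if random.random() < p:
            ids2.add(observed_id(animal))
    return lincoln_petersen(len(ids1), len(ids2), len(ids1 & ids2))

def mean_estimate(misid_rate, reps=30):
    return sum(simulate(misid_rate=misid_rate) for _ in range(reps)) / reps

print(f"no misidentification : N_hat ≈ {mean_estimate(0.0):.0f}")   # near the true 500
print(f"15% misidentification: N_hat ≈ {mean_estimate(0.15):.0f}")  # inflated
```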
Aad, G; Abajyan, T; Abbott, B; Abdallah, J; Abdel Khalek, S; Abdelalim, A A; Abdinov, O; Aben, R; Abi, B; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Acharya, B S; Adamczyk, L; Adams, D L; Addy, T N; Adelman, J; Adomeit, S; Adragna, P; Adye, T; Aefsky, S; Aguilar-Saavedra, J A; Agustoni, M; Aharrouche, M; Ahlen, S P; Ahles, F; Ahmad, A; Ahsan, M; Aielli, G; Akdogan, T; Åkesson, T P A; Akimoto, G; Akimov, A V; Alam, M S; Alam, M A; Albert, J; Albrand, S; Aleksa, M; Aleksandrov, I N; Alessandria, F; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Aliev, M; Alimonti, G; Alison, J; Allbrooke, B M M; Allport, P P; Allwood-Spiers, S E; Almond, J; Aloisio, A; Alon, R; Alonso, A; Alonso, F; Altheimer, A; Alvarez Gonzalez, B; Alviggi, M G; Amako, K; Amelung, C; Ammosov, V V; Amor Dos Santos, S P; Amorim, A; Amram, N; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Andrieux, M-L; Anduaga, X S; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aoun, S; Aperio Bella, L; Apolle, R; Arabidze, G; Aracena, I; Arai, Y; Arce, A T H; Arfaoui, S; Arguin, J-F; Arik, E; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Arnault, C; Artamonov, A; Artoni, G; Arutinov, D; Asai, S; Ask, S; Åsman, B; Asquith, L; Assamagan, K; Astbury, A; Atkinson, M; Aubert, B; Auge, E; Augsten, K; Aurousseau, M; Avolio, G; Avramidou, R; Axen, D; Azuelos, G; Azuma, Y; Baak, M A; Baccaglioni, G; Bacci, C; Bach, A M; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Backus Mayes, J; Badescu, E; Bagnaia, P; Bahinipati, S; Bai, Y; Bailey, D C; Bain, T; Baines, J T; Baker, O K; Baker, M D; Baker, S; Balek, P; Banas, E; Banerjee, P; Banerjee, Sw; Banfi, D; Bangert, A; Bansal, V; Bansil, H S; Barak, L; Baranov, S P; Barbaro Galtieri, A; Barber, T; Barberio, E L; Barberis, D; Barbero, M; Bardin, D Y; Barillari, T; Barisonzi, M; Barklow, T; 
Barlow, N; Barnett, B M; Barnett, R M; Baroncelli, A; Barone, G; Barr, A J; Barreiro, F; Barreiro Guimarães da Costa, J; Barrillon, P; Bartoldus, R; Barton, A E; Bartsch, V; Basye, A; Bates, R L; Batkova, L; Batley, J R; Battaglia, A; Battistin, M; Bauer, F; Bawa, H S; Beale, S; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, A K; Becker, S; Beckingham, M; Becks, K H; Beddall, A J; Beddall, A; Bedikian, S; Bednyakov, V A; Bee, C P; Beemster, L J; Begel, M; Behar Harpaz, S; Behera, P K; Beimforde, M; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellomo, M; Belloni, A; Beloborodova, O; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Benoit, M; Bensinger, J R; Benslama, K; Bentvelsen, S; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Berglund, E; Beringer, J; Bernat, P; Bernhard, R; Bernius, C; Berry, T; Bertella, C; Bertin, A; Bertolucci, F; Besana, M I; Besjes, G J; Besson, N; Bethke, S; Bhimji, W; Bianchi, R M; Bianco, M; Biebel, O; Bieniek, S P; Bierwagen, K; Biesiada, J; Biglietti, M; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Biscarat, C; Bittner, B; Black, K M; Blair, R E; Blanchard, J-B; Blanchot, G; Blazek, T; Bloch, I; Blocker, C; Blocki, J; Blondel, A; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V B; Bocchetta, S S; Bocci, A; Boddy, C R; Boehler, M; Boek, J; Boelaert, N; Bogaerts, J A; Bogdanchikov, A; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Bolnet, N M; Bomben, M; Bona, M; Boonekamp, M; Bordoni, S; Borer, C; Borisov, A; Borissov, G; Borjanovic, I; Borri, M; Borroni, S; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Bouchami, J; Boudreau, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boveia, A; Boyd, J; Boyko, I R; Bozovic-Jelisavcic, I; Bracinik, J; Branchini, P; Brandenburg, G W; Brandt, A; Brandt, G; Brandt, O; 
Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brazzale, S F; Brelier, B; Bremer, J; Brendlinger, K; Brenner, R; Bressler, S; Britton, D; Brochu, F M; Brock, I; Brock, R; Broggi, F; Bromberg, C; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brown, G; Brown, H; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Buanes, T; Buat, Q; Bucci, F; Buchanan, J; Buchholz, P; Buckingham, R M; Buckley, A G; Buda, S I; Budagov, I A; Budick, B; Büscher, V; Bugge, L; Bulekov, O; Bundock, A C; Bunse, M; Buran, T; Burckhart, H; Burdin, S; Burgess, T; Burke, S; Busato, E; Bussey, P; Buszello, C P; Butler, B; Butler, J M; Buttar, C M; Butterworth, J M; Buttinger, W; Byszewski, M; Cabrera Urbán, S; Caforio, D; Cakir, O; Calafiura, P; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Caloi, R; Calvet, D; Calvet, S; Camacho Toro, R; Camarri, P; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Canale, V; Canelli, F; Canepa, A; Cantero, J; Cantrill, R; Capasso, L; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capriotti, D; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, B; Caron, S; Carquin, E; Carrillo Montoya, G D; Carter, A A; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Cascella, M; Caso, C; Castaneda Hernandez, A M; Castaneda-Miranda, E; Castillo Gimenez, V; Castro, N F; Cataldi, G; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caughron, S; Cavaliere, V; Cavalleri, P; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chan, K; Chang, P; Chapleau, B; Chapman, J D; Chapman, J W; Chareyre, E; Charlton, D G; Chavda, V; Chavez Barajas, C A; Cheatham, S; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, S; Chen, X; Chen, Y; Cheng, Y; Cheplakov, A; Cherkaoui El Moursli, R; Chernyatin, V; Cheu, E; Cheung, S L; 
Chevalier, L; Chiefari, G; Chikovani, L; Childers, J T; Chilingarov, A; Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Choudalakis, G; Chouridou, S; Christidi, I A; Christov, A; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Ciapetti, G; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciocca, C; Ciocio, A; Cirilli, M; Cirkovic, P; Citron, Z H; Citterio, M; Ciubancan, M; Clark, A; Clark, P J; Clarke, R N; Cleland, W; Clemens, J C; Clement, B; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Coggeshall, J; Cogneras, E; Colas, J; Cole, S; Colijn, A P; Collins, N J; Collins-Tooth, C; Collot, J; Colombo, T; Colon, G; Compostella, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Courneyea, L; Cowan, G; Cowden, C; Cox, B E; Cranmer, K; Crescioli, F; Cristinziani, M; Crosetti, G; Crépé-Renaudin, S; Cuciuc, C-M; Cuenca Almenar, C; Cuhadar Donszelmann, T; Curatolo, M; Curtis, C J; Cuthbert, C; Cwetanski, P; Czirr, H; Czodrowski, P; Czyczula, Z; D'Auria, S; D'Onofrio, M; D'Orazio, A; Da Cunha Sargedas De Sousa, M J; Da Via, C; Dabrowski, W; Dafinca, A; Dai, T; Dallapiccola, C; Dam, M; Dameri, M; Damiani, D S; Danielsson, H O; Dao, V; Darbo, G; Darlea, G L; Dassoulas, J A; Davey, W; Davidek, T; Davidson, N; Davidson, R; Davies, E; Davies, M; Davignon, O; Davison, A R; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; de Graat, J; De Groot, N; de Jong, P; De La Taille, C; De la Torre, H; De Lorenzi, F; de Mora, L; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; De Zorzi, G; Dearnaley, W J; Debbe, R; Debenedetti, C; Dechenaux, B; Dedovich, D V; Degenhardt, J; Del Papa, C; Del Peso, J; Del 
Prete, T; Delemontex, T; Deliyergiyev, M; Dell'Acqua, A; Dell'Asta, L; Della Pietra, M; della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demirkoz, B; Deng, J; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Devetak, E; Deviveiros, P O; Dewhurst, A; DeWilde, B; Dhaliwal, S; Dhullipudi, R; Di Ciaccio, A; Di Ciaccio, L; Di Girolamo, A; Di Girolamo, B; Di Luise, S; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Diaz, M A; Diehl, E B; Dietrich, J; Dietzsch, T A; Diglio, S; Dindar Yagci, K; Dingfelder, J; Dinut, F; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; do Vale, M A B; Do Valle Wemans, A; Doan, T K O; Dobbs, M; Dobinson, R; Dobos, D; Dobson, E; Dodd, J; Doglioni, C; Doherty, T; Doi, Y; Dolejsi, J; Dolenc, I; Dolezal, Z; Dolgoshein, B A; Dohmae, T; Donadelli, M; Donini, J; Dopke, J; Doria, A; Dos Anjos, A; Dotti, A; Dova, M T; Doxiadis, A D; Doyle, A T; Dressnandt, N; Dris, M; Dubbert, J; Dube, S; Duchovni, E; Duckeck, G; Duda, D; Dudarev, A; Dudziak, F; Dührssen, M; Duerdoth, I P; Duflot, L; Dufour, M-A; Duguid, L; Dunford, M; Duran Yildiz, H; Duxfield, R; Dwuznik, M; Dydak, F; Düren, M; Ebenstein, W L; Ebke, J; Eckweiler, S; Edmonds, K; Edson, W; Edwards, C A; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Eisenhandler, E; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, K; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Engelmann, R; Engl, A; Epp, B; Erdmann, J; Ereditato, A; Eriksson, D; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Espinal Curull, X; Esposito, B; Etienne, F; Etienvre, A I; Etzion, E; Evangelakou, D; Evans, H; Fabbri, L; Fabre, C; Fakhrutdinov, R M; Falciano, S; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farley, J; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Fatholahzadeh, B; Favareto, A; Fayard, L; Fazio, S; 
Febbraro, R; Federic, P; Fedin, O L; Fedorko, W; Fehling-Kaschek, M; Feligioni, L; Fellmann, D; Feng, C; Feng, E J; Fenyuk, A B; Ferencei, J; Fernando, W; Ferrag, S; Ferrando, J; Ferrara, V; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filthaut, F; Fincke-Keeler, M; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, G; Fisher, M J; Flechl, M; Fleck, I; Fleckner, J; Fleischmann, P; Fleischmann, S; Flick, T; Floderus, A; Flores Castillo, L R; Flowerdew, M J; Fonseca Martin, T; Formica, A; Forti, A; Fortin, D; Fournier, D; Fowler, A J; Fox, H; Francavilla, P; Franchini, M; Franchino, S; Francis, D; Frank, T; Franklin, M; Franz, S; Fraternali, M; Fratina, S; French, S T; Friedrich, C; Friedrich, F; Froeschl, R; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gadfort, T; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallo, V; Gallop, B J; Gallus, P; Gan, K K; Gao, Y S; Gaponenko, A; Garberson, F; Garcia-Sciveres, M; García, C; García Navarro, J E; Gardner, R W; Garelli, N; Garitaonandia, H; Garonne, V; Gatti, C; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Gellerstedt, K; Gemme, C; Gemmell, A; Genest, M H; Gentile, S; George, M; George, S; Gerlach, P; Gershon, A; Geweniger, C; Ghazlane, H; Ghodbane, N; Giacobbe, B; Giagu, S; Giakoumopoulou, V; Giangiobbe, V; Gianotti, F; Gibbard, B; Gibson, A; Gibson, S M; Gillberg, D; Gillman, A R; Gingrich, D M; Ginzburg, J; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giovannini, P; Giraud, P F; Giugni, D; Giunta, M; Giusti, P; Gjelsten, B K; Gladilin, L K; Glasman, C; Glatzer, J; Glazov, A; Glitza, K W; Glonti, G L; Goddard, J R; Godfrey, J; Godlewski, J; Goebel, M; Göpfert, T; Goeringer, C; Gössling, C; Goldfarb, S; 
Golling, T; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Goncalves Pinto Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez Silva, M L; Gonzalez-Sevilla, S; Goodson, J J; Goossens, L; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorfine, G; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Gosdzik, B; Goshaw, A T; Gosselink, M; Gostkin, M I; Gough Eschrich, I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Gozpinar, S; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramstad, E; Grancagnolo, F; Grancagnolo, S; Grassi, V; Gratchev, V; Grau, N; Gray, H M; Gray, J A; Graziani, E; Grebenyuk, O G; Greenshaw, T; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grigalashvili, N; Grillo, A A; Grinstein, S; Gris, Ph; Grishkevich, Y V; Grivaz, J-F; Gross, E; Grosse-Knetter, J; Groth-Jensen, J; Grybel, K; Guest, D; Guicheney, C; Guindon, S; Gul, U; Gunther, J; Guo, B; Guo, J; Gutierrez, P; Guttman, N; Gutzwiller, O; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haas, S; Haber, C; Hadavand, H K; Hadley, D R; Haefner, P; Hahn, F; Haider, S; Hajduk, Z; Hakobyan, H; Hall, D; Hamacher, K; Hamal, P; Hamer, M; Hamilton, A; Hamilton, S; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Handel, C; Hanke, P; Hansen, J R; Hansen, J B; Hansen, J D; Hansen, P H; Hansson, P; Hara, K; Hare, G A; Harenberg, T; Harkusha, S; Harper, D; Harrington, R D; Harris, O M; Hartert, J; Hartjes, F; Haruyama, T; Harvey, A; Hasegawa, S; Hasegawa, Y; Hassani, S; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayakawa, T; Hayashi, T; Hayden, D; Hays, C P; Hayward, H S; Haywood, S J; Head, S J; Hedberg, V; Heelan, L; Heim, S; Heinemann, B; Heisterkamp, S; Helary, L; Heller, C; Heller, M; Hellman, S; Hellmich, D; Helsens, C; Henderson, R C W; Henke, M; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Hensel, C; Henß, T; Hernandez, C M; Hernández Jiménez, Y; Herrberg, R; Herten, G; Hertenberger, R; Hervas, L; 
Hesketh, G G; Hessey, N P; Higón-Rodriguez, E; Hill, J C; Hiller, K H; Hillert, S; Hillier, S J; Hinchliffe, I; Hines, E; Hirose, M; Hirsch, F; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoffman, J; Hoffmann, D; Hohlfeld, M; Holder, M; Holmgren, S O; Holy, T; Holzbauer, J L; Hong, T M; Hooft van Huysduynen, L; Horner, S; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hristova, I; Hrivnac, J; Hryn'ova, T; Hsu, P J; Hsu, S-C; Hu, D; Hubacek, Z; Hubaut, F; Huegging, F; Huettmann, A; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hurwitz, M; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibbotson, M; Ibragimov, I; Iconomidou-Fayard, L; Idarraga, J; Iengo, P; Igonkina, O; Ikegami, Y; Ikeno, M; Iliadis, D; Ilic, N; Ince, T; Inigo-Golfin, J; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Ivashin, A V; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, J N; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansen, H; Jantsch, A; Janus, M; Jarlskog, G; Jeanty, L; Jen-La Plante, I; Jennens, D; Jenni, P; Loevschall-Jensen, A E; Jež, P; Jézéquel, S; Jha, M K; Ji, H; Ji, W; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, S; Jinnouchi, O; Joergensen, M D; Joffe, D; Johansen, M; Johansson, K E; Johansson, P; Johnert, S; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Joram, C; Jorge, P M; Joshi, K D; Jovicevic, J; Jovin, T; Ju, X; Jung, C A; Jungst, R M; Juranek, V; Jussel, P; Juste Rozas, A; Kabana, S; Kaci, M; Kaczmarska, A; Kadlecik, P; Kado, M; Kagan, H; Kagan, M; Kajomovitz, E; Kalinin, S; Kalinovskaya, L V; Kama, S; Kanaya, N; Kaneda, M; Kaneti, S; Kanno, T; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kaplon, J; Kar, D; Karagounis, M; Karakostas, K; Karnevskiy, M; Kartvelishvili, V; Karyukhin, A N; 
Kashif, L; Kasieczka, G; Kass, R D; Kastanas, A; Kataoka, M; Kataoka, Y; Katsoufis, E; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kayl, M S; Kazama, S; Kazanin, V A; Kazarinov, M Y; Keeler, R; Keener, P T; Kehoe, R; Keil, M; Kekelidze, G D; Keller, J S; Kenyon, M; Kepka, O; Kerschen, N; Kerševan, B P; Kersten, S; Kessoku, K; Keung, J; Khalil-Zada, F; Khandanyan, H; Khanov, A; Kharchenko, D; Khodinov, A; Khomich, A; Khoo, T J; Khoriauli, G; Khoroshilov, A; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H; Kim, S H; Kimura, N; Kind, O; King, B T; King, M; King, R S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kitamura, T; Kittelmann, T; Kiuchi, K; Kladiva, E; Klein, M; Klein, U; Kleinknecht, K; Klemetti, M; Klier, A; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klinkby, E B; Klioutchnikova, T; Klok, P F; Klous, S; Kluge, E-E; Kluge, T; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Ko, B R; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Köneke, K; König, A C; Koenig, S; Köpke, L; Koetsveld, F; Koevesarki, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohn, F; Kohout, Z; Kohriki, T; Koi, T; Kolachev, G M; Kolanoski, H; Kolesnikov, V; Koletsou, I; Koll, J; Komar, A A; Komori, Y; Kondo, T; Kono, T; Kononov, A I; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Korcyl, K; Kordas, K; Korn, A; Korol, A; Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kortner, S; Kostyukhin, V V; Kotov, S; Kotov, V M; Kotwal, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kreiss, S; Krejci, F; Kretzschmar, J; Krieger, N; Krieger, P; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Kruker, T; Krumnack, N; Krumshteyn, Z V; Kubota, T; Kuday, S; Kuehn, S; Kugel, A; Kuhl, T; Kuhn, D; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kummer, C; Kuna, M; Kunkle, 
J; Kupco, A; Kurashige, H; Kurata, M; Kurochkin, Y A; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; Kwee, R; La Rosa, A; La Rotonda, L; Labarga, L; Labbe, J; Lablak, S; Lacasta, C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Laisne, E; Lamanna, M; Lambourne, L; Lampen, C L; Lampl, W; Lancon, E; Landgraf, U; Landon, M P J; Lang, V S; Lange, C; Lankford, A J; Lanni, F; Lantzsch, K; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Larner, A; Lassnig, M; Laurelli, P; Lavorini, V; Lavrijsen, W; Laycock, P; Le Dortz, O; Le Guirriec, E; Le Menedeu, E; LeCompte, T; Ledroit-Guillon, F; Lee, H; Lee, J S H; Lee, S C; Lee, L; Lefebvre, M; Legendre, M; Legger, F; Leggett, C; Lehmacher, M; Lehmann Miotto, G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Lendermann, V; Leney, K J C; Lenz, T; Lenzen, G; Lenzi, B; Leonhardt, K; Leontsinis, S; Lepold, F; Leroy, C; Lessard, J-R; Lester, C G; Lester, C M; Levêque, J; Levin, D; Levinson, L J; Lewis, A; Lewis, G H; Leyko, A M; Leyton, M; Li, B; Li, H; Li, H L; Li, S; Li, X; Liang, Z; Liao, H; Liberti, B; Lichard, P; Lichtnecker, M; Lie, K; Liebig, W; Limbach, C; Limosani, A; Limper, M; Lin, S C; Linde, F; Linnemann, J T; Lipeles, E; Lipniacka, A; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, C; Liu, D; Liu, H; Liu, J B; Liu, L; Liu, M; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Llorente Merino, J; Lloyd, S L; Lobodzinska, E; Loch, P; Lockman, W S; Loddenkoetter, T; Loebinger, F K; Loginov, A; Loh, C W; Lohse, T; Lohwasser, K; Lokajicek, M; Lombardo, V P; Long, R E; Lopes, L; Lopez Mateos, D; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Lo Sterzo, F; Losty, M J; Lou, X; Lounis, A; Loureiro, K F; Love, J; Love, P A; Lowe, A J; Lu, F; Lubatti, H J; Luci, C; Lucotte, A; Ludwig, A; Ludwig, D; Ludwig, I; Ludwig, J; Luehring, F; Luijckx, G; Lukas, W; Luminari, L; Lund, E; Lund-Jensen, B; Lundberg, B; Lundberg, J; Lundberg, O; Lundquist, J; Lungwitz, M; 
Lynn, D; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Maček, B; Machado Miguens, J; Mackeprang, R; Madaras, R J; Maddocks, H J; Mader, W F; Maenner, R; Maeno, T; Mättig, P; Mättig, S; Magnoni, L; Magradze, E; Mahboubi, K; Mahlstedt, J; Mahmoud, S; Mahout, G; Maiani, C; Maidantchik, C; Maio, A; Majewski, S; Makida, Y; Makovec, N; Mal, P; Malaescu, B; Malecki, Pa; Malecki, P; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V; Malyukov, S; Mameghani, R; Mamuzic, J; Manabe, A; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Mangeard, P S; Manhaes de Andrade Filho, L; Manjarres Ramos, J A; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mapelli, A; Mapelli, L; March, L; Marchand, J F; Marchese, F; Marchiori, G; Marcisovsky, M; Marino, C P; Marroquim, F; Marshall, Z; Martens, F K; Marti, L F; Marti-Garcia, S; Martin, B; Martin, B; Martin, J P; Martin, T A; Martin, V J; Martin dit Latour, B; Martin-Haugh, S; Martinez, M; Martinez Outschoorn, V; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massaro, G; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Matricon, P; Matsunaga, H; Matsushita, T; Mattravers, C; Maurer, J; Maxfield, S J; Mayne, A; Mazini, R; Mazur, M; Mazzaferro, L; Mazzanti, M; Mc Donald, J; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; Mclaughlan, T; McMahon, S J; McPherson, R A; Meade, A; Mechnich, J; Mechtel, M; Medinnis, M; Meera-Lebbai, R; Meguro, T; Mehdiyev, R; Mehlhase, S; Mehta, A; Meier, K; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Meloni, F; Mendoza Navas, L; Meng, Z; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Merritt, H; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Meyer, J; Meyer, T C; Michal, S; Micu, L; Middleton, R P; 
Migas, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Miller, D W; Miller, R J; Mills, W J; Mills, C; Milov, A; Milstead, D A; Milstein, D; Minaenko, A A; Miñano Moya, M; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Mitrevski, J; Mitsou, V A; Mitsui, S; Miyagawa, P S; Mjörnmark, J U; Moa, T; Moeller, V; Mönig, K; Möser, N; Mohapatra, S; Mohr, W; Moles-Valls, R; Molfetas, A; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Moorhead, G F; Mora Herrera, C; Moraes, A; Morange, N; Morel, J; Morello, G; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Moser, H G; Mosidze, M; Moss, J; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Mueller, F; Mueller, J; Mueller, K; Müller, T A; Mueller, T; Muenstermann, D; Munwes, Y; Murray, W J; Mussche, I; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Nanava, G; Napier, A; Narayan, R; Nash, M; Nattermann, T; Naumann, T; Navarro, G; Neal, H A; Nechaeva, P Yu; Neep, T J; Negri, A; Negri, G; Negrini, M; Nektarijevic, S; Nelson, A; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neusiedl, A; Neves, R M; Nevski, P; Newcomer, F M; Newman, P R; Nguyen Thi Hong, V; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Niedercorn, F; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsen, H; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Norberg, S; Nordberg, M; Norton, P R; Novakova, J; Nozaki, M; Nozka, L; Nugent, I M; Nuncio-Quiroz, A-E; Nunes Hanninger, G; Nunnemann, T; Nurse, E; O'Brien, B J; O'Neil, D C; O'Shea, V; Oakes, L B; Oakham, F G; Oberlack, H; Ocariz, J; Ochi, A; Oda, S; Odaka, S; Odier, J; Ogren, 
H; Oh, A; Oh, S H; Ohm, C C; Ohshima, T; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olchevski, A G; Olivares Pino, S A; Oliveira, M; Oliveira Damazio, D; Oliver Garcia, E; Olivito, D; Olszewski, A; Olszowska, J; Onderwaater, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Orlov, I; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Osuna, C; Otero y Garzon, G; Ottersbach, J P; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Ouyang, Q; Ovcharova, A; Owen, M; Owen, S; Ozcan, V E; Ozturk, N; Pacheco Pages, A; Padilla Aranda, C; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Paleari, C P; Palestini, S; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panduro Vazquez, J G; Pani, P; Panikashvili, N; Panitkin, S; Pantea, D; Papadelis, A; Papadopoulou, Th D; Paramonov, A; Paredes Hernandez, D; Park, W; Parker, M A; Parodi, F; Parsons, J A; Parzefall, U; Pashapour, S; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N; Pater, J R; Patricelli, S; Pauly, T; Pecsy, M; Pedraza Lopez, S; Pedraza Morales, M I; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penson, A; Penwell, J; Perantoni, M; Perez, K; Perez Cavalcanti, T; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrino, R; Perrodo, P; Peshekhonov, V D; Peters, K; Petersen, B A; Petersen, J; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Petschull, D; Petteni, M; Pezoa, R; Phan, A; Phillips, P W; Piacquadio, G; Picazio, A; Piccaro, E; Piccinini, M; Piec, S M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Pinto, B; Pizio, C; Plamondon, M; Pleier, M-A; Plotnikova, E; Poblaguev, A; Poddar, S; Podlyski, F; Poggioli, L; Pohl, D; Pohl, M; Polesello, G; Policicchio, A; Polini, A; Poll, J; Polychronakos, V; Pomeroy, D; Pommès, K; Pontecorvo, L; 
Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Pospelov, G E; Pospisil, S; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Prabhu, R; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Pretzl, K; Price, D; Price, J; Price, L E; Prieur, D; Primavera, M; Prokofiev, K; Prokoshin, F; Protopopescu, S; Proudfoot, J; Prudent, X; Przybycien, M; Przysiezniak, H; Psoroulas, S; Ptacek, E; Pueschel, E; Purdham, J; Purohit, M; Puzo, P; Pylypchenko, Y; Qian, J; Quadt, A; Quarrie, D R; Quayle, W B; Quinonez, F; Raas, M; Radeka, V; Radescu, V; Radloff, P; Rador, T; Ragusa, F; Rahal, G; Rahimi, A M; Rahm, D; Rajagopalan, S; Rammensee, M; Rammes, M; Randle-Conde, A S; Randrianarivony, K; Rauscher, F; Rave, T C; Raymond, M; Read, A L; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Reinherz-Aronis, E; Reinsch, A; Reisinger, I; Rembser, C; Ren, Z L; Renaud, A; Rescigno, M; Resconi, S; Resende, B; Reznicek, P; Rezvani, R; Richter, R; Richter-Was, E; Ridel, M; Rijpstra, M; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Rios, R R; Riu, I; Rivoltella, G; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Rocha de Lima, J G; Roda, C; Roda Dos Santos, D; Roe, A; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romeo, G; Romero Adam, E; Rompotis, N; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, A; Rose, M; Rosenbaum, G A; Rosenberg, E I; Rosendahl, P L; Rosenthal, O; Rosselet, L; Rossetti, V; Rossi, E; Rossi, L P; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Ruckstuhl, N; Rud, V I; Rudolph, C; Rudolph, G; Rühr, F; Ruiz-Martinez, A; Rumyantsev, L; Rurikova, Z; Rusakovich, N A; Rutherfoord, J P; Ruzicka, P; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sadeh, I; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Sakamoto, H; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Salihagic, 
D; Salnikov, A; Salt, J; Salvachua Ferrando, B M; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Samset, B H; Sanchez, A; Sanchez Martinez, V; Sandaker, H; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sankey, D P C; Sansoni, A; Santamarina Rios, C; Santoni, C; Santonico, R; Santos, H; Saraiva, J G; Sarangi, T; Sarkisyan-Grinbaum, E; Sarri, F; Sartisohn, G; Sasaki, O; Sasaki, Y; Sasao, N; Satsounkevitch, I; Sauvage, G; Sauvan, E; Sauvan, J B; Savard, P; Savinov, V; Savu, D O; Sawyer, L; Saxon, D H; Saxon, J; Sbarra, C; Sbrizzi, A; Scannicchio, D A; Scarcella, M; Schaarschmidt, J; Schacht, P; Schaefer, D; Schäfer, U; Schaelicke, A; Schaepe, S; Schaetzel, S; Schaffer, A C; Schaile, D; Schamberger, R D; Schamov, A G; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, S; Schmitz, M; Schneider, B; Schnoor, U; Schoeffel, L; Schoening, A; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schram, M; Schroeder, C; Schroer, N; Schultens, M J; Schultes, J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwierz, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciolla, G; Scott, W G; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellden, B; Sellers, G; Seman, M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Seuster, R; Severini, H; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shank, J T; Shao, Q T; Shapiro, M; Shatalov, P B; Shaw, K; Sherman, D; Sherwood, P; Shimizu, S; Shimojima, M; Shin, T; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, D; Shrestha, S; Shulga, E; Shupe, M A; Sicho, P; Sidoti, A; Siegert, F; Sijacki, Dj; Silbert, O; Silva, J; Silver, Y; Silverstein, D; Silverstein, S 
B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinnari, L A; Skottowe, H P; Skovpen, K; Skubic, P; Slater, M; Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, B C; Smith, D; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snow, S W; Snow, J; Snyder, S; Sobie, R; Sodomka, J; Soffer, A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solfaroli Camillocci, E; Solodkov, A A; Solovyanov, O V; Solovyev, V; Soni, N; Sopko, V; Sopko, B; Sosebee, M; Soualah, R; Soukharev, A; Spagnolo, S; Spanò, F; Spighi, R; Spigo, G; Spiwoks, R; Spousta, M; Spreitzer, T; Spurlock, B; St Denis, R D; Stahlman, J; Stamen, R; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Staude, A; Stavina, P; Steele, G; Steinbach, P; Steinberg, P; Stekl, I; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoerig, K; Stoicea, G; Stonjek, S; Strachota, P; Stradling, A R; Straessner, A; Strandberg, J; Strandberg, S; Strandlie, A; Strang, M; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Strong, J A; Stroynowski, R; Stugu, B; Stumer, I; Stupak, J; Sturm, P; Styles, N A; Soh, D A; Su, D; Subramania, Hs; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Suzuki, Y; Svatos, M; Swedish, S; Sykora, I; Sykora, T; Sánchez, J; Ta, D; Tackmann, K; Taffard, A; Tafirout, R; Taiblum, N; Takahashi, Y; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A; Tamsett, M C; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tani, K; 
Tannoury, N; Tapprogge, S; Tardif, D; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tassi, E; Tatarkhanov, M; Tayalati, Y; Taylor, C; Taylor, F E; Taylor, G N; Taylor, W; Teinturier, M; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Terada, S; Terashi, K; Terron, J; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thoma, S; Thomas, J P; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tic, T; Tikhomirov, V O; Tikhonov, Y A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tomoto, M; Tompkins, L; Toms, K; Tonoyan, A; Topfel, C; Topilin, N D; Torchiani, I; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Triplett, N; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiakiris, M; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsung, J-W; Tsuno, S; Tsybychev, D; Tua, A; Tudorache, A; Tudorache, V; Tuggle, J M; Turala, M; Turecek, D; Turk Cakir, I; Turlay, E; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Tzanakos, G; Uchida, K; Ueda, I; Ueno, R; Ugland, M; Uhlenbrock, M; Uhrmacher, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Unno, Y; Urbaniec, D; Urquijo, P; Usai, G; Uslenghi, M; Vacavant, L; Vacek, V; Vachon, B; Vahsen, S; Valenta, J; Valentinetti, S; Valero, A; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Berg, R; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Poel, E; van der Ster, D; van Eldik, N; van Gemmeren, P; van Vulpen, I; Vanadia, M; Vandelli, W; Vaniachine, A; Vankov, P; 
Vannucci, F; Vari, R; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vassilakopoulos, V I; Vazeille, F; Vazquez Schroeder, T; Vegni, G; Veillet, J J; Veloso, F; Veness, R; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinek, E; Vinogradov, V B; Virchaux, M; Virzi, J; Vitells, O; Viti, M; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vokac, P; Volpi, G; Volpi, M; Volpini, G; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorwerk, V; Vos, M; Voss, R; Voss, T T; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Wagner, W; Wagner, P; Wahlen, H; Wahrmund, S; Wakabayashi, J; Walch, S; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, H; Wang, H; Wang, J; Wang, J; Wang, R; Wang, S M; Wang, T; Warburton, A; Ward, C P; Warsinsky, M; Washbrook, A; Wasicki, C; Watanabe, I; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, A T; Waugh, B M; Weber, M S; Weber, P; Webster, J S; Weidberg, A R; Weigell, P; Weingarten, J; Weiser, C; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Werth, M; Wessels, M; Wetter, J; Weydert, C; Whalen, K; Wheeler-Ellis, S J; White, A; White, M J; White, S; Whitehead, S R; Whiteson, D; Whittington, D; Wicek, F; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilhelm, I; Wilkens, H G; Will, J Z; Williams, E; Williams, H H; Willis, W; Willocq, S; Wilson, J A; Wilson, M G; Wilson, A; Wingerter-Seez, I; Winkelmann, S; Winklmeier, F; Wittgen, M; Wollstadt, S J; Wolter, M W; Wolters, H; Wong, W C; Wooden, G; Wosiek, B K; 
Wotschack, J; Woudstra, M J; Wozniak, K W; Wraight, K; Wright, M; Wrona, B; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wynne, B M; Xella, S; Xiao, M; Xie, S; Xu, C; Xu, D; Yabsley, B; Yacoob, S; Yamada, M; Yamaguchi, H; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamazaki, T; Yamazaki, Y; Yan, Z; Yang, H; Yang, U K; Yang, Y; Yang, Z; Yanush, S; Yao, L; Yao, Y; Yasu, Y; Ybeles Smit, G V; Ye, J; Ye, S; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J; Youssef, S; Yu, D; Yu, J; Yu, J; Yuan, L; Yurkewicz, A; Zabinski, B; Zaidan, R; Zaitsev, A M; Zajacova, Z; Zanello, L; Zanzi, D; Zaytsev, A; Zeitnitz, C; Zeman, M; Zemla, A; Zendler, C; Zenin, O; Ženiš, T; Zinonos, Z; Zenz, S; Zerwas, D; Zevi della Porta, G; Zhang, D; Zhang, H; Zhang, J; Zhang, X; Zhang, Z; Zhao, L; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, N; Zhou, Y; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhuravlov, V; Zieminska, D; Zimin, N I; Zimmermann, R; Zimmermann, S; Zimmermann, S; Ziolkowski, M; Zitoun, R; Živković, L; Zmouchko, V V; Zobernig, G; Zoccoli, A; zur Nedden, M; Zutshi, V; Zwalinski, L
2013-01-11
The ATLAS experiment has observed 1995 Z boson candidates in data corresponding to 0.15 nb⁻¹ of integrated luminosity obtained in the 2011 LHC Pb+Pb run at √s_NN = 2.76 TeV. The Z bosons are reconstructed via dielectron and dimuon decay channels, with a background contamination of less than 3%. Results from the two channels are consistent and are combined. Within the statistical and systematic uncertainties, the per-event Z boson yield is proportional to the number of binary collisions estimated by the Glauber model. The elliptic anisotropy of the azimuthal distribution of the Z boson with respect to the event plane is found to be consistent with zero.
Interactions of neanderthals and modern humans: what can be inferred from mitochondrial DNA?
Cyran, Krzysztof A; Kimmel, Marek
2005-07-01
This paper reviews the state-of-the-art knowledge concerning the relationship between Neanderthals and Upper Paleolithic modern humans. The branching-process method is applied to infer the upper limit of hypothetical Neanderthal admixture, consistent with the evidence based on mitochondrial DNA sequences of contemporary modern humans, as well as Neanderthal and early modern European H. sapiens fossils. As a result, a maximum value of 15% admixture is obtained. This estimate is discussed in the context of its consequences for the two competing theories of modern human origin.
Utilization of high-frequency Rayleigh waves in near-surface geophysics
Xia, J.; Miller, R.D.; Park, C.B.; Ivanov, J.; Tian, G.; Chen, C.
2004-01-01
Shear-wave velocities can be derived by inverting the dispersive phase velocity of surface waves. Multichannel analysis of surface waves (MASW) is one technique for inverting high-frequency Rayleigh waves. The process includes acquisition of high-frequency broad-band Rayleigh waves, efficient and accurate algorithms designed to extract Rayleigh-wave dispersion curves from the recorded wavefield, and stable and efficient inversion algorithms to obtain near-surface S-wave velocity profiles. MASW estimates S-wave velocity from multichannel vertical-component data and consists of data acquisition, dispersion-curve picking, and inversion.
Mathematical modeling of a method for measuring relative dielectric permittivity
NASA Astrophysics Data System (ADS)
Plotnikova, I. V.; Chicherina, N. V.; Stepanov, A. B.
2018-05-01
A method of measuring relative permittivities and the position of the interface between layers of a liquid medium is considered in this article. An electric capacitor is a system consisting of two conductors separated by a dielectric layer. It is mathematically proven that at any given time it is possible to obtain the values of the relative permittivity in the layers of the liquid medium and to determine the level of the interface between the layers of the two-layer liquid. An estimate of the measurement errors is also provided.
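The two-layer capacitor relation can be made concrete with a short sketch. Assuming an ideal parallel-plate geometry (the dimensions and permittivities below are illustrative placeholders, not values from the paper), the two dielectric layers act as capacitors in series, and the measured capacitance can be inverted for the interface height:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def series_capacitance(area, h1, eps1, h2, eps2):
    """Capacitance of two stacked dielectric layers (series combination)."""
    c1 = EPS0 * eps1 * area / h1
    c2 = EPS0 * eps2 * area / h2
    return c1 * c2 / (c1 + c2)

def interface_level(c_meas, area, h_total, eps1, eps2):
    """Invert a measured capacitance for the interface height h1.

    From 1/C = h1/(EPS0*eps1*A) + (H - h1)/(EPS0*eps2*A),
    solve the linear equation for h1.
    """
    k = EPS0 * area / c_meas  # equals h1/eps1 + (H - h1)/eps2
    return (k - h_total / eps2) / (1.0 / eps1 - 1.0 / eps2)
```

Because the series relation is linear in 1/C, the inversion for h1 is exact whenever the two permittivities differ.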
MATSurv: multisensor air traffic surveillance system
NASA Astrophysics Data System (ADS)
Yeddanapudi, Murali; Bar-Shalom, Yaakov; Pattipati, Krishna R.; Gassner, Richard R.
1995-09-01
This paper deals with the design and implementation of MATSurv 1, an experimental multisensor air traffic surveillance system. The proposed system consists of a Kalman-filter-based state estimator used in conjunction with a 2D sliding-window assignment algorithm. Real data from two FAA radars are used to evaluate the performance of this algorithm. The results indicate that the proposed algorithm provides a superior classification of the measurements into tracks (i.e., the most likely aircraft trajectories) when compared to the aircraft trajectories obtained using the measurement IDs (squawk or IFF code).
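As a hedged illustration of the state-estimation component only (the sliding-window assignment step is omitted, and the noise parameters below are made-up defaults, not MATSurv's), one predict/update cycle of a one-dimensional constant-velocity Kalman filter might look like:

```python
import numpy as np

def kalman_step(x, P, z, dt, q=1.0, r=100.0):
    """One predict/update cycle for a position-velocity state with a
    position-only measurement z (all tuning values are illustrative)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity transition
    H = np.array([[1.0, 0.0]])                   # measure position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])          # process noise covariance
    R = np.array([[r]])                          # measurement noise covariance
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Fed a sequence of radar position reports, the filter converges to the target's position and velocity; the real system would run one such filter per track hypothesis produced by the assignment algorithm.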
Mirabueno, Ma. Hannah T.; Okuno, Mitsuru; Nakamura, Toshio; Newhall, Christopher G.; Kobayashi, Tetsuo
2006-01-01
This paper presents AMS 14C dates of paleosols intercalated with tephra layers in the vicinity of Mayon Volcano, southern Luzon, Philippines. The obtained 14C dates are broadly consistent with the stratigraphy of the Mayon tephra group. On the basis of the calibrated 14C age of the soil layer directly overlying the lowest ash layer, the oldest eruptive event must have taken place shortly before 20 cal kyr BP. This age is younger than previous estimates for Mayon.
Numerical solution for weight reduction model due to health campaigns in Spain
NASA Astrophysics Data System (ADS)
Mohammed, Maha A.; Noor, Noor Fadiya Mohd; Siri, Zailan; Ibrahim, Adriana Irawati Nur
2015-10-01
A transition model between three subpopulations, based on the Body Mass Index of the Valencia community in Spain, is considered. No changes in population nutritional habits or public health strategies on weight reduction until 2030 are assumed. The system of ordinary differential equations is solved using a higher-order Runge-Kutta method. The numerical results obtained are compared with the predicted values of subpopulation proportions based on statistical estimation in 2013, 2015 and 2030. The relative approximate error is calculated. The consistency of the Runge-Kutta method in solving the model is discussed.
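A minimal sketch of this kind of scheme, assuming a classical fourth-order Runge-Kutta step and a made-up three-compartment transition model (the rate constants are placeholders, not the fitted Valencia parameters):

```python
def rk4_step(f, t, y, h):
    """One classical RK4 step for y' = f(t, y), with y a list of floats."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def model(t, y, b1=0.02, b2=0.01):
    """Toy one-way flow: normal-weight -> overweight -> obese."""
    n, ow, ob = y
    return [-b1 * n, b1 * n - b2 * ow, b2 * ow]
```

Because the transition terms only move mass between compartments, the subpopulation proportions sum to a constant, which gives a quick sanity check on the integrator.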
NASA Astrophysics Data System (ADS)
Dondurur, Derman
2005-11-01
The Normalized Full Gradient (NFG) method was proposed in the mid-1960s and has generally been used for the downward continuation of potential field data. The method eliminates the side oscillations that appear on continuation curves when passing through the depth of the anomalous body. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Some experiments were performed on theoretical Slingram model anomalies in a free-space environment using a perfectly conductive thin tabular conductor with an infinite depth extent. The theoretical Slingram responses were obtained for different depths, dip angles and coil separations, and it was observed from the NFG fields of the theoretical anomalies that the NFG sections yield depth information for the top of the conductor at low harmonic numbers. The NFG sections consisted of two main local maxima located at both sides of the central negative Slingram anomalies. It is concluded that these two maxima also locate the maximum anomaly gradient points, which indicate the depth of the anomaly target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component and correct depth estimates were obtained even for the horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was larger.
Taylor, Zeike A; Kirk, Thomas B; Miller, Karol
2007-10-01
The theoretical framework developed in a companion paper (Part I) is used to derive estimates of the mechanical response of two meniscal cartilage specimens. The previously developed framework consisted of a constitutive model capable of incorporating confocal image-derived tissue microstructural data. In the present paper (Part II), fibre and matrix constitutive parameters are first estimated from mechanical testing of a batch of specimens similar to, but independent from, those under consideration. Image analysis techniques which allow estimation of tissue microstructural parameters from confocal images are presented. The constitutive model and image-derived structural parameters are then used to predict the reaction force history of the two meniscal specimens subjected to partially confined compression. The predictions are made on the basis of the specimens' individual structural condition as assessed by confocal microscopy and involve no tuning of material parameters. Although the model does not reproduce all features of the experimental curves, as an unfitted estimate of mechanical response the prediction is quite accurate. In light of the obtained results it is judged that more general non-invasive estimation of tissue mechanical properties is possible using the developed framework.
Estimating phonation threshold pressure.
Fisher, K V; Swank, P R
1997-10-01
Phonation threshold pressure (PTP) is the minimum subglottal pressure required to initiate vocal fold oscillation. Although potentially useful clinically, PTP is difficult to estimate noninvasively because of limitations to vocal motor control near the threshold of soft phonation. Previous investigators observed, for example, that trained subjects were unable to produce flat, consistent oral pressure peaks during /pae/ syllable strings when they attempted to phonate as softly as possible (Verdolini-Marston, Titze, & Druker, 1990). The present study aimed to determine if nasal airflow or vowel context affected phonation threshold pressure as estimated from oral pressure (Smitheran & Hixon, 1981) in 5 untrained female speakers with normal velopharyngeal and voice function. Nasal airflow during /p/ occlusion was observed for 3 of 5 participants when they attempted to phonate near threshold pressure. When the nose was occluded, nasal airflow was reduced or eliminated during /p/; however, individuals then evidenced compensatory changes in glottal adduction and/or respiratory effort that may be expected to alter PTP estimates. Results demonstrate the importance of monitoring nasal flow (or the flow zero point in undivided masks) when obtaining PTP measurements noninvasively. Results also highlight the need to pursue improved methods for noninvasive estimation of PTP.
Poynton, Clare; Jenkinson, Mark; Adalsteinsson, Elfar; Sullivan, Edith V.; Pfefferbaum, Adolf; Wells, William
2015-01-01
There is increasing evidence that iron deposition occurs in specific regions of the brain in normal aging and neurodegenerative disorders such as Parkinson's, Huntington's, and Alzheimer's disease. Iron deposition changes the magnetic susceptibility of tissue, which alters the MR signal phase, and allows estimation of susceptibility differences using quantitative susceptibility mapping (QSM). We present a method for quantifying susceptibility by inversion of a perturbation model, or ‘QSIP’. The perturbation model relates phase to susceptibility using a kernel calculated in the spatial domain, in contrast to previous Fourier-based techniques. A tissue/air susceptibility atlas is used to estimate B0 inhomogeneity. QSIP estimates in young and elderly subjects are compared to postmortem iron estimates, maps of the Field-Dependent Relaxation Rate Increase (FDRI), and the L1-QSM method. Results for both groups showed excellent agreement with published postmortem data and in-vivo FDRI: statistically significant Spearman correlations ranging from Rho = 0.905 to Rho = 1.00 were obtained. QSIP also showed improvement over FDRI and L1-QSM: reduced variance in susceptibility estimates and statistically significant group differences were detected in striatal and brainstem nuclei, consistent with age-dependent iron accumulation in these regions. PMID:25248179
Using state-issued identification cards for obesity tracking.
Morris, Daniel S; Schubert, Stacey S; Ngo, Duyen L; Rubado, Dan J; Main, Eric; Douglas, Jae P
2015-01-01
Obesity prevention has emerged as one of public health's top priorities. Public health agencies need reliable data on population health status to guide prevention efforts. Existing survey data sources provide county-level estimates; obtaining sub-county estimates from survey data can be prohibitively expensive. State-issued identification cards are an alternate data source for community-level obesity estimates. We computed body mass index for 3.2 million adult Oregonians who were issued a driver license or identification card between 2003 and 2010. Statewide estimates of obesity prevalence and average body mass index were compared to the Oregon Behavioral Risk Factor Surveillance System (BRFSS). After geocoding addresses, we calculated average adult body mass index for every census tract and block group in the state. Sub-county estimates reveal striking patterns in the population's weight status. Annual obesity prevalence estimates from identification cards averaged 18% lower than the BRFSS for men and 31% lower for women. Body mass index estimates averaged 2% lower than the BRFSS for men and 5% lower for women. Identification card records are a promising data source to augment tracking of obesity. People do tend to misrepresent their weight, but the consistent bias does not obscure patterns and trends. Large numbers of records allow for stable estimates for small geographic areas. Copyright © 2014 Asian Oceanian Association for the Study of Obesity. All rights reserved.
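The per-record computation is straightforward. A sketch, assuming the card stores height in inches and weight in pounds (the field conventions are an assumption, not taken from the Oregon data):

```python
def bmi_from_card(height_in, weight_lb):
    """BMI = weight (kg) / height (m)^2, converted from inches and pounds."""
    height_m = height_in * 0.0254
    weight_kg = weight_lb * 0.45359237
    return weight_kg / height_m ** 2

def is_obese(bmi):
    """Standard adult cutoff: BMI of 30 or above."""
    return bmi >= 30.0
```

Applied record by record and then averaged within each geocoded census tract or block group, this yields the sub-county estimates described above.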
Monolayer-crystal streptavidin support films provide an internal standard of cryo-EM image quality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Bong-Gyoon; Watson, Zoe; Cate, Jamie H. D.
Analysis of images of biotinylated Escherichia coli 70S ribosome particles, bound to streptavidin affinity grids, demonstrates that the image-quality of particles can be predicted by the image-quality of the monolayer crystalline support film. Also, the quality of the Thon rings is a good predictor of the image-quality of particles, but only when images of the streptavidin crystals extend to relatively high resolution. When the estimated resolution of streptavidin was 5 Å or worse, for example, the ribosomal density map obtained from 22,697 particles went to only 9.5 Å, while the resolution of the map reached 4.0 Å for the same number of particles when the estimated resolution of the streptavidin crystal was 4 Å or better. It thus is easy to tell which images in a data set ought to be retained for further work, based on the highest resolution seen for Bragg peaks in the computed Fourier transforms of the streptavidin component. The refined density map obtained from 57,826 particles obtained in this way extended to 3.6 Å, a marked improvement over the value of 3.9 Å obtained previously from a subset of 52,433 particles obtained from the same initial data set of 101,213 particles after 3-D classification. These results are consistent with the hypothesis that interaction with the air-water interface can damage particles when the sample becomes too thin. Finally, streptavidin monolayer crystals appear to provide a good indication of when that is the case.
A global logrank test for adaptive treatment strategies based on observational studies.
Li, Zhiguo; Valenstein, Marcia; Pfeiffer, Paul; Ganoczy, Dara
2014-02-28
In studying adaptive treatment strategies, a natural question that is of paramount interest is whether there is any significant difference among all possible treatment strategies. When the outcome variable of interest is time-to-event, we propose an inverse probability weighted logrank test for testing the equivalence of a fixed set of pre-specified adaptive treatment strategies based on data from an observational study. The weights take into account both the possible selection bias in an observational study and the fact that the same subject may be consistent with more than one treatment strategy. The asymptotic distribution of the weighted logrank statistic under the null hypothesis is obtained. We show that, in an observational study where the treatment selection probabilities need to be estimated, the estimation of these probabilities does not have an effect on the asymptotic distribution of the weighted logrank statistic, as long as the estimation of the parameters in the models for these probabilities is √n-consistent. Finite-sample performance of the test is assessed via a simulation study. We also show in the simulation that the test can be fairly robust to misspecification of the models for the probabilities of treatment selection. The method is applied to analyze data on antidepressant adherence time from an observational database maintained at the Department of Veterans Affairs' Serious Mental Illness Treatment Research and Evaluation Center. Copyright © 2013 John Wiley & Sons, Ltd.
The VAST Survey - IV. A wide brown dwarf companion to the A3V star ζ Delphini
NASA Astrophysics Data System (ADS)
De Rosa, R. J.; Patience, J.; Ward-Duong, K.; Vigan, A.; Marois, C.; Song, I.; Macintosh, B.; Graham, J. R.; Doyon, R.; Bessell, M. S.; Lai, O.; McCarthy, D. W.; Kulesa, C.
2014-12-01
We report the discovery of a wide comoving substellar companion to the nearby (D = 67.5 ± 1.1 pc) A3V star ζ Delphini based on imaging and follow-up spectroscopic observations obtained during the course of our Volume-limited A-Star (VAST) multiplicity survey. ζ Del was observed over a five-year baseline with adaptive optics, revealing the presence of a previously unresolved companion with a proper motion consistent with that of the A-type primary. The age of the ζ Del system was estimated as 525 ± 125 Myr based on the position of the primary on the colour-magnitude and temperature-luminosity diagrams. Using intermediate-resolution near-infrared spectroscopy, the spectrum of ζ Del B is shown to be consistent with a mid-L dwarf (L5 ± 2), at a temperature of 1650 ± 200 K. Combining the measured near-infrared magnitude of ζ Del B with the estimated temperature leads to a model-dependent mass estimate of 50 ± 15 MJup, corresponding to a mass ratio of q = 0.019 ± 0.006. At a projected separation of 910 ± 14 au, ζ Del B is among the most widely separated and extreme-mass ratio substellar companions to a main-sequence star resolved to date, providing a rare empirical constraint of the formation of low-mass ratio companions at extremely wide separations.
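The quoted mass ratio can be checked with one line of arithmetic. Assuming a primary mass of about 2.5 solar masses (a typical value for an A3V star; this number is an assumption, not stated in the abstract):

```python
MJUP_PER_MSUN = 1047.57  # approximate Jupiter masses per solar mass

def mass_ratio(companion_mjup, primary_msun):
    """q = M_companion / M_primary, with the companion mass in Jupiter
    masses and the primary mass in solar masses."""
    return companion_mjup / (primary_msun * MJUP_PER_MSUN)
```

With a 50 MJup companion and a 2.5 Msun primary, mass_ratio returns about 0.0191, consistent with the quoted q = 0.019 ± 0.006.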
Post examination of copper ER sensors exposed to bentonite
NASA Astrophysics Data System (ADS)
Kosec, Tadeja; Kranjc, Andrej; Rosborg, Bo; Legat, Andraž
2015-04-01
Copper corrosion in saline solutions under oxic conditions is one of the concerns for the early periods of disposal of spent nuclear fuel in deep geological repositories. The main aim of the study was to investigate the corrosion behaviour of copper during this oxic period. The corrosion rate of pure copper was measured by means of thin electrical resistance (ER) sensors that were placed in a test package containing an oxic bentonite/saline groundwater environment at room temperature for a period of four years. Additionally, the corrosion rate was monitored by electrochemical impedance spectroscopy (EIS) measurements performed on the same ER sensors. By the end of the exposure period the corrosion rate, as estimated by both methods, had dropped to approximately 1.0 μm/year. The corrosion rate was also estimated by the examination of metallographic cross sections. The post-examination tests used to determine the type and extent of the corrosion products included different spectroscopic techniques (XRD and Raman analysis). It was confirmed that the corrosion rate obtained by means of the physical (ER) and electrochemical (EIS) techniques was consistent with that estimated from the metallographic cross-section analysis. The corrosion products consisted of cuprous oxide and paratacamite, the latter very abundant. From the types of attack it can be concluded that the investigated samples of copper in bentonite underwent uneven general corrosion.
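The resistance-to-metal-loss conversion behind ER sensors can be sketched as follows. The element thickness, resistance values, and exposure time below are invented numbers, not the sensor data from this study, and the simple inverse proportionality assumes a sensing element of constant resistivity, length, and width.

```python
# For a thin sensing element, R = rho * L / (w * t), so thickness t is
# inversely proportional to resistance R; metal loss follows from the
# resistance increase over the exposure period.
t0 = 50.0                  # initial element thickness (um), assumed
r0, r = 10.00, 10.42       # initial and final resistance (mOhm), assumed
loss = t0 * (1.0 - r0 / r) # cumulative metal loss (um)
years = 4.0                # exposure period
rate = loss / years        # average corrosion rate (um/year)
print(round(rate, 2))
```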
NASA Technical Reports Server (NTRS)
Cheng, Yen-Ben; Middleton, Elizabeth M.; Zhang, Qingyuan; Huemmrich, Karl F.; Campbell, Petya K. E.; Corp, Lawrence A.; Cook, Bruce D.; Kustas, William P.; Daughtry, Craig S.
2013-01-01
The utilization of remotely sensed observations for light use efficiency (LUE) and tower-based gross primary production (GPP) estimates was studied in a USDA cornfield. Nadir hyperspectral reflectance measurements were acquired at canopy level during a collaborative field campaign conducted in four growing seasons. The Photochemical Reflectance Index (PRI) and solar-induced chlorophyll fluorescence (SIF) were derived. SIF retrievals were accomplished in the two telluric atmospheric oxygen absorption features centered at 688 nm (O2-B) and 760 nm (O2-A). The PRI and SIF were examined in conjunction with GPP and LUE determined by flux tower-based measurements. All of these fluxes, environmental variables, and the PRI and SIF exhibited diurnal as well as day-to-day dynamics across the four growing seasons. Consistent with previous studies, the PRI was shown to be related to LUE (r2 = 0.54 with a logarithmic fit), but the relationship varied each year. By combining the PRI and SIF in a linear regression model, stronger performance for GPP estimation was obtained. The strongest relationship (r2 = 0.80, RMSE = 0.186 mg CO2/m2/s) was achieved when using the PRI and SIF retrievals at 688 nm. Cross-validation approaches were utilized to demonstrate the robustness and consistency of the performance. This study highlights a GPP retrieval method based entirely on hyperspectral remote sensing observations.
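A minimal sketch of the kind of linear regression model combining PRI and SIF for GPP estimation, fit by ordinary least squares on synthetic data. The coefficients, units, and noise level are invented illustrations, not the campaign's values.

```python
import numpy as np

# Synthetic PRI, SIF, and tower GPP samples with an assumed linear relation
rng = np.random.default_rng(0)
n = 60
pri = rng.uniform(-0.05, 0.05, n)
sif = rng.uniform(0.5, 2.5, n)                        # arbitrary SIF units
gpp = 0.8 + 6.0 * pri + 0.9 * sif + rng.normal(0, 0.1, n)

# Fit GPP ~ b0 + b1*PRI + b2*SIF by least squares
X = np.column_stack([np.ones(n), pri, sif])
beta, *_ = np.linalg.lstsq(X, gpp, rcond=None)

# Coefficient of determination r^2 and RMSE of the fit
pred = X @ beta
ss_res = np.sum((gpp - pred) ** 2)
ss_tot = np.sum((gpp - gpp.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(ss_res / n)
print(round(r2, 3), round(rmse, 3))
```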
Wave Breaking Dissipation in Fetch-Limited Seas
NASA Astrophysics Data System (ADS)
Schwendeman, M.; Thomson, J. M.; Gemmrich, J.
2012-12-01
Breaking waves on the ocean surface control wave growth and enhance air-sea interaction, yet field measurements of breaking are limited. A promising technique for field measurements of wave breaking uses the breaking crest length distribution Λ(c), introduced by Phillips (1985). However, calculating dynamic quantities from Λ(c) requires knowledge of the breaking strength parameter, b. Estimates of b have varied over many orders of magnitude, and recent studies have attempted to model b in terms of sea state, such as wave steepness or saturation. We present comprehensive observations of breaking in fetch-limited conditions from Juan de Fuca Strait, WA. The wave evolution along fetch is explained by an observed energy budget using the radiative transfer equation (RTE), and the evolution is consistent with existing empirical fetch laws. Estimates of Λ(c) increase along fetch and are consistent with directly measured breaking rates. Using novel in situ measures of dissipation, as well as a residual term from the RTE budget, we obtain robust estimates of the wave breaking strength b. Results suggest that b decreases with wave steepness and saturation, in contrast with recent laboratory results (Drazen et al., 2008). This trend is discussed in terms of the fetch evolution and associated broadening of the equilibrium range in the wave spectra.
Figure: Map of drifter tracks colored by wave height for two days in Juan de Fuca Strait, WA.
Zhou, Zai Ming; Yang, Yan Ming; Chen, Ben Qing
2016-12-01
The effective management and utilization of coastal wetland resources and the ecological environment require high-precision investigation and analysis of the fractional vegetation cover of the invasive species Spartina alterniflora. In this study, Sansha Bay was selected as the experimental region, and visible and multi-spectral images obtained by low-altitude UAV in the region were used to monitor the fractional vegetation cover of S. alterniflora. Fractional vegetation cover parameters in the multi-spectral images were then estimated by an NDVI index model, and the accuracy was tested against visible images as references. Results showed that vegetation cover of S. alterniflora in the image area was mainly at the medium-high level (40%-60%) and high level (60%-80%). The root mean square error (RMSE) between the NDVI model estimates and the true values was 0.06, while the coefficient of determination R2 was 0.92, indicating good consistency between the estimated and true values.
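A minimal sketch of an NDVI-based fractional vegetation cover (FVC) estimate. The dimidiate (two-endmember) pixel model and the soil/vegetation NDVI endmembers below are common assumptions, not necessarily the exact index model calibrated in this study, and the reflectances and reference covers are invented.

```python
import numpy as np

# FVC from NDVI via the dimidiate pixel model, clipped to [0, 1]
def fvc_from_ndvi(nir, red, ndvi_soil=0.05, ndvi_veg=0.85):
    ndvi = (nir - red) / (nir + red)
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

# Invented NIR/red reflectances for three pixels
nir = np.array([0.40, 0.35, 0.30])
red = np.array([0.05, 0.10, 0.20])
fvc = fvc_from_ndvi(nir, red)

# Accuracy check against reference cover fractions (e.g. from visible imagery)
truth = np.array([0.90, 0.55, 0.20])
rmse = np.sqrt(np.mean((fvc - truth) ** 2))
print(fvc.round(2), round(rmse, 3))
```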
Light field geometry of a Standard Plenoptic Camera.
Hahne, Christopher; Aggoun, Amar; Haxha, Shyqyri; Velisavljevic, Vladan; Fernández, Juan Carlos Jácome
2014-11-03
The Standard Plenoptic Camera (SPC) is an innovation in photography that allows two-dimensional images focused at different depths to be acquired from a single exposure. Contrary to conventional cameras, the SPC consists of a micro lens array and a main lens projecting virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the position and baseline of the virtual lenses, which correspond to an equivalent camera array, are derived. On the basis of the paraxial approximation, a ray tracing model employing linear equations has been developed and implemented in MATLAB. The optics simulation tool Zemax is utilized for validation purposes. By designing a realistic SPC, experiments demonstrate that a predicted image refocusing distance at 3.5 m deviates by less than 11% from the simulation in Zemax, whereas baseline estimations indicate no significant difference. Applying the proposed methodology will enable an alternative to traditional depth map acquisition by disparity analysis.
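The linear-equation paraxial ray tracing the abstract refers to can be sketched with standard 2x2 ray transfer (ABCD) matrices. The focal lengths and spacing below are invented examples, not the validated SPC design.

```python
import numpy as np

# Free-space translation by distance d (units consistent with f)
def propagate(d):
    return np.array([[1.0, d], [0.0, 1.0]])

# Refraction at a thin lens of focal length f
def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Assumed system: main lens (f = 100 mm), then 102 mm of air, then a
# micro lens (f = 2 mm); matrices compose right-to-left
system = thin_lens(2.0) @ propagate(102.0) @ thin_lens(100.0)

# Trace a ray entering at height 1 mm, parallel to the axis: [height, angle]
ray_out = system @ np.array([1.0, 0.0])
print(ray_out)
```

Each ray is a (height, angle) pair, so the whole optical train reduces to one matrix product, which is what makes the model a set of linear equations.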
NASA Technical Reports Server (NTRS)
Lepping, R. P.; Chao, J. K.
1976-01-01
An estimated shape is presented for the surface of the flare-associated interplanetary shock of February 15-16, 1967, as seen in the ecliptic-plane cross section. The estimate is based on observations by Explorer 33 and Pioneers 6 and 7. The estimated shock normal at the Explorer 33 position is obtained by a least-squares shock parameter-fitting procedure for that satellite's data; the shock normal at the Pioneer 7 position is found by using the magnetic coplanarity theorem and magnetic-field data. The average shock speed from the sun to each spacecraft is determined along with the local speed at Explorer 33 and the relations between these speeds and the position of the initiating solar flare. The Explorer 33 shock normal is found to be severely inclined and not typical of interplanetary shocks. It is shown that the curvature of the shock surface in the ecliptic plane near the earth-Pioneer 7 region is consistent with a radius of not more than 0.4 AU.
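The magnetic coplanarity theorem mentioned above can be sketched in a few lines: the shock normal is parallel to (B1 x B2) x (B1 - B2), where B1 and B2 are the upstream and downstream magnetic fields. The field vectors here are invented examples, not the Explorer 33 or Pioneer 7 data.

```python
import numpy as np

# Shock normal from magnetic coplanarity, returned as a unit vector
def coplanarity_normal(b1, b2):
    n = np.cross(np.cross(b1, b2), b1 - b2)
    return n / np.linalg.norm(n)

b_up = np.array([4.0, -2.0, 1.0])     # upstream field (nT), assumed
b_down = np.array([7.0, -1.0, 3.0])   # downstream field (nT), assumed
n = coplanarity_normal(b_up, b_down)
print(n)
```

By construction the normal is perpendicular to the jump in B, which is the coplanarity condition the method exploits.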
Kaburlasos, V G; Petridis, V; Brett, P N; Baker, D A
1999-12-01
Stapedotomy is a surgical procedure aimed at the treatment of hearing impairment due to otosclerosis. The treatment consists of drilling a hole through the stapes bone in the inner ear in order to insert a prosthesis. Safety precautions require knowledge of the nonmeasurable stapes thickness. The technical goal herein has been the design of high-level controls for an intelligent mechatronics drilling tool in order to enable the estimation of stapes thickness from measurable drilling data. The goal has been met by learning a map between drilling features, hence no model of the physical system has been necessary. Learning has been achieved as explained in this paper by a scheme, namely the d-sigma Fuzzy Lattice Neurocomputing (d sigma-FLN) scheme for classification, within the framework of fuzzy lattices. The successful application of the d sigma-FLN scheme is demonstrated in estimating the thickness of a stapes bone "on-line" using drilling data obtained experimentally in the laboratory.
Stegman, Kelly J; Park, Edward J; Dechev, Nikolai
2012-07-01
The motivation of this research is to non-invasively monitor the wrist tendon's displacement and velocity, for purposes of controlling a prosthetic device. This feasibility study aims to determine if the proposed technique using Doppler ultrasound is able to accurately estimate the tendon's instantaneous velocity and displacement. This study is conducted with a tendon mimicking experiment consisting of two different materials: a commercial ultrasound scanner, and a reference linear motion stage set-up. Audio-based output signals are acquired from the ultrasound scanner, and are processed with our proposed Fourier technique to obtain the tendon's velocity and displacement estimates. We then compare our estimates to an external reference system, and also to the ultrasound scanner's own estimates based on its proprietary software. The proposed tendon motion estimation method has been shown to be repeatable, effective and accurate in comparison to the external reference system, and is generally more accurate than the scanner's own estimates. After establishing this feasibility study, future testing will include cadaver-based studies to test the technique on the human arm tendon anatomy, and later on live human test subjects in order to further refine the proposed method for the novel purpose of detecting user-intended tendon motion for controlling wearable prosthetic devices.
Lee, Yu-Chen; Chatterton, Mary Lou; Magnus, Anne; Mohebbi, Mohammadreza; Le, Long Khanh-Dao; Mihalopoulos, Cathrine
2017-12-01
The aim of this project was to detail the costs associated with the high prevalence mental disorders (depression, anxiety-related and substance use) in Australia, using community-based, nationally representative survey data. Respondents diagnosed, within the preceding 12 months, with high prevalence mental disorders using the Confidentialised Unit Record Files of the 2007 National Survey of Mental Health and Wellbeing were analysed. The use of healthcare resources (hospitalisations, consultations and medications), productivity loss, income tax loss and welfare benefits were estimated. Unit costs of healthcare services were obtained from the Independent Hospital Pricing Authority, Medicare and Pharmaceutical Benefits Scheme. Labour participation rates and unemployment rates were determined from the National Survey of Mental Health and Wellbeing. Daily wage rates adjusted by age and sex were obtained from Australian Bureau of Statistics and used to estimate productivity losses. Income tax loss was estimated based on the Australian Taxation Office rates. The average cost of commonly received Government welfare benefits adjusted by age was used to estimate welfare payments. All estimates were expressed in 2013-2014 AUD and presented from multiple perspectives including public sector, individuals, private insurers, health sector and societal. The average annual treatment cost for people seeking treatment was AUD660 (public), AUD195 (individual), AUD1058 (private) and AUD845 from the health sector's perspective. The total annual healthcare cost was estimated at AUD974m, consisting of AUD700m to the public sector, AUD168m to individuals, and AUD107m to the private sector. The total annual productivity loss attributed to the population with high prevalence mental disorders was estimated at AUD11.8b, coupled with the yearly income tax loss at AUD1.23b and welfare payments at AUD12.9b. 
The population with high prevalence mental disorders not only incurs substantial cost to the Australian healthcare system but also large economic losses to society.
Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre
2003-03-01
A solution for discrete multi-exponential analysis of T2 relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
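The multi-exponential decomposition step can be sketched as a non-linear least-squares fit of a bi-exponential decay. The echo times, component amplitudes, T2 values, and noise level are synthetic stand-ins, and the plain initial guess below replaces the total least-squares linear prediction initialization used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-component T2 decay model: S(TE) = A1*exp(-TE/T2a) + A2*exp(-TE/T2b)
def biexp(te, a1, t2_1, a2, t2_2):
    return a1 * np.exp(-te / t2_1) + a2 * np.exp(-te / t2_2)

te = np.arange(10, 330, 10, dtype=float)   # echo times (ms), assumed
rng = np.random.default_rng(1)
signal = biexp(te, 60.0, 20.0, 40.0, 120.0) + rng.normal(0, 0.3, te.size)

# Crude initial guess standing in for the linear-prediction estimates
p0 = [50.0, 15.0, 50.0, 100.0]
popt, _ = curve_fit(biexp, te, signal, p0=p0)
print(popt.round(1))
```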
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.; Shafer, M. F.
1976-01-01
In response to the interest in airplane configuration characteristics at high angles of attack, an unpowered remotely piloted 3/8-scale F-15 airplane model was flight tested. The subsonic stability and control characteristics of this airplane model over an angle of attack range of -20 to 53 deg are documented. The remotely piloted technique for obtaining flight test data was found to provide adequate stability and control derivatives. The remotely piloted technique provided an opportunity to test the aircraft mathematical model in an angle of attack regime not previously examined in flight test. The variation of most of the derivative estimates with angle of attack was found to be consistent, particularly when the data were supplemented by uncertainty levels.
Radial widths, optical depths, and eccentricities of the Uranian rings
NASA Technical Reports Server (NTRS)
Nicholson, P. D.; Matthews, K.; Goldreich, P.
1982-01-01
Observations of the stellar occultation by the Uranian rings of 15/16 August 1980 are used to estimate radial widths and normal optical depths for segments of rings 6, 5, 4, alpha, beta, eta, gamma, and delta. Synthetic occultation profiles are generated to match the observed light curves. A review of published data confirms the existence of width-radius relations for rings alpha and beta, and indicates that the optical depths of these two rings vary inversely with their radial widths. Masses are obtained for rings alpha and beta, on the assumption that differential precession is prevented by their self-gravity. A quantitative comparison of seven epsilon-ring occultation profiles obtained over a period of 3.4 yr reveals a consistent structure, which may reflect the presence of unresolved gaps and subrings.
Nanocluster metal films as thermoelectric material for radioisotope mini battery unit
NASA Astrophysics Data System (ADS)
Borisyuk, P. V.; Krasavin, A. V.; Tkalya, E. V.; Lebedinskii, Yu. Yu.; Vasiliev, O. S.; Yakovlev, V. P.; Kozlova, T. I.; Fetisov, V. V.
2016-10-01
The paper is devoted to studying the thermoelectric and structural properties of films based on metal nanoclusters (Au, Pd, Pt). Experimental results on the tunneling conductance of single nanoclusters obtained with scanning tunneling spectroscopy are presented. The obtained data allowed us to evaluate the thermoelectric power of thin films consisting of densely packed individual nanoclusters. It is shown that such thin films can operate as highly efficient thermoelectric materials. A scheme for a miniature thermoelectric radioisotope power source based on the thorium-228 isotope is proposed. The efficiency of the radioisotope battery using thermoelectric converters based on nanocluster metal films is shown to reach values up to 1.3%. The estimated characteristics of the device are comparable with the parameters of up-to-date radioisotope batteries based on nickel-63.
[Chromosome as a chronicler: Genetic dating, historical events, and DNA-genealogic temptation].
Balanovsky, O P; Zaporozhchenko, V V
2016-07-01
Non-recombining portions of the genome, the Y chromosome and mitochondrial DNA, are widely used for research on human population gene pools and reconstruction of their history. These systems allow the genetic dating of clusters of emerging haplotypes. The main method for age estimation is the ρ statistic, the average number of mutations from the founder haplotype to all modern-day haplotypes. A researcher can estimate the age of the cluster by dividing this number by the mutation rate (or, equivalently, multiplying it by the mean time per mutation). The second method of estimation, ASD, is used for STR haplotypes of the Y chromosome and is based on the squared difference in the number of repeats. In addition to these calculation methods, Bayesian modeling methods are gaining new significance. They have greater computational cost and complexity, but they yield an a posteriori distribution of the value of interest that is the most consistent with the experimental data. The mutation rate must be known for both the calculation methods and the modeling methods. It can be determined either through the analysis of lineages or by providing calibration points based on populations with a known formation time. These two approaches resulted in rate estimates for Y-chromosomal STR haplotypes that differ threefold. This contradiction was only recently resolved through the use of sequence data for the complete Y chromosome; “whole-genomic” rates of single nucleotide mutations obtained by both methods are mutually consistent and mark out the areas of application for the different STR marker rates. An issue even more crucial than that of the rates is the correlation of the reconstructed history of a haplogroup (a cluster of haplotypes) with the history of the population. Although the need to distinguish “lineage history” from “population history” arose in the earliest days of phylogeographic research, reconstructing population history using genetic dating requires a number of methods and conditions. 
It is known that population history events leave distinct traces in the history of haplogroups only under certain demographic conditions. Direct identification of a nation's history with the history of its haplogroups is inappropriate and is avoided in population genetic studies, although because of its simplicity and attractiveness it remains a constant temptation for researchers. DNA genealogy, an amateur field that falls outside the borders of even citizen science and consistently equates haplogroup with lineage and population, leading to absurd results (e.g., Eurasia as the origin of humankind), can serve as a warning against such a simplified interpretation of genetic dating results.
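The ρ statistic described above reduces to a short computation. The mutation counts and the assumed rate (one mutation per 3600 years) below are invented for illustration only; real applications must also propagate the large uncertainty in the rate.

```python
import numpy as np

# Age of a haplotype cluster: mean mutational distance from the founder
# haplotype (rho), divided by the mutation rate in mutations per year
def rho_age(mutation_counts, mutations_per_year):
    rho = np.mean(mutation_counts)
    return rho / mutations_per_year

counts = [2, 3, 1, 4, 2, 3]      # mutations from founder, per sampled haplotype
rate = 1.0 / 3600.0              # assumed: one mutation per 3600 years
print(rho_age(counts, rate))     # estimated cluster age in years
```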
Swarzenski, Peter; Reich, Chris; Rudnick, David
2009-01-01
Estimates of submarine ground-water discharge (SGD) into Florida Bay remain one of the least understood components of a regional water balance. To quantify the magnitude and seasonality of SGD into upper Florida Bay, research activities included the use of the natural geochemical tracer 222Rn to examine potential SGD hotspots (222Rn surveys) and to quantify total (saline + fresh water component) SGD rates at select sites (222Rn time-series). To obtain a synoptic map of the 222Rn distribution within our study site in Florida Bay, we set up a flow-through system on a small boat that consisted of a Differential Global Positioning System, a calibrated YSI, Inc. CTD sensor sampling every 0.5 min, and a submersible pump (z = 0.5 m) that continuously fed water into an air/water exchanger plumbed simultaneously into four RAD7 222Rn air monitors. To obtain local advective ground-water flux estimates, 222Rn time-series experiments were deployed at strategic positions across hydrologic and geologic gradients within our study site. These time-series stations consisted of a submersible pump, a Solinst DIVER (to record continuous CTD parameters) and two RAD7 222Rn air monitors plumbed into an air/water exchanger. Repeat time-series 222Rn measurements were conducted for 3-4 days across several tidal excursions. Radon was also measured in the air during each sampling campaign by a dedicated RAD7. We obtained ground-water discharge information by calculating a 222Rn mass balance that accounted for lateral and horizontal exchange, as well as an appropriate ground-water 222Rn end-member activity. Another research component utilized marine continuous resistivity profiling (CRP) surveys to examine the subsurface salinity structure within Florida Bay sediments. This system consisted of an AGI SuperSting 8-channel receiver attached to a streamer cable that had two current (A, B) electrodes and nine potential electrodes spaced 10 m apart. 
A separate DGPS continuously sent position information to the SuperSting. Results indicate that the 222Rn maps provide a useful gauge of relative ground-water discharge into upper Florida Bay. The 222Rn time-series measurements provide a reasonable estimate of site-specific total (saline and fresh) ground-water discharge (mean = 12.5 ± 11.8 cm d-1), while the saline nature of the shallow ground-water at our study site, as evidenced by the CRP results, indicates that most of this discharge must be recycled sea water. The CRP data show some interesting trends that appear to be consistent with subsurface geologic and hydrologic characterization. For example, some of the highest resistivity (the inverse of electrical conductivity) values were recorded where one would expect slight subsurface freshening (for example, bayside Key Largo, or below the C111 canal).
Butler, Ainslie J; Thomas, M Kate; Pintar, Katarina D M
2015-04-01
Enteric illness contributes to a significant burden of illness in Canada and globally. Understanding its sources is a critical step in identifying and preventing health risks. Expert elicitation is a powerful tool, used previously, to obtain information about enteric illness source attribution where information is difficult or expensive to obtain. Thirty-one experts estimated transmission of 28 pathogens via major transmission routes (foodborne, waterborne, animal contact, person-to-person, and other) at the point of consumption. The elicitation consisted of a (snowball) recruitment phase; administration of a pre-survey to collect background information; an introductory webinar; an elicitation survey; a 1-day discussion; survey readministration; and a feedback exercise. Surveys were administered online. Experts were prompted to quantify changes in contamination at the point of entry into the kitchen versus the point of consumption. Estimates were combined via triangular probability distributions, and medians and 90% credible-interval estimates were produced. Transmission was attributed primarily to food for Bacillus cereus, Clostridium perfringens, Cyclospora cayetanensis, Trichinella spp., all three Vibrio spp. categories explored, and Yersinia enterocolitica. Multisource pathogens (e.g., those transmitted commonly through both water and food) such as Campylobacter spp., four Escherichia coli categories, Listeria monocytogenes, Salmonella spp., and Staphylococcus aureus were also estimated as mostly foodborne. Water was the primary pathway for Giardia spp. and Cryptosporidium spp., and person-to-person transmission dominated for six enteric viruses and Shigella spp. Consideration of the point of attribution highlighted the importance of food handling and cross-contamination in the transmission pathway. This study provides source attribution estimates of enteric illness for Canada, considering all possible transmission routes. 
Further research is necessary to improve our understanding of poorly characterized pathogens such as sapovirus and E. coli subgroups in Canada.
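Combining expert estimates via triangular distributions, as described above, can be sketched by sampling: each expert supplies (minimum, most likely, maximum) percentages for one route and one pathogen, and the pooled samples yield a median and 90% credible interval. The expert numbers below are invented, and simple pooling of samples is only one of several possible aggregation choices.

```python
import numpy as np

# Invented (low, mode, high) foodborne-attribution estimates (%) from 3 experts
experts = [(30, 55, 80), (40, 60, 85), (25, 50, 70)]

rng = np.random.default_rng(42)
samples = np.concatenate([
    rng.triangular(lo, mode, hi, size=10_000) for lo, mode, hi in experts
])

median = np.median(samples)
ci90 = np.percentile(samples, [5, 95])   # 90% credible interval
print(round(median, 1), ci90.round(1))
```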
Bourgkard, Eve; Wild, Pascal; Gonzalez, Maria; Févotte, Joëlle; Penven, Emmanuelle; Paris, Christophe
2013-12-01
The aim was to describe the performance of a lifelong task-based questionnaire (TBQ) in estimating exposures, compared with other approaches, in the context of a case-control study. A sample of 93 subjects was randomly selected from a lung cancer case-control study, corresponding to 497 jobs. For each job, exposure assessments for asbestos and polycyclic aromatic hydrocarbons (PAHs) were obtained by expertise (TBQ expertise) and by algorithm using the TBQ (TBQ algorithm), as well as by expert appraisals based on all available occupational data (REFERENCE expertise), considered to be the gold standard. Additionally, a Job Exposure Matrix (JEM)-based evaluation for asbestos was also obtained. On the 497 jobs, the various evaluations were contrasted using Cohen's κ coefficient of agreement. Additionally, on the total case-control population, the asbestos dose-response relationship based on the TBQ algorithm was compared with the JEM-based assessment. Regarding asbestos, the TBQ-exposure estimates agreed well with the REFERENCE estimate (TBQ expertise: level-weighted κ (lwk)=0.68; TBQ algorithm: lwk=0.61) but less so with the JEM estimate (TBQ expertise: lwk=0.31; TBQ algorithm: lwk=0.26). Regarding PAHs, agreement between the REFERENCE expertise and the TBQ was weaker (TBQ expertise: lwk=0.43; TBQ algorithm: lwk=0.36). In the case-control study analysis, the dose-response relationship between lung cancer and cumulative asbestos exposure based on the JEM is less steep than with the TBQ-algorithm exposure assessment and statistically non-significant. Asbestos-exposure estimates based on the TBQ were consistent with the REFERENCE expertise and yielded a steeper dose-response relationship than the JEM. For PAHs, the results were less clear.
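A level-weighted κ of the kind quoted above can be sketched from a rater-by-rater confusion matrix, assuming linear (level) disagreement weights. The 4x4 confusion matrix of ordinal exposure ratings below is an invented example, not the study data.

```python
import numpy as np

# Linearly weighted Cohen's kappa for two raters over ordinal categories
def weighted_kappa(confusion):
    confusion = np.asarray(confusion, dtype=float)
    n_cat = confusion.shape[0]
    n = confusion.sum()
    # Linear disagreement weights: |i - j| / (n_cat - 1)
    i, j = np.indices((n_cat, n_cat))
    w = np.abs(i - j) / (n_cat - 1)
    # Expected matrix under independence of the two raters
    expected = np.outer(confusion.sum(axis=1), confusion.sum(axis=0)) / n
    return 1 - (w * confusion).sum() / (w * expected).sum()

# Invented ratings (none/low/medium/high) by two assessment methods
conf = [[40, 5, 1, 0],
        [6, 30, 4, 1],
        [1, 5, 25, 3],
        [0, 1, 4, 20]]
print(round(weighted_kappa(conf), 3))
```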
Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.
2017-01-01
Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and, second, application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
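The circuit analogy behind LPN models is easy to see in the simplest case, a two-element Windkessel: compliance C acts as a capacitor and distal resistance R as a resistor, giving dP/dt = Q(t)/C - P/(RC). The R, C, and inflow waveform below are generic textbook-style values integrated with forward Euler, not a tuned patient-specific Norwood model.

```python
import numpy as np

R = 1.0     # distal resistance (mmHg*s/mL), assumed
C = 1.5     # compliance (mL/mmHg), assumed
dt = 1e-3   # time step (s)
t = np.arange(0.0, 5.0, dt)

# Pulsatile inflow: half-sine ejection over each 0.8 s cardiac cycle
period, t_eject = 0.8, 0.3
phase = t % period
q = np.where(phase < t_eject, 80.0 * np.sin(np.pi * phase / t_eject), 0.0)

# Forward Euler integration of dP/dt = Q/C - P/(R*C)
p = np.empty_like(t)
p[0] = 70.0
for k in range(t.size - 1):
    p[k + 1] = p[k] + dt * (q[k] / C - p[k] / (R * C))

# Pressure range over the final cardiac cycle
print(round(p[-800:].min(), 1), round(p[-800:].max(), 1))
```

After the initial transient decays (time constant RC), the mean pressure settles near mean-flow times R, the 0D analogue of Ohm's law.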
A MAGNIFIED GLANCE INTO THE DARK SECTOR: PROBING COSMOLOGICAL MODELS WITH STRONG LENSING IN A1689
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magaña, Juan; Motta, V.; Cárdenas, Victor H.
2015-11-01
In this paper we constrain four alternative models to the late cosmic acceleration in the universe: Chevallier–Polarski–Linder (CPL), interacting dark energy (IDE), Ricci holographic dark energy (HDE), and modified polytropic Cardassian (MPC). Strong lensing (SL) images of background galaxies produced by the galaxy cluster Abell 1689 are used to test these models. To perform this analysis we modify the LENSTOOL lens modeling code. The value added by this probe is compared with other complementary probes: Type Ia supernovae (SN Ia), baryon acoustic oscillations (BAO), and cosmic microwave background (CMB). We found that the CPL constraints obtained for the SL data are consistent with those estimated using the other probes. The IDE constraints are consistent with the complementary bounds only if large errors in the SL measurements are considered. The Ricci HDE and MPC constraints are weak, but they are similar to the BAO, SN Ia, and CMB estimations. We also compute the figure of merit as a tool to quantify the goodness of fit of the data. Our results suggest that the SL method provides statistically significant constraints on the CPL parameters but is weak for those of the other models. Finally, we show that the use of the SL measurements in galaxy clusters is a promising and powerful technique to constrain cosmological models. The advantage of this method is that cosmological parameters are estimated by modeling the SL features for each underlying cosmology. These estimations could be further improved by SL constraints coming from other galaxy clusters.
Further studies with data collected by NASA's airborne Doppler lidar in Oklahoma in 1981
NASA Technical Reports Server (NTRS)
Bluestein, H. B.; Mccaul, E. W., Jr.
1986-01-01
Continued study of the lidar data collected in 1981 has resulted in significant new improvements in the analysis techniques reported by Bluestein et al. (1985) and McCaul (1985). Through comparison of fore- and aft-derived scalar fields of intensity and spectral width, the self-consistency of the lidar moment estimates was assessed. Reflectivity estimates were found to be quite stable and reliable, while spectral widths were prone to become noisy if the signal-to-noise ratio (SNR) fell below 12 dB. In addition, spectral widths contained a significant component due to radial velocity gradients in areas along gust fronts, and these components were different along the fore and aft lines of sight. Significant improvement in agreement between the fore and aft fields of spectral width was obtained by estimating the radial velocity gradient component and then removing it from the raw measured widths to yield only the turbulent portion of the contribution to width. Additional analyses showed that lidar-derived vorticity estimates were consistent with several approximate models of vorticity growth along gust front zones, and with the hypothesis that Helmholtz instability could have been responsible for vortices seen along part of the gust front of 30 June 1981. Computations of divergence transverse to axes through an isolated cumulus congestus indicated that the strongest convergence tended to lie along an axis parallel to the congestus. This and the results of other additional analyses suggest that the lidar winds do indeed accurately reflect the basic features of the real wind field.
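Removing a shear contribution from measured spectral widths can be sketched as below, under the common assumption that the gradient and turbulent contributions add in quadrature; the abstract does not state the exact correction, and the width values here are invented.

```python
import numpy as np

# Measured spectral widths and the estimated radial-velocity-gradient
# component (both in m/s); values are illustrative only
width_meas = np.array([2.0, 2.5, 3.0])
width_shear = np.array([1.0, 1.5, 1.2])

# Assumed quadrature split: measured^2 = turbulent^2 + shear^2,
# floored at zero to guard against noise making the difference negative
width_turb = np.sqrt(np.maximum(width_meas**2 - width_shear**2, 0.0))
print(width_turb.round(3))
```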
Investigating the detection of multi-homed devices independent of operating systems
2017-09-01
Timestamp data were used to estimate clock skews using linear regression and linear optimization methods. Analysis revealed that detection depends on the consistency of the estimated clock skew. Through vertical testing, it was also shown that clock skew consistency depends on the installed...
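The regression step described in this abstract can be sketched as follows; this is an illustrative reconstruction, not the study's code, and the variable names and the synthetic 50 ppm skew are invented for illustration:

```python
import numpy as np

def estimate_clock_skew(recv_times, device_timestamps):
    """Estimate clock skew (in ppm) as the slope of the device-clock
    offset against receiver time, via ordinary least-squares regression."""
    recv = np.asarray(recv_times, dtype=float)
    dev = np.asarray(device_timestamps, dtype=float)
    offset = dev - recv                     # drift of device clock vs receiver
    slope, intercept = np.polyfit(recv, offset, 1)
    return slope * 1e6                      # parts per million

# Synthetic device whose clock runs fast by 50 ppm
t = np.linspace(0.0, 1000.0, 200)
dev = t * (1.0 + 50e-6) + 0.01
print(round(estimate_clock_skew(t, dev), 2))  # 50.0
```

A device is then fingerprinted by whether repeated estimates of this slope stay consistent across observation windows.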
NASA Astrophysics Data System (ADS)
Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil
2016-06-01
An efficient approach to estimating model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively), and the shape factors (q and ƞ). The error energy maps generated for some parameter pairs successfully reveal the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies were evaluated with success via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies were considered, and the results obtained demonstrate the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, some field anomalies observed in various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India), and a base metal sulphide deposit (Quebec, Canada), were considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm was also investigated with a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule.
Based on the resulting histogram reconstructions of both synthetic and field data examples, the algorithm provided reliable parameter estimates within the sampling limits of the M-H sampler. Although it is not a common inversion technique in geophysics, the DE algorithm deserves more attention for parameter estimation from potential field data, considering its good accuracy, low computational cost (in the present problem), and the fact that a well-constructed initial guess is not required to reach the global minimum.
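The DE/best/1/bin strategy named above can be sketched in a few lines. This is a minimal illustrative implementation, not the authors' code, and the single-source anomaly model below is a simplified, hypothetical form chosen only to exercise the optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

def anomaly(x, A, z0, x0, q):
    # Hypothetical residual-anomaly model: amplitude A, depth z0,
    # horizontal origin x0, shape factor q (illustrative only).
    return A * z0 / ((x - x0) ** 2 + z0 ** 2) ** q

def de_best1bin(cost, bounds, pop=40, gens=300, F=0.7, CR=0.9):
    """Minimal DE/best/1/bin: mutate around the current best member,
    binomial crossover, greedy selection."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    X = lo + rng.random((pop, dim)) * (hi - lo)
    f = np.array([cost(x) for x in X])
    for _ in range(gens):
        best = X[np.argmin(f)]
        for i in range(pop):
            a, b = X[rng.choice(pop, 2, replace=False)]
            mutant = np.clip(best + F * (a - b), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # guarantee one mutated gene
            trial = np.where(cross, mutant, X[i])
            ft = cost(trial)
            if ft < f[i]:
                X[i], f[i] = trial, ft
    return X[np.argmin(f)], f.min()

# Recover known parameters from a synthetic anomaly profile
x = np.linspace(-50.0, 50.0, 101)
g_obs = anomaly(x, 120.0, 10.0, 5.0, 1.5)
cost = lambda p: np.sum((anomaly(x, *p) - g_obs) ** 2)
p_hat, _ = de_best1bin(cost, [(1, 500), (1, 30), (-20, 20), (0.5, 2.5)])
print(np.round(p_hat, 2))
```

No initial guess is supplied beyond the parameter bounds, which is the practical appeal the abstract points out.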
NASA Astrophysics Data System (ADS)
Ahmad, Zeeshan; Viswanathan, Venkatasubramanian
2016-08-01
Computationally guided material discovery is being increasingly employed using descriptor-based screening through the calculation of a few properties of interest. A precise understanding of the uncertainty associated with first-principles density functional theory calculated property values is important for the success of descriptor-based screening. The Bayesian error estimation approach has been built into several recently developed exchange-correlation functionals, which allows an estimate of the uncertainty associated with properties related to the ground state energy, for example, adsorption energies. Here, we propose a robust and computationally efficient method for quantifying uncertainty in mechanical properties, which depend on the derivatives of the energy. The procedure involves calculating energies around the equilibrium cell volume with different strains and fitting the obtained energies to the corresponding energy-strain relationship. At each strain, we use, instead of a single energy, an ensemble of energies, giving us an ensemble of fits and, thereby, an ensemble of mechanical properties associated with each fit, whose spread can be used to quantify the uncertainty. The generation of the ensemble of energies is only a post-processing step involving a perturbation of the parameters of the exchange-correlation functional and solving for the energy non-self-consistently. The proposed method is computationally efficient and provides a more robust uncertainty estimate than the approach of self-consistent calculations employing several different exchange-correlation functionals. We demonstrate the method by calculating the uncertainty bounds for several materials belonging to different classes and having different structures. We show that the calculated uncertainty bounds the property values obtained using three different GGA functionals: PBE, PBEsol, and RPBE.
Finally, we apply the approach to calculate the uncertainty associated with the DFT-calculated elastic properties of solid state Li-ion and Na-ion conductors.
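The ensemble-of-fits idea can be illustrated schematically. In the sketch below the ensemble of energies at each strain is mimicked by small random perturbations around an assumed energy-strain curve, standing in for the actual non-self-consistent functional-parameter perturbations; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: energies (eV) on a strain grid for an ensemble of
# perturbed exchange-correlation parameters (mimicked here by noise).
strains = np.linspace(-0.02, 0.02, 7)
E0, curvature = -10.0, 500.0                 # assumed "true" values
ensemble = E0 + 0.5 * curvature * strains ** 2 \
           + rng.normal(0.0, 0.005, size=(200, strains.size))

# One quadratic fit per ensemble member -> one curvature per member.
coeffs = np.polyfit(strains, ensemble.T, 2)  # shape (3, n_members)
curvatures = 2.0 * coeffs[0]                 # d^2E/ds^2 from leading term

mean, spread = curvatures.mean(), curvatures.std()
print(f"curvature = {mean:.0f} +/- {spread:.0f}")
```

The spread of the fitted curvatures, not a single fit, is what furnishes the uncertainty estimate on the derived mechanical property.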
Brunetti, Bruno; Ciccioli, Andrea; Gigli, Guido; Lapi, Andrea; Misceo, Nicolaemanuele; Tanzi, Luana; Vecchio Ciprioti, Stefano
2014-08-07
The vaporization behaviour and thermodynamics of the ionic liquid 1-butyl-3-methylimidazolium bis(trifluoromethyl)sulfonylimide (BMImNTf2) were studied by combining the Knudsen Effusion Mass Loss (KEML) and Knudsen Effusion Mass Spectrometry (KEMS) techniques. KEML studies were carried out over a large temperature range (398-567) K by using effusion orifices with 0.3, 1, and 3 mm diameters. The vapor pressures so measured revealed no kinetically hindered vaporization effects and provided second-law vaporization enthalpies at the mean experimental temperatures in close agreement with the literature. By exploiting the large temperature range covered, the heat capacity change associated with vaporization was estimated, resulting in a value of -66.8 J K(-1) mol(-1), much lower than that predicted from calorimetric measurements on the liquid phase and theoretical calculations on the gas phase. The conversion of the high-temperature vaporization enthalpy to 298 K was discussed, and the value Δ(l)(g)H(m)(298 K) = (128.6 ± 1.3) kJ mol(-1) was assessed on the basis of data from the literature and the present work. Vapor pressure data were also processed by the third-law procedure using different estimations for the auxiliary thermal functions, and a Δ(l)(g)H(m)(298 K) consistent with the assessed value was obtained, although the overall agreement is sensitive to the accuracy of the heat capacity data. KEMS measurements were carried out in the lower temperature range (393-467) K and showed that the largely prevailing ion species is BMIm(+), supporting the common view of BMImNTf2 vaporizing as individual, neutral ion pairs also under equilibrium conditions. By monitoring the mass spectrometric signal of this ion as a function of temperature, a second-law Δ(l)(g)H(m)(298 K) of 129.4 ± 7.3 kJ mol(-1) was obtained, consistent with the KEML and literature results. Finally, by combining the KEML and KEMS measurements, the electron impact ionization cross section of BMIm(+) was estimated.
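The second-law procedure referred to above amounts to a Clausius-Clapeyron fit of ln p against 1/T, with the slope giving the vaporization enthalpy. A minimal sketch on synthetic data (the 120 kJ/mol value is illustrative, not the paper's result):

```python
import numpy as np

R = 8.314  # gas constant, J K^-1 mol^-1

def second_law_enthalpy(T, p):
    """Second-law vaporization enthalpy from vapor pressures:
    ln p = -dH/(R*T) + c, so dH = -R * (slope of ln p vs 1/T)."""
    slope, _ = np.polyfit(1.0 / np.asarray(T), np.log(p), 1)
    return -R * slope  # J/mol

# Synthetic pressures generated with dH = 120 kJ/mol over 398-567 K
T = np.linspace(398.0, 567.0, 20)
p = 1e12 * np.exp(-120e3 / (R * T))
print(f"{second_law_enthalpy(T, p) / 1e3:.1f} kJ/mol")  # 120.0 kJ/mol
```

A temperature-dependent ΔCp, as estimated in the paper, shows up as curvature in this plot rather than a constant slope.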
Gauge-free cluster variational method by maximal messages and moment matching.
Domínguez, Eduardo; Lage-Castellanos, Alejandro; Mulet, Roberto; Ricci-Tersenghi, Federico
2017-04-01
We present an implementation of the cluster variational method (CVM) as a message passing algorithm. The kind of message passing algorithm used for CVM, usually named generalized belief propagation (GBP), is a generalization of the belief propagation algorithm in the same way that CVM is a generalization of the Bethe approximation for estimating the partition function. However, the connection between fixed points of GBP and the extremal points of the CVM free energy is usually not a one-to-one correspondence because of the existence of a gauge transformation involving the GBP messages. Our contribution is twofold. First, we propose a way of defining messages (fields) in a generic CVM approximation, such that messages arrive on a given region from all its ancestors, and not only from its direct parents, as in the standard parent-to-child GBP. We call this approach maximal messages. Second, we focus on the case of binary variables, reinterpreting the messages as fields enforcing the consistency between the moments of the local (marginal) probability distributions. We provide a precise rule to enforce all consistencies, avoiding any redundancy, that would otherwise lead to a gauge transformation on the messages. This moment matching method is gauge free, i.e., it guarantees that the resulting GBP is not gauge invariant. We apply our maximal messages and moment matching GBP to obtain an analytical expression for the critical temperature of the Ising model in general dimensions at the level of plaquette CVM. The values obtained outperform Bethe estimates, and are comparable with loop corrected belief propagation equations. The method allows for a straightforward generalization to disordered systems.
NASA Astrophysics Data System (ADS)
Lamoureux, Gwenaëlle; Ildefonse, Benoı̂t; Mainprice, David
1999-11-01
Although considerable progress has been made in the study of fast-spreading, mid-ocean ridge magma chambers over the past fifteen years, the fraction of melt present in the chamber remains poorly constrained and controversial. We present new constraints obtained by modelling the seismic properties of partially molten gabbros at the ridge axis. P-wave velocities at low frequencies are calculated in the foliation/lineation reference frame using a differential effective medium technique. The model takes into account the lattice preferred orientation of the crystalline phase and the average shape of the melt phase. The structural parameters are obtained from the Oman ophiolite. The structural reference frame is given by the general trend of the gabbro foliation and the melt fraction and shape are estimated using the textures of nine upper gabbro samples. The estimated melt fraction and shape depend on the assumptions regarding which part of the observed textures represent the melt in the gabbroic mush of the magma chamber. However, we can put limits on the reasonable values for the melt fraction and shape. Our results are consistent with a melt fraction of the order of 10 to 20% in the Low-Velocity Zone (i.e. the magma chamber), which is anisotropically distributed with the melt pockets preferentially aligned parallel to the foliation and approximated by oblate ellipsoids with approximate dimensions of 4 : 4 : 1. These results are also consistent with the seismic structure of the East Pacific rise at 9°30'. The anisotropic melt distribution can, at least partially, explain the vertical velocity gradient described in the LVZ.
Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.
Bauer, Martin; Trahms, Lutz; Sander, Tilmann
2015-04-01
The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles was built. The magnetic fields outside the phantom generated by the current dipoles were then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. The magnetometer system is expected to be more sensitive to brain stem sources than the gradiometer system. The same electrical and geometrical configuration was simulated in a forward calculation. From both the measured and the simulated data, the dipole positions were estimated using an inverse calculation. Results were obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strength, even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted to a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.
Assessing Chronology and Mantle Evolution In-Situ with CODEX
NASA Astrophysics Data System (ADS)
Anderson, F. S.; Levine, J.; Whitaker, T.
2017-12-01
Understanding lunar bombardment history is crucial to understanding the dynamic evolution of the Moon. Using an instrument called CODEX (Chemistry, Organics, and Dating Experiment) intended for in-situ dating [1-5], we have obtained Pb-Pb dates for the Martian meteorites Zagami and Northwest Africa (NWA) 7034, and the lunar meteorites Miller Range 05035, LaPaz Icefield 02205, and NWA 032. In conjunction with our previous Rb-Sr success, these measurements demonstrate the potential for in-situ measurements of the Moon. Some of these Pb dates are consistent with young age estimates; however, many are consistent with previous anomalously old Pb measurements for lunar and SNC meteorites. Proposed explanations for this paradox include terrestrial Pb contamination, that the SNCs are actually ancient and reset in Rb/Sr, that there are multiple isotopic reservoirs sampled by the impact process, or that multiple reservoirs are sampled during volcanic emplacement. In the future, we plan to use CODEX to test these hypotheses by making measurements on outcrops in-situ on Mars or the Moon, avoiding terrestrial or impact mixing. If in-situ Rb-Sr and Pb-Pb measurements are not concordant, then we are likely constraining the common Pb signature, and hence mantle evolution, of the Moon or Mars. Alternatively, we are likely obtaining a robust age estimate. References: [1] F. S. Anderson et al. LPSC 1246, 2 (2017); [2] F. S. Anderson et al. LPSC 2957, 2 (2017); [3] S. Beck et al. LPSC 3001, 2 (2017); [4] T. J. Whitaker et al. LPSC 2328, 2 (2017); [5] F. S. Anderson et al. RCMS 29, 191 (2015).
Secondary outcome analysis for data from an outcome-dependent sampling design.
Pan, Yinghao; Cai, Jianwen; Longnecker, Matthew P; Zhou, Haibo
2018-04-22
An outcome-dependent sampling (ODS) scheme is a cost-effective way to conduct a study. For a study with a continuous primary outcome, an ODS scheme can be implemented where the expensive exposure is measured only on a simple random sample and on supplemental samples selected from the two tails of the primary outcome variable. With the tremendous cost invested in collecting the primary exposure information, investigators often would like to use the available data to study the relationship between a secondary outcome and the obtained exposure variable. This is referred to as secondary analysis. Secondary analysis in ODS designs can be tricky, as the ODS sample is not a random sample from the general population. In this article, we use inverse probability weighted and augmented inverse probability weighted estimating equations to analyze the secondary outcome for data obtained from the ODS design. We do not make any parametric assumptions on the primary and secondary outcomes and only specify the form of the regression mean models, thus allowing an arbitrary error distribution. Our approach is robust to second- and higher-order moment misspecification. It also leads to more precise estimates of the parameters by effectively using all the available participants. Through simulation studies, we show that the proposed estimator is consistent and asymptotically normal. Data from the Collaborative Perinatal Project are analyzed to illustrate our method. Copyright © 2018 John Wiley & Sons, Ltd.
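The inverse-probability-weighting idea can be sketched as weighted least squares with weights 1/pi, where pi is each subject's known inclusion probability under the ODS design. The design parameters and data-generating model below are invented for illustration and do not reproduce the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Population with expensive exposure X and secondary outcome Y2 = 1 + 2*X + e
N = 20000
X = rng.normal(size=N)
Y1 = X + rng.normal(size=N)               # primary outcome drives sampling
Y2 = 1.0 + 2.0 * X + rng.normal(size=N)

# ODS design: a 5% simple random sample plus everyone in the tails of Y1.
base = rng.random(N) < 0.05
tails = np.abs(Y1) > 2.0
sampled = base | tails
pi = np.where(tails, 1.0, 0.05)           # known inclusion probabilities

# IPW estimating equation = weighted least squares with weights 1/pi.
w = 1.0 / pi[sampled]
A = np.column_stack([np.ones(sampled.sum()), X[sampled]])
beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * Y2[sampled]))
print(np.round(beta, 2))
```

An unweighted fit on the ODS sample would be biased here, because inclusion depends on Y1, which is correlated with both X and Y2; the weights undo that selection.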
A shock-capturing SPH scheme based on adaptive kernel estimation
NASA Astrophysics Data System (ADS)
Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime
2006-02-01
Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
Quiroz, Viviana; Reinero, Daniela; Hernández, Patricia; Contreras, Johanna; Vernal, Rolando; Carvajal, Paola
2017-01-01
This study aimed to develop and assess the content validity and reliability of a cognitively adapted self-report questionnaire designed for surveillance of gingivitis in adolescents. Ten predetermined self-report questions evaluating early signs and symptoms of gingivitis were preliminarily assessed by a panel of clinical experts. Eight questions were selected and cognitively tested in 20 adolescents aged 12 to 18 years from Santiago de Chile. The questionnaire was then administered to and answered by 178 Chilean adolescents. Internal consistency was measured using Cronbach's alpha, and temporal stability was calculated using the Kappa index. A reliable final self-report questionnaire consisting of 5 questions was obtained, with a total Cronbach's alpha of 0.73 and a Kappa index ranging from 0.41 to 0.77 between the different questions. The proposed questionnaire is reliable, with an acceptable internal consistency and a temporal stability from moderate to substantial, and it is promising for estimating the prevalence of gingivitis in adolescents.
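Cronbach's alpha, the internal-consistency statistic used above, is computed directly from the item-score matrix. A sketch with simulated responses (the latent-trait model and sample sizes are illustrative assumptions, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(3)
trait = rng.normal(size=200)                       # hypothetical latent score
answers = trait[:, None] + rng.normal(0.0, 1.0, (200, 5))
print(round(cronbach_alpha(answers), 2))
```

Values near the study's 0.73 indicate that the five questions largely track a single underlying construct.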
NASA Astrophysics Data System (ADS)
Fukahata, Y.; Fukushima, Y.
2009-05-01
On 14 June 2008, the Iwate-Miyagi Nairiku earthquake struck northeast Japan, where active seismicity has been observed under east-west compressional stress fields. The magnitude and hypocenter depth of the earthquake are reported as Mj 7.2 and 8 km, respectively. The earthquake is considered to have occurred on a west-dipping reverse fault with a roughly north-south strike. The earthquake caused significant surface displacements, which were detected by PALSAR, a Synthetic Aperture Radar (SAR) onboard the Japanese ALOS satellite. Several pairs of PALSAR images from six different paths are available to measure the coseismic displacements. Interferometric SAR (InSAR) is useful for obtaining crustal displacements in regions where the coseismic displacement is not too large (less than 1 m), whereas range and azimuth offsets provide displacement measurements up to a few meters over the whole processed area. We inverted the obtained displacement data to estimate the slip distribution on the fault. Since the precise location and direction of the fault are not well known, the inverse problem is nonlinear. Following the method of Fukahata and Wright (2008), we resolved the weak non-linearity based on Akaike's Bayesian Information Criterion. We first estimated the slip distribution by assuming pure dip slip. The optimal fault geometry was estimated at dip 26 and strike 203 degrees. The maximum slip is more than 8 m, and most slip is concentrated at shallow depths (less than 4 km). The azimuth offset data suggest non-negligible right-lateral slip components, so we next estimated the slip distribution without fixing the rake angle. Again, a large slip area with a maximum slip of about 8 m at shallow depth was obtained. Such slip models contradict conventional understanding; our results indicate that the released strain is more than 10^-3.
Range and azimuth offsets computed from SAR images obtained from both ascending and descending orbits appear to be more consistent with a conjugate fault slip, which would lower the stress drop, possibly to a level typical of this kind of earthquake.
Dielectric response of periodic systems from quantum Monte Carlo calculations.
Umari, P; Willamson, A J; Galli, Giulia; Marzari, Nicola
2005-11-11
We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization for the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation for the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average on an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.
Localization of transient gravitational wave sources: beyond triangulation
NASA Astrophysics Data System (ADS)
Fairhurst, Stephen
2018-05-01
Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic followup. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing based localization approximation to incorporate consistency of observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed to be circularly polarized or, equivalently, indistinguishable from face-on.
Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors
Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin
2018-01-01
Disparity calculation is crucial for binocular sensor ranging. Disparity estimation based on edges is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which improves the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in these comparisons. PMID:29614028
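A bare-bones scanline dynamic program conveys the flavor of DP-based disparity estimation. This toy version uses an absolute-difference matching cost and a fixed smoothness penalty, not the adaptive, semantics-driven costs of the paper; the 1-D "images" are invented:

```python
import numpy as np

def scanline_dp_disparity(left, right, max_disp, smooth=1.0):
    """Toy scanline stereo: absolute-difference matching cost plus a
    linear smoothness penalty, minimized exactly by dynamic programming."""
    n, D = len(left), max_disp + 1
    big = 1e9
    cost = np.full((n, D), big)
    for d in range(D):                    # disparity d: left[x] vs right[x-d]
        cost[d:, d] = np.abs(left[d:] - right[:n - d])
    penalty = smooth * np.abs(np.subtract.outer(np.arange(D), np.arange(D)))
    acc = cost.copy()
    back = np.zeros((n, D), dtype=int)
    for x in range(1, n):
        trans = acc[x - 1][None, :] + penalty  # rows: new d, cols: previous d
        back[x] = trans.argmin(axis=1)
        acc[x] = cost[x] + trans.min(axis=1)
    disp = np.empty(n, dtype=int)
    disp[-1] = acc[-1].argmin()
    for x in range(n - 2, -1, -1):             # backtrack the optimal path
        disp[x] = back[x + 1, disp[x + 1]]
    return disp

# The right image is the scene; the left image sees it shifted by 2 pixels.
right = np.array([1, 2, 3, 5, 5, 5, 5, 5], dtype=float)
left = np.array([5, 5, 1, 2, 3, 5, 5, 5], dtype=float)
disp = scanline_dp_disparity(left, right, max_disp=3)
print(disp)
```

The paper's contribution is, roughly, to adapt the cost and penalty terms per edge using semantic labels instead of fixing them globally as done here.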
Merritt, Michael L.
2004-01-01
Aquifers are subjected to mechanical stresses from natural, non-anthropogenic, processes such as pressure loading or mechanical forcing of the aquifer by ocean tides, earth tides, and pressure fluctuations in the atmosphere. The resulting head fluctuations are evident even in deep confined aquifers. The present study was conducted for the purpose of reviewing the research that has been done on the use of these phenomena for estimating the values of aquifer properties, and determining which of the analytical techniques might be useful for estimating hydraulic properties in the dissolved-carbonate hydrologic environment of southern Florida. Fifteen techniques are discussed in this report, of which four were applied. An analytical solution for head oscillations in a well near enough to the ocean to be influenced by ocean tides was applied to data from monitor zones in a well near Naples, Florida. The solution assumes a completely non-leaky confining unit of infinite extent. Resulting values of transmissivity are in general agreement with the results of aquifer performance tests performed by the South Florida Water Management District. There seems to be an inconsistency between results of the amplitude ratio analysis and independent estimates of loading efficiency. A more general analytical solution that takes leakage through the confining layer into account yielded estimates that were lower than those obtained using the non-leaky method, and closer to the South Florida Water Management District estimates. A numerical model with a cross-sectional grid design was applied to explore additional aspects of the problem. A relation between specific storage and the head oscillation observed in a well provided estimates of specific storage that were considered reasonable. Porosity estimates based on the specific storage estimates were consistent with values obtained from measurements on core samples.
Methods are described for determining aquifer diffusivity by comparing the time-varying drawdown in an open well with periodic pressure-head oscillations in the aquifer, but the applicability of such methods might be limited in studies of the Floridan aquifer system.
Bayesian-MCMC-based parameter estimation of stealth aircraft RCS models
NASA Astrophysics Data System (ADS)
Xia, Wei; Dai, Xiao-Xia; Feng, Yuan
2015-12-01
When modeling a stealth aircraft with low RCS (Radar Cross Section), conventional parameter estimation methods may cause a deviation from the actual distribution, owing to the fact that the characteristic parameters are estimated by directly calculating the statistics of the RCS. The Bayesian-Markov Chain Monte Carlo (Bayesian-MCMC) method is introduced herein to estimate the parameters so as to improve the fitting accuracy of fluctuation models. The parameter estimations of the lognormal and the Legendre polynomial models are reformulated in the Bayesian framework. The MCMC algorithm is then adopted to calculate the parameter estimates. Numerical results show that the distribution curves obtained by the proposed method exhibit improved consistency with the actual ones, compared with those fitted by the conventional method. The fitting accuracy could be improved by no less than 25% for both fluctuation models, which implies that the Bayesian-MCMC method might be a good candidate among optimal parameter estimation methods for stealth aircraft RCS models. Project supported by the National Natural Science Foundation of China (Grant No. 61101173), the National Basic Research Program of China (Grant No. 613206), the National High Technology Research and Development Program of China (Grant No. 2012AA01A308), the State Scholarship Fund of the China Scholarship Council (CSC), and the Oversea Academic Training Funds of the University of Electronic Science and Technology of China (UESTC).
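A minimal Metropolis-Hastings sampler for the parameters of the lognormal fluctuation model illustrates the Bayesian-MCMC idea. This sketch assumes flat priors and synthetic data and is not the authors' formulation:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_posterior(mu, sigma, data):
    """Log posterior for lognormal(mu, sigma) under flat priors (sigma > 0),
    up to an additive constant; data are positive RCS samples."""
    if sigma <= 0:
        return -np.inf
    z = (np.log(data) - mu) / sigma
    return -data.size * np.log(sigma) - 0.5 * np.sum(z ** 2)

def metropolis(data, n_steps=20000, step=0.05):
    theta = np.array([0.0, 1.0])               # initial (mu, sigma)
    lp = log_posterior(*theta, data)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.normal(0.0, step, 2)
        lp_prop = log_posterior(*prop, data)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain[n_steps // 2:])       # discard burn-in

rcs = rng.lognormal(mean=-1.0, sigma=0.5, size=500)  # synthetic low-RCS data
chain = metropolis(rcs)
print(np.round(chain.mean(axis=0), 2))
```

The posterior mean summarizes the chain, and its spread gives parameter uncertainty, which direct moment-matching of the raw RCS statistics does not provide.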
Reliability of Use, Abuse, and Dependence of Four Types of Inhalants in Adolescents and Young Adults
Ridenour, Ty A.; Bray, Bethany C.; Cottler, Linda B.
2007-01-01
Inhalants, as a class of drugs, consist of heterogeneous substances that include some of the most dangerous drugs on a per-use basis. Research on inhalant abuse has lagged behind that on other drugs, partly because of the need for a diagnostic instrument covering different types of inhalants. This study was conducted to obtain reliability estimates for the new Substance Abuse Module DSM-IV inhalant diagnoses for four types of inhalants: aerosols, gases, nitrites, and solvents, as well as different diagnostic configurations of inhalant use. Participants were 162 community-sample adolescents or young adults (mean age = 20.3 years, SD = 2.4). Two-thirds of the sample was male and 83.3% was Caucasian. Kappas and intraclass correlation coefficients were computed to estimate test-retest reliabilities. Results suggested (a) abuse was more common than dependence (34.6% vs. 12.3%), (b) reliabilities of abuse criteria and diagnoses were good to excellent across subtypes, and (c) reliabilities of dependence criteria and diagnoses were poor to good across subtypes. Alternative configurations of DSM-IV criteria that were consistent with previous research on adolescents provided excellent reliabilities across subtypes of inhalants. Moreover, 11.1% of participants experienced inhalant withdrawal. PMID:17576041
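Cohen's kappa, the test-retest statistic used above for categorical diagnoses, can be sketched as follows; the two rating vectors are invented for illustration:

```python
import numpy as np

def cohens_kappa(rating1, rating2):
    """Cohen's kappa for two categorical ratings of the same subjects:
    chance-corrected agreement (po - pe) / (1 - pe)."""
    r1, r2 = np.asarray(rating1), np.asarray(rating2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                           # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (po - pe) / (1.0 - pe)

# Test-retest diagnoses (1 = abuse criteria met) for 10 hypothetical subjects
test = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0])
retest = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 1])
print(round(cohens_kappa(test, retest), 2))  # 0.6
```

Raw agreement here is 80%, but kappa discounts the 50% agreement expected by chance, which is why it is preferred for reliability reporting.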
Black hole mass estimates and emission-line properties of a sample of redshift z > 6.5 quasars
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Rosa, Gisella; Peterson, Bradley M.; Frank, Stephan
We present the analysis of optical and near-infrared spectra of the only four z > 6.5 quasars known to date, discovered in the UKIDSS-LAS and VISTA-VIKING surveys. Our data set consists of new Very Large Telescope/X-Shooter and Magellan/FIRE observations. These are the best optical/NIR spectroscopic data that are likely to be obtained for the z > 6.5 sample using current 6-10 m facilities. We estimate the black hole (BH) mass, the Eddington ratio, and the Si IV/C IV, C III]/C IV, and Fe II/Mg II emission-line flux ratios. We perform spectral modeling using a procedure that allows us to derive a probability distribution for the continuum components and to obtain the quasar properties weighted upon the underlying distribution of continuum models. The z > 6.5 quasars show the same emission properties as their counterparts at lower redshifts. The z > 6.5 quasars host BHs with masses of ∼10^9 M⊙ that are accreting close to the Eddington luminosity (⟨log(L_Bol/L_Edd)⟩ = -0.4 ± 0.2), in agreement with what has been observed for a sample of 4.0 < z < 6.5 quasars. By comparing the Si IV/C IV and C III]/C IV flux ratios with the results obtained from luminosity-matched samples at z ∼ 6 and 2 ≤ z ≤ 4.5, we find no evidence of evolution of the line ratios with cosmic time. We compare the measured Fe II/Mg II flux ratios with those obtained for a sample of 4.0 < z < 6.4 sources. The two samples are analyzed using a consistent procedure. There is no evidence that the Fe II/Mg II flux ratio evolves between z = 7 and z = 4. Under the assumption that Fe II/Mg II traces the Fe/Mg abundance ratio, this implies the presence of major episodes of chemical enrichment in the quasar hosts in the first ∼0.8 Gyr after the Big Bang.
NASA Astrophysics Data System (ADS)
Priyada, P.; Margret, M.; Ramar, R.; Shivaramu; Menaka, M.; Thilagam, L.; Venkataraman, B.; Raj, Baldev
2011-03-01
This paper focuses on mild steel (MS) corrosion detection and an intercomparison of results obtained by gamma scattering, gammatography, and radiography techniques. The gamma scattering non-destructive evaluation (NDE) method uses scattered gamma radiation to detect corrosion; the scattering setup is an indigenously designed, automated, personal computer (PC) controlled scanning system consisting of a computerized numerical control (CNC) six-axis source-detector system and a four-axis job positioning system. The system has been successfully used to quantify the magnitude of corrosion and the thickness profile of an MS plate with nonuniform corrosion, and the results are correlated with those obtained from conventional gammatography and radiography imaging measurements. A simple reconstruction algorithm is described that recovers the density of the object under investigation and provides an unambiguous interpretation of the signal as a function of material density at any point of the thick object being inspected. In this method the density of the target need not be known in advance; knowledge of the target material's mass attenuation coefficients (composition) at the incident and scattered energies is enough to reconstruct the density of each voxel of the specimen. Monte Carlo (MC) simulations of the phenomena are performed using the Monte Carlo N-Particle Transport Code (MCNP); quantitative estimates of the signal-to-noise ratio for different percentages of MS corrosion derived from these simulations are presented, and the simulated spectra are compared with the experimental data. The gammatography experiments are carried out using the same PC-controlled scanning system in a narrow-beam, good-geometry setup, and the thickness loss is estimated from the measured transmitted intensity.
Radiography of the MS plates is carried out using a 160 kV X-ray machine. The digitized radiographs, with a resolution of 50 μm, are processed for the detection of corrosion damage at five different locations. The thickness losses due to corrosion of the MS plate obtained by the gamma scattering method are compared with the values obtained by the gammatography and radiography techniques. The percentage thickness loss estimated at different positions of the corroded MS plate varies from 17.78 to 27.0, from 18.9 to 24.28, and from 18.9 to 24.28 by gamma scattering, gammatography, and radiography, respectively. Overall, the results from the three techniques are mutually consistent.
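The gammatography thickness estimate rests on the Beer-Lambert attenuation law, I = I0·exp(-(μ/ρ)·ρ·t). A minimal sketch follows; the mass attenuation coefficient and density below are rough illustrative values for mild steel at a Cs-137 gamma energy, not the paper's calibration.

```python
import math

def thickness_mm(i_transmitted, i_incident, mu_rho=0.0735, rho=7.85):
    """Thickness (mm) from transmitted intensity via Beer-Lambert:
    I = I0 * exp(-(mu/rho) * rho * t).
    mu_rho (cm^2/g) and rho (g/cm^3) are illustrative values for steel."""
    t_cm = math.log(i_incident / i_transmitted) / (mu_rho * rho)
    return 10.0 * t_cm

def percent_thickness_loss(t_nominal_mm, t_measured_mm):
    """Corrosion loss as a percentage of the nominal plate thickness."""
    return 100.0 * (t_nominal_mm - t_measured_mm) / t_nominal_mm
```

Given the nominal plate thickness, the percentage loss at each scan position follows directly from the transmitted count rate.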
Krag, Kristian; Poulsen, Nina A; Larsen, Mette K; Larsen, Lotte B; Janss, Luc L; Buitenhuis, Bart
2013-09-11
For several years, human nutrition research has focused on the proportion of unsaturated fatty acids (UFA) and saturated fatty acids (SFA) found in bovine milk. The positive health-related properties of UFA versus SFA have increased the demand for food products with a higher proportion of UFA. To be able to change the UFA and SFA content of the milk by breeding, it is important to know whether there is a genetic component underlying the individual fatty acids (FA) in the milk. We have estimated the heritability for individual FA in the milk of Danish Holstein. For this purpose we used information from SNP markers instead of traditional pedigree relationships. Estimates of heritability were moderate, within the range of 0.10 for C18:1 trans-11 to 0.34 for C8:0 and C10:0, whereas the estimates for saturated and unsaturated fatty acids were 0.14 and 0.18, respectively. Posterior standard deviations were in the range from 0.07 to 0.17. The correlation estimates showed a general pattern of two groups, one mainly consisting of saturated fatty acids and one mainly consisting of unsaturated fatty acids. The phenotypic correlation ranged from -0.95 (saturated and unsaturated fatty acids) to 0.99 (unsaturated and monounsaturated fatty acids), and the genomic correlation for fatty acids ranged from -0.29 to 0.91. The heritability estimates obtained in this study are in general accordance with heritability estimates from studies using pedigree data and/or a genomic relationship matrix in the context of a REML approach. SFA and UFA expressed a strong negative phenotypic correlation and a weaker genetic correlation. This is in accordance with the theory that SFA is synthesized de novo, while UFA can be regulated independently of SFA regulation by the feeding regime.
NASA Astrophysics Data System (ADS)
Pons, A.; David, C.; Fortin, J.; Stanchits, S.; MenéNdez, B.; Mengus, J. M.
2011-03-01
To investigate the effect of compaction bands (CB) on fluid flow, capillary imbibition experiments were performed on Bentheim sandstone specimens (initial porosity ~22.7%) using an industrial X-ray scanner. We used a three-step procedure combining (1) X-ray imaging of capillary rise in intact Bentheim sandstone, (2) formation of compaction bands under triaxial tests at 185 MPa effective pressure, with acoustic emission (AE) recording for localization of the induced damage, and (3) X-ray imaging of capillary rise in the damaged specimens after unloading. The experiments were performed on intact cylindrical specimens, 5 cm in diameter and 10.5 cm in length, cored in different orientations (parallel or perpendicular to the bedding). Analysis of the images obtained at different stages of capillary imbibition shows that the presence of CB slows down the imbibition and disturbs the geometry of water flow. In addition, we show that the CB geometry derived from X-ray density map analysis is well correlated with the AE locations obtained during the triaxial tests. The analysis of the water front kinetics was conducted using a simple theoretical model, which allowed us to confirm that compaction bands act as a barrier for fluid flow, though not a fully impermeable one. We estimate a permeability contrast of a factor of ~3 between the host rock and the compaction bands. This estimate of the permeability inside the compaction band is consistent with estimates from field studies of similar sandstones but differs by one order of magnitude from previous laboratory measurements.
Van Inghelandt, Delphine; Melchinger, Albrecht E; Lebreton, Claude; Stich, Benjamin
2010-05-01
Information about the genetic diversity and population structure of elite breeding material is of fundamental importance for the improvement of crops. The objectives of our study were to (a) examine the population structure and the genetic diversity of elite maize germplasm based on simple sequence repeat (SSR) markers, (b) compare these results with those obtained from single nucleotide polymorphism (SNP) markers, and (c) compare the coancestry coefficient calculated from pedigree records with genetic distance estimates calculated from SSR and SNP markers. Our study was based on 1,537 elite maize inbred lines genotyped with 359 SSR and 8,244 SNP markers. The average number of alleles per locus, the number of group-specific alleles, and the gene diversity (D) were higher for SSRs than for SNPs. Modified Rogers' distance (MRD) estimates and membership probabilities of the STRUCTURE matrices were higher for SSR than for SNP markers, but the germplasm organization in four heterotic pools was consistent with STRUCTURE results based on both SSRs and SNPs. MRD estimates calculated for the two marker systems were highly correlated (0.87). Our results suggested that the same conclusions regarding the structure and the diversity of heterotic pools could be drawn from both marker types. Furthermore, although our results suggested that the ratio of the number of SSRs to SNPs required to obtain MRD or D estimates with similar precision is not constant across the various precision levels, we propose that between 7 and 11 times more SNPs than SSRs should be used for analyzing population structure and genetic diversity.
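For reference, modified Rogers' distance between two lines can be computed from per-locus allele frequencies as MRD = sqrt( Σ_loci Σ_alleles (p - q)² / (2m) ), which is bounded between 0 and 1. A minimal sketch; the data layout (a dict of frequency lists per locus) is an assumption for illustration, not the study's code.

```python
import math

def modified_rogers_distance(p, q):
    """p and q map each locus to a list of allele frequencies summing to 1.
    Returns MRD in [0, 1]: 0 for identical lines, 1 for lines fixed for
    different alleles at every locus."""
    m = len(p)  # number of loci
    total = 0.0
    for locus in p:
        total += sum((pa - qa) ** 2 for pa, qa in zip(p[locus], q[locus]))
    return math.sqrt(total / (2.0 * m))
```

For fully homozygous inbred lines the frequencies are 0 or 1, so the distance reduces to the fraction of loci carrying different alleles, square-rooted.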
Estimation and mapping of uranium content of geological units in France.
Ielsch, G; Cuney, M; Buscail, F; Rossi, F; Leon, A; Cushing, M E
2017-01-01
In France, natural radiation accounts for most of the population's exposure to ionizing radiation. The Institute for Radiological Protection and Nuclear Safety (IRSN) carries out studies to evaluate the variability of natural radioactivity over the French territory. In this framework, the present study consisted of evaluating uranium concentrations in bedrock. The objective was to provide an estimate of the uranium content of each geological unit defined in the geological map of France (1:1,000,000). The methodology was based on the interpretation of existing geochemical data (results of whole-rock sample analysis) and knowledge of the petrology and lithology of the geological units, which yielded a first estimate of the uranium content of the rocks. This first estimate was then improved with additional information. For example, some particular regional sedimentary rocks that could present uranium contents higher than those generally observed for their lithologies were identified. Moreover, mining databases provided the locations of uranium and coal/lignite mines and thus indicated the location of particular uranium-rich rocks. The geological units, defined by boundaries extracted from the geological map of France (1:1,000,000), were finally classified into five categories based on their mean uranium content. The resulting map provides useful data for establishing the geogenic radon map of France, as well as for mapping countrywide exposure to terrestrial radiation and for evaluating the background levels of natural radioactivity used in impact assessments of anthropogenic activities.
Multilevel Modeling with Correlated Effects
ERIC Educational Resources Information Center
Kim, Jee-Seon; Frees, Edward W.
2007-01-01
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…
Model for Increasing the Power Obtained from a Thermoelectric Generator Module
NASA Astrophysics Data System (ADS)
Huang, Gia-Yeh; Hsu, Cheng-Ting; Yao, Da-Jeng
2014-06-01
We have developed a model for finding the most efficient way of increasing the power obtained from a thermoelectric generator (TEG) module under a variety of operating conditions and limitations. The model is based on both thermoelectric principles and thermal resistance circuits, because a TEG converts heat into electricity in accordance with these two theories. It is essential to take thermal contact resistance into account when estimating power generation: thermal contact resistance causes the measured temperature difference between the hot and cold sides of a TEG to overestimate the difference actually driving conversion, so the theoretical power is larger than the experimental power. The ratio of the experimental open-loop voltage to the measured temperature difference, the effective Seebeck coefficient, can be used to estimate the thermal contact resistance in the model. The ratio of the effective Seebeck coefficient to the theoretical Seebeck coefficient, the Seebeck coefficient ratio, represents the contact conditions. From this ratio, a relationship between performance and different variables can be developed. The measured power generated by a TEG module (TMH400302055; Wise Life Technology, Taiwan) is consistent with the result obtained by use of the model; the relative deviation is 10%. Use of this model to evaluate the most efficient means of increasing the generated power reveals that the TEG module generates 0.14 W when the temperature difference is 25°C and the Seebeck coefficient ratio is 0.4. Several methods can be used to triple the amount of power generated. For example, increasing the temperature difference to 43°C generates 0.41 W; improving the Seebeck coefficient ratio to 0.65 increases the power to 0.39 W; simultaneously increasing the temperature difference to 34°C and improving the Seebeck coefficient ratio to 0.5 increases the power to 0.41 W.
Choice of the appropriate method depends on the limitations of system, the cost, and the environment.
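The reported power figures are consistent with a matched-load output that scales as the square of the effective open-loop voltage, P = (s·α·ΔT)²/(4R), where s is the Seebeck coefficient ratio. The sketch below reproduces that scaling; the module Seebeck coefficient α and internal resistance R are hypothetical values chosen only so that the baseline comes out near 0.14 W, not manufacturer data.

```python
def teg_power_w(delta_t_k, seebeck_ratio, alpha=0.15, r_internal=4.0):
    """Matched-load TEG output: P = V_oc^2 / (4 R), with the open-loop
    voltage degraded by thermal contact resistance through the Seebeck
    coefficient ratio.  alpha (V/K) and r_internal (ohm) are hypothetical."""
    v_oc = seebeck_ratio * alpha * delta_t_k
    return v_oc ** 2 / (4.0 * r_internal)
```

Because P grows quadratically in s·ΔT, raising ΔT from 25°C to 43°C, or s from 0.4 to 0.65, each roughly triples the output, matching the abstract's figures.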
Fractal Dimension Analysis of Transient Visual Evoked Potentials: Optimisation and Applications.
Boon, Mei Ying; Henry, Bruce Ian; Chu, Byoung Sun; Basahi, Nour; Suttle, Catherine May; Luu, Chi; Leung, Harry; Hing, Stephen
2016-01-01
The visual evoked potential (VEP) provides a time series signal response to an external visual stimulus at the location of the visual cortex. The major VEP signal components, peak latency and amplitude, may be affected by disease processes. Additionally, the VEP contains fine detailed and non-periodic structure, of presently unclear relevance to normal function, which may be quantified using the fractal dimension. The purpose of this study is to provide a systematic investigation of the key parameters in the measurement of the fractal dimension of VEPs, in order to develop an optimal analysis protocol for application. VEP time series were mathematically transformed using delay time, τ, and embedding dimension, m, parameters. The fractal dimension of the transformed data was obtained from a scaling analysis based on straight-line fits to the number of pairs of points with separation less than r versus log(r) in the transformed space. Optimal τ, m, and scaling analysis were obtained by comparing the consistency of results using different sampling frequencies. The optimised method was then piloted on samples of normal and abnormal VEPs. Consistent fractal dimension estimates were obtained using τ = 4 ms, taking the fractal dimension to be the correlation dimension D2 of the time series based on embedding dimension m = 7 (for 3606 Hz and 5000 Hz), m = 6 (for 1803 Hz) and m = 5 (for 1000 Hz), and estimating D2 for each embedding dimension as the steepest slope of the linear scaling region in the plot of log(C(r)) vs. log(r), provided the scaling region occurred within the middle third of the plot. Piloting revealed that fractal dimensions were higher for the sampled abnormal than normal achromatic VEPs in adults (p = 0.02). Variances of the fractal dimension were higher for the abnormal than normal chromatic VEPs in children (p = 0.01). A useful analysis protocol to assess the fractal dimension of transformed VEPs has been developed.
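The delay-embedding and correlation-dimension machinery can be sketched with the Grassberger-Procaccia estimator. Below, D2 is taken as the slope of log C(r) between two fixed radii rather than the steepest slope over a full scaling region, so it is a simplification of the protocol above; the test signal and radii are illustrative.

```python
import math

def delay_embed(x, m, tau):
    """m-dimensional delay-coordinate embedding with lag tau (in samples)."""
    n = len(x) - (m - 1) * tau
    return [tuple(x[i + j * tau] for j in range(m)) for i in range(n)]

def correlation_integral(points, r):
    """C(r): fraction of point pairs closer than r."""
    n = len(points)
    count = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if math.dist(points[i], points[j]) < r
    )
    return 2.0 * count / (n * (n - 1))

def d2_estimate(x, m, tau, r1, r2):
    """Slope of log C(r) between r1 and r2, assumed inside the scaling region."""
    pts = delay_embed(x, m, tau)
    c1 = correlation_integral(pts, r1)
    c2 = correlation_integral(pts, r2)
    return math.log(c2 / c1) / math.log(r2 / r1)
```

A smooth periodic signal traces a closed one-dimensional curve in the embedding space, so its D2 estimate should come out close to 1.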
Slip model and Synthetic Broad-band Strong Motions for the 2015 Mw 8.3 Illapel (Chile) Earthquake.
NASA Astrophysics Data System (ADS)
Aguirre, P.; Fortuno, C.; de la Llera, J. C.
2017-12-01
The Mw 8.3 earthquake that occurred on September 16, 2015 west of Illapel, Chile, ruptured a 200 km section of the plate boundary between 29°S and 33°S. SAR data acquired by the Sentinel-1A satellite were used to obtain the interferogram of the earthquake and, from it, the component of the surface displacement field along the line of sight of the satellite. Based on this interferogram, the corresponding coseismic slip distribution for the earthquake was determined for different plausible finite fault geometries. The model that best fits the data is one whose rupture surface is consistent with the Slab 1.0 model, with a constant strike angle of 4° and a variable dip angle ranging from 2.7° near the trench to 24.3° down dip. Using this geometry, the maximum slip obtained is 7.52 m and the corresponding seismic moment is 3.78×10^21 N m, equivalent to a moment magnitude Mw 8.3. Calculation of the Coulomb failure stress change induced by this slip distribution shows a strong correlation between regions where stress increased as a consequence of the earthquake and the occurrence of the most relevant aftershocks, providing a consistency check for the inversion procedure and its results. The finite fault model for the Illapel earthquake is used to test a hybrid methodology for the generation of synthetic ground motions that combines a deterministic calculation of the low-frequency content with stochastic modelling of the high-frequency signal. Strong ground motions are estimated at the locations of seismic stations that recorded the Illapel earthquake. The simulations include the effect of local soil conditions, which are modelled empirically based on H/V ratios obtained from a large database of historical seismic records. Comparison of observed and synthetic records based on the 5%-damped response spectra yields satisfactory results at locations where the site response function is more robustly estimated.
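The quoted moment-to-magnitude conversion follows the standard Hanks-Kanamori relation, Mw = (2/3)(log10 M0 - 9.1) with M0 in N·m, which recovers Mw ≈ 8.3 from the inverted moment:

```python
import math

def moment_magnitude(m0_newton_m):
    """Hanks-Kanamori moment magnitude: Mw = (2/3) * (log10(M0) - 9.1),
    with the seismic moment M0 in newton metres."""
    return (2.0 / 3.0) * (math.log10(m0_newton_m) - 9.1)
```

Applied to the inverted moment of 3.78×10^21 N m, this gives Mw ≈ 8.3, as reported.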
Crespo, André L B; Spencer, Terence A; Nekl, Emily; Pusztai-Carey, Marianne; Moar, William J; Siegfried, Blair D
2008-01-01
Standardization of toxin preparations derived from Bacillus thuringiensis (Berliner) used in laboratory bioassays is critical for accurately assessing possible changes in the susceptibility of field populations of target pests. Different methods were evaluated to quantify Cry1Ab, the toxin expressed by 80% of the commercially available transgenic maize that targets the European corn borer, Ostrinia nubilalis (Hübner). We compared three methods of quantification on three different toxin preparations from independent sources: enzyme-linked immunosorbent assay (ELISA), sodium dodecyl sulfate-polyacrylamide gel electrophoresis and densitometry (SDS-PAGE/densitometry), and the Bradford assay for total protein. The results were compared to those obtained by immunoblot analysis and with the results of toxin bioassays against susceptible laboratory colonies of O. nubilalis. The Bradford method resulted in statistically higher estimates than either ELISA or SDS-PAGE/densitometry but also provided the lowest coefficients of variation (CVs) for estimates of the Cry1Ab concentration (from 2.4 to 5.4%). The CV of estimates obtained by ELISA ranged from 12.8 to 26.5%, whereas the CV of estimates obtained by SDS-PAGE/densitometry ranged from 0.2 to 15.4%. We standardized toxin concentration by using SDS-PAGE/densitometry, which is the only method specific for the 65-kDa Cry1Ab protein and is not confounded by impurities detected by ELISA and Bradford assay for total protein. Bioassays with standardized Cry1Ab preparations based on SDS-PAGE/densitometry showed no significant differences in LC50 values, although there were significant differences in growth inhibition for two of the three Cry1Ab preparations.
However, the variation in larval weight caused by toxin source was only 4% of the total variation, and we conclude that standardization of Cry1Ab production and quantification by SDS-PAGE/densitometry may improve data consistency in monitoring efforts to identify changes in insect susceptibility to Cry1Ab.
NASA Astrophysics Data System (ADS)
Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.
2015-03-01
Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means of obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, it often does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR-measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forest on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time since disturbance (TSD) and disturbance intensity information for each pixel, and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (mature, young, and combined) were modeled separately using multiple regression and RF techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest stratum with an RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure, and of using the RF algorithm to obtain improved estimates of forest canopy cover and height.
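Random Forest regression is normally run through a library implementation; as a bare-bones, dependency-free illustration of the bagging idea behind it, the sketch below averages regression stumps fit to bootstrap resamples of a single predictor (a stand-in for one Landsat spectral variable predicting canopy height). The data and parameters are synthetic, not from the study.

```python
import random

def fit_stump(xs, ys):
    """Best single-threshold regression stump (split minimizing SSE)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best = None
    for k in range(1, len(xs)):
        t = (xs[order[k - 1]] + xs[order[k]]) / 2.0
        left = [ys[i] for i in order[:k]]
        right = [ys[i] for i in order[k:]]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        sse = sum((v - ml) ** 2 for v in left) + sum((v - mr) ** 2 for v in right)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def bagged_predict(xs, ys, x_new, n_trees=25, seed=0):
    """Average prediction of stumps fit to bootstrap resamples (bagging)."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        stump = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(stump(x_new))
    return sum(preds) / n_trees
```

A full Random Forest additionally randomizes the candidate predictors at each split and grows deep trees; this sketch keeps only the bootstrap-averaging step.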
NASA Astrophysics Data System (ADS)
Verma, A. K.; Fienga, A.; Laskar, J.; Issautier, K.; Manche, H.; Gastineau, M.
2013-02-01
The Mars Global Surveyor (MGS), Mars Express (MEX), and Venus Express (VEX) spacecraft experienced several superior solar conjunctions. These conjunctions cause severe degradation of radio signals when the line of sight between the Earth and the spacecraft passes near the solar corona. The primary objective of this work is to deduce a solar corona model from spacecraft navigation data acquired at the time of solar conjunctions and to estimate its average electron density. The corrected or improved data are then used to fit the dynamical modeling of planetary motions, called planetary ephemerides. We analyzed the radio science raw data of the MGS spacecraft using the orbit determination software GINS. The range biases, obtained from GINS for MGS and provided by ESA for MEX and VEX, are then used to derive electron density profiles. These profiles are obtained for different intervals of solar distance: from 12 R⊙ to 215 R⊙ for MGS, from 6 R⊙ to 152 R⊙ for MEX, and from 12 R⊙ to 154 R⊙ for VEX. They are derived for each spacecraft individually, for ingress and egress phases separately and together, for different types of solar wind (fast, slow), and for different phases of solar activity (minimum, maximum). We compared our results with previous estimates based on in situ measurements and on solar type III radio and radio science studies made at different phases of solar activity and different solar wind states. Our results are consistent with the estimates obtained by these different methods. Moreover, fitting the planetary ephemerides with complementary data corrected for the solar corona perturbations noticeably improves the extrapolation capability of the planetary ephemerides and the estimation of asteroid masses. Tables 5, 6 and Appendix A are available in electronic form at http://www.aanda.org
Spin dynamics in the stripe-ordered buckled honeycomb lattice antiferromagnet Ba2NiTeO6
Asai, Shinichiro; Soda, Minoru; Kasatani, Kazuhiro; ...
2017-09-01
We carried out inelastic neutron scattering experiments on the buckled honeycomb lattice antiferromagnet Ba2NiTeO6, which exhibits a stripe structure at low temperature. Magnetic excitations are observed in the energy range ℏω ≲ 10 meV, with an anisotropy gap of 2 meV at 2 K. We perform spin-wave calculations to identify the spin model. The obtained microscopic parameters are consistent with the location of the stripe structure in the classical phase diagram. Furthermore, the Weiss temperature independently estimated from the bulk magnetic susceptibility is consistent with the microscopic parameters. The results reveal that the competition between the nearest-neighbor and next-nearest-neighbor interactions, together with a relatively large single-ion magnetic anisotropy, stabilizes the stripe magnetic structure.
Moench, A.F.; Garabedian, Stephen P.; LeBlanc, Denis R.
2000-01-01
An aquifer test conducted in a sand and gravel glacial outwash deposit on Cape Cod, Massachusetts, was analyzed by means of a model for flow to a partially penetrating well in a homogeneous, anisotropic unconfined aquifer. The model is designed to account for all significant mechanisms expected to influence drawdown in observation piezometers and in the pumped well. In addition to the usual fluid-flow and storage processes, these include storage in the pumped well, storage in observation piezometers, skin effects at the pumped-well screen, and drainage from the zone above the water table. The aquifer was pumped at a rate of 320 gallons per minute for 72 hours, and drawdown measurements were made in the pumped well and in 20 piezometers located at various distances from the pumped well and depths below the land surface. To facilitate the analysis, an automatic parameter estimation algorithm was used to obtain the relevant unconfined aquifer parameters, including the saturated thickness and a set of empirical parameters that relate to gradual drainage from the unsaturated zone. Drainage from the unsaturated zone is treated in this paper as a finite series of exponential terms, each of which contains one empirical parameter to be determined. It was necessary to account for gradual drainage from the unsaturated zone to obtain satisfactory agreement between measured and simulated drawdown, particularly in piezometers located near the water table. The commonly used assumption of instantaneous drainage from the unsaturated zone gives rise to large discrepancies between measured and predicted drawdown in the intermediate-time range and can result in inaccurate estimates of aquifer parameters when automatic parameter estimation procedures are used. The values of the estimated hydraulic parameters are consistent with estimates from prior studies and with what is known about the aquifer at the site.
Effects of heterogeneity at the site were small, as measured drawdowns in all piezometers and wells were very close to the values simulated for a homogeneous porous medium. The estimated values are: specific yield, 0.26; saturated thickness, 170 feet; horizontal hydraulic conductivity, 0.23 feet per minute; vertical hydraulic conductivity, 0.14 feet per minute; and specific storage, 1.3×10^-5 per foot. It was found that drawdown in only a few piezometers strategically located at depth near the pumped well yielded parameter estimates close to those obtained for the entire data set analyzed simultaneously. If the influence of gradual drainage from the unsaturated zone is not taken into account, specific yield is significantly underestimated even in these deep-seated piezometers. This helps to explain the low values of specific yield often reported for granular aquifers in the literature. Whether the entire data set or only the drawdown in selected deep-seated piezometers was used, it was found unnecessary to conduct the test for the full 72 hours to obtain accurate estimates of the hydraulic parameters. For some piezometer groups, practically identical results would be obtained from an aquifer test conducted for only 8 hours. Drawdowns measured in the pumped well and in piezometers at distant locations were diagnostic only of aquifer transmissivity.
The gas and stellar mass of low-redshift damped Lyman-α absorbers
NASA Astrophysics Data System (ADS)
Kanekar, Nissim; Neeleman, Marcel; Prochaska, J. Xavier; Ghosh, Tapasi
2018-01-01
We report Hubble Space Telescope Cosmic Origins Spectrograph far-ultraviolet and Arecibo Telescope H I 21 cm spectroscopy of six damped and sub-damped Lyman-α absorbers (DLAs and sub-DLAs, respectively) at z ≲ 0.1, which have yielded estimates of their H I column density, metallicity and atomic gas mass. This significantly increases the number of DLAs with gas mass estimates, allowing the first comparison between the gas masses of DLAs and local galaxies. Including three absorbers from the literature, we obtain H I masses ≈(0.24-5.2) × 109 M⊙, lower than the knee of the local H I mass function. This implies that massive galaxies do not dominate the absorption cross-section for low-z DLAs. We use Sloan Digital Sky Survey photometry and spectroscopy to identify the likely hosts of four absorbers, obtaining low stellar masses, ≈107-108.4 M⊙, in all cases, consistent with the hosts being dwarf galaxies. We obtain high H I 21 cm or CO emission line widths, ΔV20 ≈ 100-290 km s-1, and high gas fractions, fH I ≈ 5-100, suggesting that the absorber hosts are gas-rich galaxies with low star formation efficiencies. However, the H I 21 cm velocity spreads (≳100 km s-1) appear systematically larger than the velocity spreads in typical dwarf galaxies.
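For context, the H I masses quoted above follow from the standard optically thin 21 cm relation, M_HI/M⊙ = 2.356×10^5 D² S, with D the distance in Mpc and S the integrated line flux in Jy km s⁻¹. A one-line sketch; the example distance and flux in the test are hypothetical, not values from the observed sample.

```python
def hi_mass_msun(distance_mpc, flux_jy_km_s):
    """Optically thin 21 cm H I mass: M_HI/Msun = 2.356e5 * D^2 * S,
    with D in Mpc and S the integrated flux in Jy km/s."""
    return 2.356e5 * distance_mpc ** 2 * flux_jy_km_s
```

A source at 100 Mpc with an integrated flux of 1 Jy km s⁻¹ has M_HI ≈ 2.4×10⁹ M⊙, inside the quoted (0.24-5.2)×10⁹ M⊙ range.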
Brightness temperature - obtaining the physical properties of a non-equipartition plasma
NASA Astrophysics Data System (ADS)
Nokhrina, E. E.
2017-06-01
The limit on the intrinsic brightness temperature, attributed to the `Compton catastrophe', has been established at 10^12 K. A somewhat lower limit, of the order of 10^11.5 K, is implied if we assume that the radiating plasma is in equipartition with the magnetic field - the idea that explained why the observed cores of active galactic nuclei (AGNs) stayed below the `Compton catastrophe' limit. Recent observations with unprecedentedly high resolution by RadioAstron have revealed a systematic excess over this limit in the observed brightness temperature. We propose a means of estimating the degree of non-equipartition in AGN cores. Coupled with core-shift measurements, the method allows us to independently estimate the magnetic field strength and the particle number density at the core. We show that the ratio of magnetic energy to radiating plasma energy is of the order of 10^-5, which means the flow in the core is dominated by the particle energy. We show that the magnetic field obtained from brightness temperature measurements may be underestimated. For relativistic jets with small viewing angles we propose a non-uniform magnetohydrodynamic model and obtain a magnetic field amplitude about two orders of magnitude higher than that for the uniform model. These magnetic field amplitudes are consistent with the limiting magnetic field suggested by the `magnetically arrested disc' model.
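The degree of non-equipartition can be expressed as the ratio of magnetic to radiating-particle energy density. The sketch below is an illustrative one-zone cgs calculation, not the paper's method; the field strength, particle density and mean Lorentz factor are hypothetical placeholders.

```python
import math

M_E_C2_ERG = 8.187e-7  # electron rest energy m_e c^2 in erg

def equipartition_ratio(b_gauss, n_cm3, gamma_mean):
    # Ratio of magnetic energy density u_B = B^2 / (8 pi) to the energy
    # density of the radiating particles u_e = n * <gamma> * m_e c^2 (cgs)
    u_b = b_gauss ** 2 / (8.0 * math.pi)
    u_e = n_cm3 * gamma_mean * M_E_C2_ERG
    return u_b / u_e

# Hypothetical core values: a ratio well below 1 signals a particle-dominated flow
ratio = equipartition_ratio(b_gauss=0.01, n_cm3=100.0, gamma_mean=100.0)
```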
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallimore, David L.
2012-06-13
The measurement uncertainty estimation associated with trace element analysis of impurities in U and Pu was evaluated using the Guide to the Expression of Uncertainty in Measurement (GUM). In this evaluation the uncertainty sources were identified and standard uncertainties for the components were categorized as either Type A or Type B. The combined standard uncertainty was calculated and a coverage factor of k = 2 was applied to obtain the expanded uncertainty, U. The ICP-AES and ICP-MS methods used were developed for the multi-element analysis of U and Pu samples. A typical analytical run consists of standards, process blanks, samples, matrix spiked samples, post-digestion spiked samples and independent calibration verification standards. The uncertainty estimation was performed on U and Pu samples that had been analyzed previously as part of the U and Pu Sample Exchange Programs. Control chart results and data from the U and Pu metal exchange programs were combined with the GUM into a concentration-dependent estimate of the expanded uncertainty. Trace element uncertainties obtained using this model were compared to those obtained for trace element results as part of the Exchange programs. This process was completed for all trace elements that were determined to be above the detection limit for the U and Pu samples.
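The GUM procedure described, root-sum-square combination of Type A and Type B standard uncertainties followed by multiplication by a coverage factor, can be sketched as follows; the component values are hypothetical.

```python
import math

def combined_standard_uncertainty(std_uncertainties):
    # u_c = sqrt(sum of u_i^2) for uncorrelated input components
    # (GUM law of propagation with unit sensitivity coefficients)
    return math.sqrt(sum(u * u for u in std_uncertainties))

def expanded_uncertainty(std_uncertainties, k=2.0):
    # U = k * u_c; k = 2 gives roughly 95% coverage for a normal distribution
    return k * combined_standard_uncertainty(std_uncertainties)

# Hypothetical Type A (repeatability) and Type B (calibration) components
u_c = combined_standard_uncertainty([0.03, 0.04])
U = expanded_uncertainty([0.03, 0.04], k=2.0)
```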
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Robert Y., E-mail: rx-tang@laurentian.ca; McDonald, Nancy, E-mail: mcdnancye@gmail.com; Laamanen, Curtis, E-mail: cx-laamanen@laurentian.ca
Purpose: To develop a method to estimate the mean fractional volume of fat (ν{sup ¯}{sub fat}) within a region of interest (ROI) of a tissue sample for wide-angle x-ray scatter (WAXS) applications. A scatter signal from the ROI was obtained, and use of ν{sup ¯}{sub fat} in a WAXS fat subtraction model provided a way to estimate the differential linear scattering coefficient μ{sub s} of the remaining fatless tissue. Methods: The efficacy of the method was tested using animal tissue from a local butcher shop. Formalin-fixed samples, 5 mm in diameter and 4 mm thick, were prepared. The two main tissue types were fat and meat (fibrous). Pure as well as composite samples consisting of a mixture of the two tissue types were analyzed. For the latter samples, ν{sub fat} for the tissue columns of interest were extracted from corresponding pixels in CCD digital x-ray images using a calibration curve. The means ν{sup ¯}{sub fat} were then calculated for use in a WAXS fat subtraction model. For the WAXS measurements, the samples were interrogated with a 2.7 mm diameter 50 kV beam and the 6° scattered photons were detected with a CdTe detector subtending a solid angle of 7.75 × 10{sup −5} sr. Using the scatter spectrum, an estimate of the incident spectrum, and a scatter model, μ{sub s} was determined for the tissue in the ROI. For the composite samples, a WAXS fat subtraction model was used to estimate the μ{sub s} of the fibrous tissue in the ROI. This signal was compared to μ{sub s} of fibrous tissue obtained using a pure fibrous sample. Results: For chicken and beef composites, ν{sup ¯}{sub fat}=0.33±0.05 and 0.32 ± 0.05, respectively. The subtraction of these fat components from the WAXS composite signals provided estimates of μ{sub s} for chicken and beef fibrous tissue. The differences between the estimates and the μ{sub s} of fibrous tissue obtained with a pure sample were calculated as a function of the momentum transfer x.
A t-test showed that the mean of the differences did not differ from zero in a statistically significant way, thereby validating the methods. Conclusions: The methodology to estimate ν{sup ¯}{sub fat} in a ROI of a tissue sample via CCD x-ray imaging was quantitatively accurate. The WAXS fat subtraction model allowed μ{sub s} of fibrous tissue to be obtained from a ROI which had some fat. The fat estimation method coupled with the WAXS models can be used to compare μ{sub s} coefficients of fibroglandular and cancerous breast tissue.
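A fat subtraction of the kind described can be sketched under a simple linear volume-mixing assumption (the paper's actual WAXS model may differ in detail): if the composite coefficient is μs,mix(x) = ν̄fat·μs,fat(x) + (1 − ν̄fat)·μs,fibrous(x), the fibrous signal follows by rearrangement.

```python
def subtract_fat(mu_mix, mu_fat, v_fat):
    # Recover mu_s of the fibrous component from a composite measurement,
    # assuming linear volume mixing of the fat and fibrous tissue signals
    if not 0.0 <= v_fat < 1.0:
        raise ValueError("v_fat must lie in [0, 1)")
    return [(m - v_fat * f) / (1.0 - v_fat) for m, f in zip(mu_mix, mu_fat)]
```

Usage: with ν̄fat from imaging and μs,fat from a pure fat sample, the function maps each momentum-transfer bin of the composite curve to an estimate for the fatless tissue.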
NASA Astrophysics Data System (ADS)
Yebra, Marta; van Dijk, Albert
2015-04-01
Water use efficiency (WUE, the amount of transpiration or evapotranspiration per unit gross (GPP) or net CO2 uptake) is key in all areas of plant production and forest management applications. Therefore, mutually consistent estimates of GPP and transpiration are needed to analyse WUE without introducing artefacts that might arise from combining independently derived GPP and ET estimates. GPP and transpiration are physiologically linked at the ecosystem level by the canopy conductance (Gc). Estimates of Gc can be obtained by scaling stomatal conductance (Kelliher et al. 1995) or inferred from ecosystem-level measurements of gas exchange (Baldocchi, 2008). To derive large-scale or indeed global estimates of Gc, satellite remote sensing based methods are needed. In a previous study, we used water vapour flux estimates derived from eddy covariance flux tower measurements at 16 Fluxnet sites world-wide to develop a method to estimate Gc using MODIS reflectance observations (Yebra et al. 2013). We combined those estimates with the Penman-Monteith combination equation to derive transpiration (T). The resulting T estimates compared favourably with flux tower estimates (R2=0.82, RMSE=29.8 W m-2). Moreover, the method allowed a single parameterisation for all land cover types, which avoids artefacts resulting from land cover classification. In subsequent research (Yebra et al., in preparation) we used the same satellite-derived Gc values within a process-based but simple canopy GPP model to constrain GPP predictions. The developed model uses a 'big-leaf' description of the plant canopy to estimate the mean GPP flux as the lesser of a conductance-limited and a radiation-limited GPP rate. The conductance-limited rate was derived assuming that transport of CO2 from the bulk air to the intercellular leaf space is limited by molecular diffusion through the stomata.
The radiation-limited rate was estimated assuming that it is proportional to the absorbed photosynthetically active radiation (PAR), calculated as the product of the fraction of absorbed PAR (fPAR) and the PAR flux. The proposed algorithm performs well when evaluated against flux tower GPP (R2=0.79, RMSE=1.93 µmol m-2 s-1). Here we use the GPP and T estimates previously derived at the same 16 Fluxnet sites to analyse WUE. Satellite-derived WUE explained variation in (long-term average) WUE among plant functional types, but evergreen needleleaf forests had higher WUE than predicted. The benefit of our approach is that it uses mutually consistent estimates of GPP and T to derive canopy-level WUE without any land cover classification artefacts. References: Baldocchi, D. (2008). Turner Review No. 15: 'Breathing' of the terrestrial biosphere: lessons learned from a global network of carbon dioxide flux measurement systems. Australian Journal of Botany, 56, 26. Kelliher, F.M., Leuning, R., Raupach, M.R., & Schulze, E.D. (1995). Maximum conductances for evaporation from global vegetation types. Agricultural and Forest Meteorology, 73, 1-16. Yebra, M., Van Dijk, A., Leuning, R., Huete, A., & Guerschman, J.P. (2013). Evaluation of optical remote sensing to estimate actual evapotranspiration and canopy conductance. Remote Sensing of Environment, 129, 250-261.
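The two-limit 'big-leaf' scheme described above can be sketched as follows. Symbols and units here are assumptions for illustration: gc is canopy conductance to CO2, ca and ci are ambient and intercellular CO2 concentrations, and epsilon is a light-use efficiency.

```python
def gpp_big_leaf(gc, ca, ci, epsilon, fpar, par):
    # Conductance-limited rate: molecular diffusion of CO2 through stomata
    gpp_conductance = gc * (ca - ci)
    # Radiation-limited rate: proportional to absorbed PAR (fPAR x PAR)
    gpp_radiation = epsilon * fpar * par
    # The realised GPP is the lesser of the two limiting rates
    return min(gpp_conductance, gpp_radiation)
```

Under high light the stomata limit uptake; under low light the absorbed radiation does, which is what makes the satellite-derived Gc an effective constraint on GPP.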
Interface tension in the improved Blume-Capel model
NASA Astrophysics Data System (ADS)
Hasenbusch, Martin
2017-09-01
We study interfaces with periodic boundary conditions in the low-temperature phase of the improved Blume-Capel model on the simple cubic lattice. The interface free energy is defined by the difference of the free energy of a system with antiperiodic boundary conditions in one of the directions and that of a system with periodic boundary conditions in all directions. It is obtained by integration of differences of the corresponding internal energies over the inverse temperature. These differences can be computed efficiently by using a variance-reduced estimator that is based on the exchange cluster algorithm. The interface tension is obtained from the interface free energy by using predictions based on effective interface models. By using our numerical results for the interface tension σ and the correlation length ξ obtained in previous work, we determine the universal amplitude ratios R_{2nd,+} = σ_0 f_{2nd,+}^2 = 0.3863(6), R_{2nd,-} = σ_0 f_{2nd,-}^2 = 0.1028(1), and R_{exp,-} = σ_0 f_{exp,-}^2 = 0.1077(3). Our results are consistent with those obtained previously for the three-dimensional Ising model, confirming the universality hypothesis.
Spectral parameters and Hamaker constants of silicon hydride compounds and organic solvents.
Masuda, Takashi; Matsuki, Yasuo; Shimoda, Tatsuya
2009-12-15
Cyclopentasilane (CPS) and polydihydrosilane, which consist of hydrogen and silicon only, are unique materials that can be used to produce intrinsic silicon film in a liquid process, such as spin coating or an ink-jet method. Their wettability and solubility in general organic solvents can be estimated from Hamaker constants, which are calculated according to the Lifshitz theory. In order to calculate a Hamaker constant by the simple spectral method (SSM), it is necessary to obtain the absorption frequency and the oscillator strength function in the ultraviolet region. In this report, these physical quantities were obtained by means of an optical method. From examination of the relation between molecular structures and the ultraviolet absorption frequencies obtained from various liquid materials, it was concluded that ultraviolet absorption frequencies become smaller as electrons are delocalized. In particular, the absorption frequencies were found to be very small for CPS and polydihydrosilane due to σ-conjugation of their electrons. The Hamaker constants of CPS and polydihydrosilane were successfully calculated based on the obtained absorption frequency and oscillator strength function.
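A Hamaker constant of the general kind discussed can be estimated with the Tabor-Winterton form of the Lifshitz result for two identical media interacting across vacuum. This is a standard simplified expression, not necessarily the paper's exact SSM formulation, and the refractive index, static dielectric constant and UV absorption frequency below are hypothetical inputs; note how a smaller ν_e (delocalised, σ-conjugated electrons) lowers the dispersion term.

```python
import math

K_B = 1.380649e-23         # Boltzmann constant, J/K
H_PLANCK = 6.62607015e-34  # Planck constant, J*s

def hamaker_tabor_winterton(eps, n, nu_e, temperature=298.15):
    # Zero-frequency (entropic) contribution
    a_zero = 0.75 * K_B * temperature * ((eps - 1.0) / (eps + 1.0)) ** 2
    # Dispersion contribution, governed by the UV absorption frequency nu_e
    a_disp = (3.0 * H_PLANCK * nu_e / (16.0 * math.sqrt(2.0))) * \
             (n ** 2 - 1.0) ** 2 / (n ** 2 + 1.0) ** 1.5
    return a_zero + a_disp

# Hypothetical organic-liquid-like inputs; result is of order 1e-20 J
A = hamaker_tabor_winterton(eps=2.25, n=1.5, nu_e=3.0e15)
```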
NASA Astrophysics Data System (ADS)
Soja, A. C.; Sodré, L.; Monteiro-Oliveira, R.; Cypriano, E. S.; Lima Neto, G. B.
2018-07-01
We revisit the galaxy cluster RXC J1504-0248, a remarkable example of a structure with a strong cool core at a nearby redshift (z = 0.216). We performed a combined analysis using photometric and spectroscopic data obtained at the Gemini South Telescope. We estimated the cluster mass through gravitational lensing, obtaining M200 = 5.3 ± 0.4 × 1014 h_{70}^{-1} M⊙ within R200 = 1.56 ± 0.04 h^{-1}_{70} Mpc, in agreement with a virial mass estimate. This cluster presents a prominent filamentary structure associated with its brightest cluster galaxy, located mainly along its major axis and aligned with the X-ray emission. A combined study of three emission line diagnostic diagrams has shown that the filament emission falls in the so-called transition region of these diagrams. Consequently, several ionizing sources should be playing a meaningful role. We have argued that old stars, often invoked to explain low-ionization nuclear emission-line region emission, should not be the major source of ionization. We have noticed that most of the filamentary emission has line ratios consistent with the shock excitation limits obtained from shock models. We also found that line fluxes are related to gas velocities (here estimated from line widths) by power laws with slopes in the range expected from shock models. These models also show, however, that only ˜10 per cent of H α luminosity can be explained by shocks. We conclude that shocks probably associated with the cooling of the intracluster gas in a filamentary structure may indeed be contributing to the filament nebular emission, but cannot be the major source of ionizing photons.
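The "transition region" refers to emission-line diagnostic diagrams of the BPT type. A minimal classifier for the [N II]/Hα versus [O III]/Hβ diagram, using the published Kauffmann et al. (2003) and Kewley et al. (2001) demarcation curves, can be sketched as follows; the paper may use additional diagrams and line ratios.

```python
import math

def bpt_class(n2_ha, o3_hb):
    # x = log10([N II]/Halpha), y = log10([O III]/Hbeta)
    x = math.log10(n2_ha)
    y = math.log10(o3_hb)
    # Kauffmann et al. (2003) empirical star-forming boundary
    kauffmann = 0.61 / (x - 0.05) + 1.3 if x < 0.05 else float("-inf")
    # Kewley et al. (2001) theoretical maximum-starburst line
    kewley = 0.61 / (x - 0.47) + 1.19 if x < 0.47 else float("-inf")
    if y < kauffmann:
        return "star-forming"
    if y < kewley:
        return "transition"  # between the two curves: composite ionization
    return "AGN/shock"
```

Points between the two demarcation curves, like the filament emission here, are consistent with a mix of ionizing sources.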
NASA Astrophysics Data System (ADS)
Sebastian, Nita; Kim, Seongryong; Tkalčić, Hrvoje; Sippl, Christian
2017-04-01
The purpose of this study is to develop an integrated inference on the lithospheric structure of NE China using three passive seismic networks comprising 92 stations. The NE China plain consists of complex lithospheric domains characterised by the co-existence of complex geodynamic processes such as crustal thinning, active intraplate Cenozoic volcanism and low-velocity anomalies. To estimate lithospheric structures in greater detail, we chose to perform the joint inversion of independent data sets, namely receiver functions and surface wave dispersion curves (group and phase velocity). We perform a joint inversion based on principles of Bayesian transdimensional optimisation techniques (Kim et al., 2016). Unlike in previous studies of NE China, the complexity of the model is determined from the data in the first stage of the inversion, and the data uncertainty is computed based on Bayesian statistics in the second stage of the inversion. The computed crustal properties are retrieved from an ensemble of probable models. We obtain major structural inferences with well-constrained absolute velocity estimates, which are vital for inferring properties of the lithosphere and the bulk crustal Vp/Vs ratio. The Vp/Vs estimate obtained from joint inversions confirms the high Vp/Vs ratio (~1.98) obtained using the H-kappa method beneath some stations. Moreover, we could confirm the existence of a lower crustal velocity beneath several stations (e.g., station SHS) within the NE China plain. Based on these findings we attempt to identify a plausible origin for the structural complexity. We compile a high-resolution 3D image of the lithospheric architecture of the NE China plain.
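The H-kappa method mentioned above grid-searches crustal thickness H and κ = Vp/Vs so that predicted converted-phase delays match receiver-function arrivals. A minimal sketch using only the Moho Ps phase is shown below; the average Vp, ray parameter and thickness are hypothetical, and a full implementation would also stack the crustal multiples.

```python
import math

def ps_delay(h_km, kappa, vp=6.3, p=0.06):
    # Moho Ps-P differential time (s) for crustal thickness h (km),
    # kappa = Vp/Vs, average crustal Vp (km/s) and ray parameter p (s/km)
    vs = vp / kappa
    return h_km * (math.sqrt(1.0 / vs ** 2 - p ** 2)
                   - math.sqrt(1.0 / vp ** 2 - p ** 2))

def best_kappa(observed_delay, h_km, kappas):
    # 1-D grid search: keep the kappa whose predicted Ps delay fits best
    return min(kappas, key=lambda k: abs(ps_delay(h_km, k) - observed_delay))

# Synthetic example: recover a high kappa of 1.98 from its own predicted delay
target = ps_delay(33.0, 1.98)
kappa_hat = best_kappa(target, 33.0, [1.60 + 0.01 * i for i in range(51)])
```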
Santos, Reginaldo da S; Faria, Guilherme A; Giles, Carlos; Leite, Carlos A P; Barbosa, Herbert de S; Arruda, Marco A Z; Longo, Claudia
2012-10-24
Iron-doped TiO(2) (Fe:TiO(2)) nanoparticles were synthesized by the sol-gel method (with Fe/Ti molar ratio corresponding to 1, 3, and 5%), followed by hydrothermal treatment, drying, and annealing. A similar methodology was used to synthesize TiO(2) and α-Fe(2)O(3) nanoparticles. For comparison, a hematite/titania mixture with Fe/Ti = 4% was also investigated. Characterization of the samples using Rietveld refinement of X-ray diffraction data revealed that TiO(2) consisted of 82% anatase and 18% brookite; for Fe:TiO(2), brookite increased to 30% and hematite was also identified (0.5, 1.0, and 1.2 wt % for samples prepared with 1, 3, and 5% of Fe/Ti). For the hematite/titania mixture, Fe/Ti was estimated as 4.4%, indicating the reliability of the Rietveld method for estimating phase composition. Because the band gap energy, estimated as 3.2 eV for TiO(2), gradually decreased from 3.0 to 2.7 eV with increasing Fe content in Fe:TiO(2), it can be assumed that a Fe fraction was also inserted as dopant in the TiO(2) lattice. Extended X-ray absorption fine structure spectra obtained for the Ti K-edge and Fe K-edge indicated that absorbing Fe occupied a Ti site in the TiO(2) lattice, but hematite features were not observed. Hematite particles also could not be identified in the images obtained by transmission electron microscopy, in spite of iron identification by elemental mapping, suggesting that hematite can be segregated at the grain boundaries of Fe:TiO(2).
NASA Astrophysics Data System (ADS)
Soja, A. C.; Sodré, L., Jr.; Monteiro-Oliveira, R.; Cypriano, E. S.; Lima Neto, G. B.
2018-03-01
We revisit the galaxy cluster RXC J1504-0248, a remarkable example of a structure with a strong cool core at a nearby redshift (z = 0.216). We performed a combined analysis using photometric and spectroscopic data obtained at the Gemini South Telescope. We estimated the cluster mass through gravitational lensing, obtaining M200 = 5.3 ± 0.4 × 1014 h_{70}^{-1} M⊙ within R200 = 1.56 ± 0.04 h^{-1}_{70} Mpc, in agreement with a virial mass estimate. This cluster presents a prominent filamentary structure associated with its BCG, located mainly along its major axis and aligned with the X-ray emission. A combined study of three emission line diagnostic diagrams has shown that the filament emission falls in the so-called transition region of these diagrams. Consequently, several ionizing sources should be playing a meaningful role. We have argued that old stars, often invoked to explain LINER emission, should not be the major source of ionization. We have noticed that most of the filamentary emission has line ratios consistent with the shock excitation limits obtained from shock models. We also found that line fluxes are related to gas velocities (here estimated from line widths) by power laws with slopes in the range expected from shock models. These models also show, however, that only ˜10% of Hα luminosity can be explained by shocks. We conclude that shocks, probably associated with the cooling of the intracluster gas in a filamentary structure, may indeed be contributing to the filament nebular emission, but cannot be the major source of ionizing photons.
Nebuya, Satoru; Koike, Tomotaka; Imai, Hiroshi; Iwashita, Yoshiaki; Brown, Brian H; Soma, Kazui
2015-06-01
This paper reports on the results of a study which compares lung density values obtained from electrical impedance tomography (EIT), clinical diagnosis and CT values (HU) within a region of interest in the lung. The purpose was to assess the clinical use of lung density estimation using EIT data. In 11 patients supported by a mechanical ventilator, the consistency of regional lung density measurements as estimated by EIT was validated to assess the feasibility of its use in intensive care medicine. There were significant differences in regional lung densities recorded in the supine position between normal lungs and diseased lungs associated with pneumonia, atelectasis and pleural effusion (normal: 240 ± 71.7 kg m(-3); pneumonia: 306 ± 38.6 kg m(-3); atelectasis: 497 ± 130 kg m(-3); pleural effusion: 467 ± 113 kg m(-3); Steel-Dwass test, p < 0.05). In addition, in order to compare lung density with CT image pixels, the resolution of the CT images, which was originally 512 × 512 pixels, was reduced to 16 × 16 pixels to match that of the EIT images. The results of CT and EIT images from five patients in an intensive care unit showed a correlation coefficient of 0.66 ± 0.13 between the CT values (HU) and the lung density values (kg m(-3)) obtained from EIT. These results indicate that it may be possible to obtain a quantitative value for regional lung density using EIT.
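The CT-versus-EIT comparison described, downsampling 512 × 512 CT pixels to the 16 × 16 EIT grid and then correlating, can be sketched as follows; the tiny example array stands in for real image data.

```python
import math

def block_average(img, out_size):
    # Downsample a square image by averaging non-overlapping blocks,
    # e.g. 512x512 CT pixels -> 16x16 to match the EIT image resolution
    b = len(img) // out_size
    return [[sum(img[i * b + di][j * b + dj]
                 for di in range(b) for dj in range(b)) / (b * b)
             for j in range(out_size)]
            for i in range(out_size)]

def pearson(a, b):
    # Correlation coefficient between two flattened pixel lists
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)
```

Flattening the downsampled CT values (HU) and the EIT densities (kg m(-3)) into two lists and passing them to `pearson` yields a correlation of the kind reported (0.66 ± 0.13).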
A geostatistical approach to estimate mining efficiency indicators with flexible meshes
NASA Astrophysics Data System (ADS)
Freixas, Genis; Garriga, David; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2014-05-01
Geostatistics is a branch of statistics developed originally to predict probability distributions of ore grades for mining operations by considering the attributes of a geological formation at unknown locations as a set of correlated random variables. Mining exploitations typically aim to maintain acceptable ore grades to produce commercial products based upon demand. In this context, we present a new geostatistical methodology to estimate strategic efficiency maps that incorporate hydraulic test data, the evolution of concentrations with time obtained from chemical analysis (packer tests and production wells), as well as hydraulic head variations. The methodology is applied to a salt basin in South America. The exploitation is based on the extraction of brines through vertical and horizontal wells. Thereafter, brines are precipitated in evaporation ponds to obtain target potassium and magnesium salts of economic interest. Lithium carbonate is obtained as a byproduct of the production of potassium chloride. Aside from providing an ensemble of traditional geostatistical methods, the strength of this study lies in the new methodology developed, which focuses on finding the best sites to exploit the brines while maintaining efficiency criteria. Thus, strategic efficiency indicator maps have been developed under the specific criteria imposed by exploitation standards to incorporate new extraction wells in areas that would allow production to be maintained or improved. Results show that the uncertainty quantification of the efficiency plays a dominant role and that the use of flexible meshes, which properly describe the curvilinear features associated with vertical stratification, provides a more consistent estimation of the geological processes. Moreover, we demonstrate that the vertical correlation structure at the given salt basin is essentially linked to variations in the formation thickness, which calls for flexible meshes and non-stationary stochastic processes.
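The geostatistical workflow rests on modelling spatial correlation, whose standard building block is the experimental semivariogram, sketched below for scattered observations (the coordinates and values are hypothetical):

```python
import math

def semivariogram(coords, values, lag, tol):
    # Experimental semivariogram gamma(h) = 1/(2 N_h) * sum (z_i - z_j)^2
    # over the N_h point pairs whose separation lies within lag +/- tol
    total, count = 0.0, 0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            if abs(d - lag) <= tol:
                total += (values[i] - values[j]) ** 2
                count += 1
    return total / (2.0 * count) if count else None
```

Fitting a variogram model to these lag estimates is what feeds kriging-type interpolation of the efficiency indicators; on a flexible mesh the same statistic would be computed along the stratigraphy-following coordinates.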
Rambo, Philip L; Callahan, Jennifer L; Hogan, Lindsey R; Hullmann, Stephanie; Wrape, Elizabeth
2015-01-01
Recent efforts have contributed to significant advances in the detection of malingered performances by adults during cognitive assessment. However, children's ability to purposefully underperform has received relatively little attention. The purpose of the present investigation was to examine children's performances on common intellectual measures, as well as two symptom validity measures: the Test of Memory Malingering and the Dot-Counting Test. This was accomplished through the administration of measures to children ages 6 to 12 years old in randomly assigned full-effort (control) and poor-effort (treatment) conditions. Prior to randomization, children's general intellectual functioning (i.e., IQ) was estimated via administration of the Kaufman Brief Intelligence Test, Second Edition (KBIT-2). Multivariate analyses revealed that the conditions significantly differed on some but not all administered measures. Specifically, children's estimated IQ in the treatment condition significantly differed from the full-effort IQ initially obtained from the same children on the KBIT-2, as well as from the IQs obtained in the full-effort control condition. These findings suggest that children are fully capable of willfully underperforming during cognitive testing; however, consistent with prior investigations, some measures evidence greater sensitivity than others in evaluating effort.
Radurization of commercial freshwater fish species
NASA Astrophysics Data System (ADS)
Chuaqui-Offermanns, N.; McDougall, T. E.; Sprung, W.; Sullivan, V.
The effect of radurization on the shelf life of fresh whitefish obtained through ordinary commercial channels has been determined. Whitefish fillets irradiated at 1.2 kGy and stored at 3°C have a shelf life three times longer than that of unirradiated fish. When the fish was irradiated at 0.82 kGy, a twofold shelf-life extension was obtained. The shelf life was estimated by sensory, chemical and microbiological evaluations. Sensory evaluation involved organoleptic assessment of raw and cooked samples. Since freshwater fish do not contain trimethylamine oxide (TMAO), alternative tests for freshness were required. It was found that determination of hypoxanthine and total volatile acid number (VAN) provides excellent tests for the freshness and quality of freshwater fish; thus, these analyses were adopted. The degree of radiation-induced lipid oxidation was measured by the thiobarbituric acid (TBA) test. It was found that at doses of 0.82 and 1.2 kGy the TBA number remained within acceptable limits in all samples. Microbiological analyses consisted of assessment of the total microbial load in the sample, as well as Pseudomonas and total psychrotrophic counts. The estimated shelf lives as determined by the three separate evaluations were in very good agreement.
Identification of elastic, dielectric, and piezoelectric constants in piezoceramic disks.
Perez, Nicolas; Andrade, Marco A B; Buiochi, Flavio; Adamowski, Julio C
2010-12-01
Three-dimensional modeling of piezoelectric devices requires a precise knowledge of piezoelectric material parameters. The commonly used piezoelectric materials belong to the 6mm symmetry class, which have ten independent constants. In this work, a methodology to obtain precise material constants over a wide frequency band through finite element analysis of a piezoceramic disk is presented. Given an experimental electrical impedance curve and a first estimate for the piezoelectric material properties, the objective is to find the material properties that minimize the difference between the electrical impedance calculated by the finite element method and that obtained experimentally by an electrical impedance analyzer. The methodology consists of four basic steps: experimental measurement, identification of vibration modes and their sensitivity to material constants, a preliminary identification algorithm, and final refinement of the material constants using an optimization algorithm. The application of the methodology is exemplified using a hard lead zirconate titanate piezoceramic. The same methodology is applied to a soft piezoceramic. The errors in the identification of each parameter are statistically estimated in both cases, and are less than 0.6% for elastic constants, and less than 6.3% for dielectric and piezoelectric constants.
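The refinement step, fitting material constants by minimizing the difference between simulated and measured impedance curves, can be caricatured with a 1-D brute-force search. A real implementation evaluates a finite element model rather than the toy linear `model` below (a hypothetical stand-in), and would use a gradient-based optimizer over all ten constants.

```python
def misfit(z_measured, z_model):
    # Sum of squared relative differences between measured and simulated |Z|
    return sum(((m - c) / m) ** 2 for m, c in zip(z_measured, z_model))

def refine_constant(z_measured, model, candidates):
    # Brute-force 1-D refinement: keep the candidate with minimum misfit
    return min(candidates, key=lambda c: misfit(z_measured, model(c)))

# Toy stand-in for the FEM forward model: impedance proportional to frequency
freqs = [1.0, 2.0, 3.0]
model = lambda c: [c * f for f in freqs]
measured = model(2.5)
c_hat = refine_constant(measured, model, [2.0 + 0.1 * i for i in range(11)])
```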