NASA Astrophysics Data System (ADS)
Fan, Jishan; Li, Fucai; Nakamura, Gen
2018-06-01
In this paper we continue our study on the establishment of uniform estimates of strong solutions with respect to the Mach number and the dielectric constant to the full compressible Navier-Stokes-Maxwell system in a bounded domain Ω ⊂ R^3. In Fan et al. (Kinet Relat Models 9:443-453, 2016), the uniform estimates have been obtained for large initial data in a short time interval. Here we shall show that the uniform estimates exist globally if the initial data are small. Based on these uniform estimates, we obtain the convergence of the full compressible Navier-Stokes-Maxwell system to the incompressible magnetohydrodynamic equations for well-prepared initial data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using these means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
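As an illustration of the stochastic side of this comparison (not the authors' code), the sketch below runs a random-walk Metropolis sampler for the four Cole-Cole parameters (rho0, m, tau, c) on synthetic complex-resistivity data; the noise level, flat bounded priors, and proposal scales are assumptions.

```python
# Minimal Metropolis MCMC sketch for Cole-Cole parameter inversion.
import numpy as np

def cole_cole(omega, rho0, m, tau, c):
    # Cole-Cole complex resistivity model.
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

rng = np.random.default_rng(0)
omega = 2 * np.pi * np.logspace(-2, 3, 30)          # angular frequencies
true = np.array([100.0, 0.2, 0.01, 0.5])            # rho0, m, tau, c (invented)
sigma = 0.05                                        # assumed noise std
data = cole_cole(omega, *true) + sigma * (rng.standard_normal(30)
                                          + 1j * rng.standard_normal(30))

def log_post(p):
    rho0, m, tau, c = p
    if rho0 <= 0 or not 0 < m < 1 or tau <= 0 or not 0 < c < 1:
        return -np.inf                              # flat prior with bounds
    r = data - cole_cole(omega, rho0, m, tau, c)
    return -0.5 * np.sum(np.abs(r) ** 2) / sigma ** 2

steps = np.array([1.0, 0.01, 0.001, 0.01])          # hand-tuned proposal scales
chain = np.empty((20000, 4))
p = true * 1.3                                      # deliberately wrong start
lp = log_post(p)
for i in range(len(chain)):
    q = p + steps * rng.standard_normal(4)
    lq = log_post(q)
    if np.log(rng.random()) < lq - lp:              # Metropolis accept/reject
        p, lp = q, lq
    chain[i] = p

post = chain[5000:]                                 # discard burn-in
print("posterior means:", post.mean(axis=0))
print("posterior std  :", post.std(axis=0))         # uncertainty bounds
```

The posterior standard deviations are the "tighter uncertainty bounds" the abstract refers to; the posterior means could then seed a Gauss-Newton refinement, as the complementary workflow suggests.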
Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.
King, Leandra; Wakeley, John
2016-09-01
We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and it has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.
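A generic empirical Bayes shrinkage sketch, standing in for (but not reproducing) the authors' clade-based estimator: each locus's initial estimate is treated as truth plus noise, the prior is built from the spread of estimates across unlinked loci, and each estimate is shrunk toward the cross-locus mean. The noise variance is assumed known here for illustration.

```python
# Empirical Bayes shrinkage of per-locus TMRCA estimates (toy example).
import numpy as np

rng = np.random.default_rng(13)
n_loci = 500
true_t = rng.exponential(1.0, n_loci)        # coalescent-scale TMRCAs (simulated)
noise_var = 0.25                             # assumed estimation-noise variance
est = true_t + rng.normal(0, np.sqrt(noise_var), n_loci)  # initial estimates

prior_mean = est.mean()
prior_var = max(est.var() - noise_var, 1e-6) # method-of-moments prior variance
w = prior_var / (prior_var + noise_var)      # shrinkage weight
shrunk = prior_mean + w * (est - prior_mean) # empirical Bayes posterior means

print("RMSE raw   :", np.sqrt(np.mean((est - true_t) ** 2)))
print("RMSE shrunk:", np.sqrt(np.mean((shrunk - true_t) ** 2)))
```

The shrunk estimates have lower root-mean-square error than the raw per-locus estimates, which is the calibration gain from pooling unlinked loci.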
Estimate of Shock-Hugoniot Adiabat of Liquids from Hydrodynamics
NASA Astrophysics Data System (ADS)
Bouton, E.; Vidal, P.
2007-12-01
Shock states are generally obtained from shock velocity (D) and material velocity (u) measurements. In this paper, we propose a hydrodynamical method for estimating the (D-u) relation of Nitromethane from easily measured properties of the initial state. The method is based upon the differentiation of the Rankine-Hugoniot jump relations with the initial temperature considered as a variable and under the constraint of a unique nondimensional shock-Hugoniot. We then obtain an ordinary differential equation for the shock velocity D in the variable u. Upon integration, this method predicts the shock Hugoniot of liquid Nitromethane with a 5% accuracy for initial temperatures ranging from 250 K to 360 K.
Gaussian Decomposition of Laser Altimeter Waveforms
NASA Technical Reports Server (NTRS)
Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan
1999-01-01
We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
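The recipe above maps directly onto standard numerical tools; the sketch below follows it on a synthetic two-return waveform (inflection points of a smoothed copy seed positions and half-widths, non-negative least squares seeds amplitudes, an importance rank discards noise components, Levenberg-Marquardt refines). The waveform, noise level, and thresholds are invented.

```python
# Gaussian decomposition of a synthetic altimeter waveform.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import nnls, least_squares

def gauss(t, a, mu, s):
    return a * np.exp(-0.5 * ((t - mu) / s) ** 2)

t = np.arange(0.0, 200.0)
rng = np.random.default_rng(1)
wave = gauss(t, 1.0, 60, 5) + gauss(t, 0.6, 90, 10) + 0.005 * rng.standard_normal(t.size)

# Inflection points of a smoothed copy; baseline is masked so tail noise
# does not create spurious sign changes.
smooth = gaussian_filter1d(wave, 3)
curv = np.gradient(np.gradient(smooth))
sign_change = np.diff(np.sign(curv)) != 0
signif = smooth[:-1] > 0.05 * smooth.max()
infl = np.where(sign_change & signif)[0]
infl = infl[: len(infl) // 2 * 2]                 # consecutive pairs only

mus = 0.5 * (t[infl[0::2]] + t[infl[1::2]])       # centre between pair
sigmas = np.maximum(0.5 * (t[infl[1::2]] - t[infl[0::2]]), 1.0)

# Initial amplitudes by non-negative least squares.
G = np.column_stack([gauss(t, 1.0, m, s) for m, s in zip(mus, sigmas)])
amps, _ = nnls(G, wave)

# Rank "importance" by area (amplitude x width) and keep the significant ones.
area = amps * sigmas
keep = area > 0.05 * area.max()
n = int(keep.sum())
p0 = np.concatenate([amps[keep], mus[keep], sigmas[keep]])

def resid(p):
    a, m, s = p[:n], p[n:2 * n], p[2 * n:]
    return sum(gauss(t, ai, mi, si) for ai, mi, si in zip(a, m, s)) - wave

fit = least_squares(resid, p0, method="lm")       # Levenberg-Marquardt refinement
print(fit.x.reshape(3, -1))                       # rows: amplitudes, centres, widths
```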
NASA Astrophysics Data System (ADS)
Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.
2018-05-01
We synthesize the quasi-likelihood, maximum-likelihood, and quasioptimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The operational efficiency characteristics of the synthesized algorithms are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The accuracy losses of the estimates of the radio-signal arrival time and duration caused by the a priori ignorance of the amplitude and initial phase are determined.
Estimate of shock-Hugoniot adiabat of liquids from hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouton, E.; Vidal, P.
2007-12-12
Shock states are generally obtained from shock velocity (D) and material velocity (u) measurements. In this paper, we propose a hydrodynamical method for estimating the (D-u) relation of Nitromethane from easily measured properties of the initial state. The method is based upon the differentiation of the Rankine-Hugoniot jump relations with the initial temperature considered as a variable and under the constraint of a unique nondimensional shock-Hugoniot. We then obtain an ordinary differential equation for the shock velocity D in the variable u. Upon integration, this method predicts the shock Hugoniot of liquid Nitromethane with a 5% accuracy for initial temperatures ranging from 250 K to 360 K.
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012)] and phantom [J. Biomed. Opt. 19, 077002 (2014)] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method, including (1) high computational intensity and (2) converging to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
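A minimal sketch of the two-step idea: a coarse lookup table supplies the initial guess, and iterative fitting refines it. The one-line "reflectance model" below is a toy stand-in for the two-layered DRS forward model, and the grids and noise level are assumptions.

```python
# Two-step estimation: LUT nearest-neighbour initial guess + iterative fit.
import numpy as np
from scipy.optimize import least_squares

wl = np.linspace(450, 650, 101)                    # wavelengths, nm
def reflectance(mu_a, thickness):                  # toy forward model (assumed)
    return np.exp(-mu_a * (wl / 500.0)) * (1 - np.exp(-thickness / 0.3))

# Precomputed lookup table over a coarse parameter grid.
grid_mu = np.linspace(0.1, 2.0, 40)
grid_th = np.linspace(0.05, 1.0, 40)
lut = {(m, d): reflectance(m, d) for m in grid_mu for d in grid_th}

meas = reflectance(0.8, 0.42) + 0.002 * np.random.default_rng(2).standard_normal(wl.size)

# Step 1: initial estimation -- closest lookup-table spectrum.
init = min(lut, key=lambda k: np.sum((lut[k] - meas) ** 2))

# Step 2: iterative fitting started from the LUT guess.
fit = least_squares(lambda p: reflectance(*p) - meas, init,
                    bounds=([0.1, 0.05], [2.0, 1.0]))
print("initial guess:", init, "refined:", fit.x)
```

Starting the fit from the table's best match is what protects the second step from the local minima mentioned above, while the table itself avoids repeated expensive forward simulations.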
Parent-Child Communication and Marijuana Initiation: Evidence Using Discrete-Time Survival Analysis
Nonnemaker, James M.; Silber-Ashley, Olivia; Farrelly, Matthew C.; Dench, Daniel
2012-01-01
This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or—in the case of youth reports of communication—potentially harmful (leading to increased likelihood of marijuana initiation). PMID:22958867
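For readers unfamiliar with the method, discrete-time survival analysis expands each respondent into one row per observation period until initiation or censoring and fits a logit to that person-period file. The sketch below does this on simulated data; the hazard, effect size, and covariate are invented and not the study's estimates.

```python
# Discrete-time survival via a logit on a person-period data set.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, periods = 500, 6
talk = rng.integers(0, 2, n)                  # parent-child communication flag
base = 0.08                                   # assumed baseline per-period hazard
rows = []
for i in range(n):
    for t in range(periods):
        h = base * np.exp(0.3 * talk[i])      # assumed (harmful) association
        event = rng.random() < h
        rows.append((t, talk[i], int(event))) # one row per person-period
        if event:
            break                             # no rows after initiation

arr = np.array(rows, dtype=float)
X = sm.add_constant(arr[:, :2])               # intercept, period, communication
model = sm.Logit(arr[:, 2], X).fit(disp=0)
print(model.params)                           # log-odds coefficients
print(np.exp(model.params[2]))                # odds ratio ~ relative risk when hazard is small
```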
Automated startup of the MIT research reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwok, K.S.
1992-01-01
This summary describes the development, implementation, and testing of a generic method for performing automated startups of nuclear reactors described by space-independent kinetics under conditions of closed-loop digital control. The technique entails first obtaining a reliable estimate of the reactor's initial degree of subcriticality and then substituting that estimate into a model-based control law so as to permit a power increase from subcritical on a demanded trajectory. The estimation of subcriticality is accomplished by application of the perturbed reactivity method. The shutdown reactor is perturbed by the insertion of reactivity at a known rate. Observation of the resulting period permits determination of the initial degree of subcriticality. A major advantage of this method is that repeated estimates are obtained of the same quantity. Hence, statistical methods can be applied to improve the quality of the calculation.
Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.
Shireman, Emilie; Steinley, Douglas; Brusco, Michael J
2017-02-01
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
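Two of the compared strategies are directly available in common software; the snippet below contrasts k-means and random initialization in scikit-learn's GaussianMixture on a simple two-component data set, reporting the best log-likelihood bound reached and how many distinct local solutions the restarts find. The data set and number of restarts are illustrative, not the study's design.

```python
# Comparing EM initialization strategies for a Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (200, 2)),       # component 1
               rng.normal(4, 1, (200, 2))])      # component 2

for init in ("kmeans", "random"):
    lls = [GaussianMixture(2, init_params=init, n_init=1,
                           random_state=s).fit(X).lower_bound_
           for s in range(10)]                   # ten restarts per strategy
    print(init,
          "best:", round(max(lls), 4),
          "distinct local solutions:", len(np.unique(np.round(lls, 6))))
```

Random starts typically scatter over more local optima than k-means starts, which is exactly the dependence on starting values the abstract describes.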
Cost effectiveness of the Oregon quitline "free patch initiative".
Fellows, Jeffrey L; Bush, Terry; McAfee, Tim; Dickerson, John
2007-12-01
We estimated the cost effectiveness of the Oregon tobacco quitline's "free patch initiative" compared to the pre-initiative programme. Using quitline utilisation and cost data from the state, intervention providers and patients, we estimated annual programme use and costs for media promotions and intervention services. We also estimated annual quitline registration calls and the number of quitters and life years saved for the pre-initiative and free patch initiative programmes. Service utilisation and 30-day abstinence at six months were obtained from 959 quitline callers. We compared the cost effectiveness of the free patch initiative (media and intervention costs) to the pre-initiative service offered to insured and uninsured callers. We conducted sensitivity analyses on key programme costs and outcomes by estimating a best case and worst case scenario for each intervention strategy. Compared to the pre-intervention programme, the free patch initiative doubled registered calls, increased quitting fourfold and reduced total costs per quit by $2688. We estimated annual paid media costs were $215 per registered tobacco user for the pre-initiative programme and less than $4 per caller during the free patch initiative. Compared to the pre-initiative programme, incremental quitline promotion and intervention costs for the free patch initiative were $86 (range $22-$353) per life year saved. Compared to the pre-initiative programme, the free patch initiative was a highly cost effective strategy for increasing quitting in the population.
Novel angle estimation for bistatic MIMO radar using an improved MUSIC
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Zhang, Xiaofei; Chen, Han
2014-09-01
In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve the joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
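The subspace-plus-peak-search machinery the improved method builds on can be seen in a simplified one-dimensional MUSIC example for a uniform linear array (this is not the bistatic DOD/DOA algorithm itself; geometry, SNR, and snapshot count are assumptions).

```python
# One-dimensional MUSIC DOA sketch for a uniform linear array.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(5)
M, d, snap = 8, 0.5, 200                          # sensors, spacing (wavelengths), snapshots
doas = np.deg2rad([-20.0, 35.0])                  # two sources

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(doas)
S = rng.standard_normal((2, snap)) + 1j * rng.standard_normal((2, snap))
X = A @ S + 0.1 * (rng.standard_normal((M, snap)) + 1j * rng.standard_normal((M, snap)))

R = X @ X.conj().T / snap                         # sample covariance
w, V = np.linalg.eigh(R)
En = V[:, :-2]                                    # noise subspace (two sources known)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2  # pseudo-spectrum

idx, _ = find_peaks(P)                            # one-dimensional peak search
top = idx[np.argsort(P[idx])[-2:]]
print(np.rad2deg(np.sort(grid[top])))             # estimated DOAs, degrees
```

In the bistatic MIMO setting the same ingredients appear twice, with the initial subspace estimates localizing the two one-dimensional searches so that no two-dimensional search is needed.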
Parent-child communication and marijuana initiation: evidence using discrete-time survival analysis.
Nonnemaker, James M; Silber-Ashley, Olivia; Farrelly, Matthew C; Dench, Daniel
2012-12-01
This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or - in the case of youth reports of communication - potentially harmful (leading to increased likelihood of marijuana initiation). Copyright © 2012 Elsevier Ltd. All rights reserved.
New learning based super-resolution: use of DWT and IGMRF prior.
Gajjar, Prakash P; Joshi, Manjunath V
2010-05-01
In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low spatial resolution test image and a database consisting of low and high spatial resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT) based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since the super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an Inhomogeneous Gaussian Markov random field (IGMRF) and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting the experiments on gray scale as well as on color images. The method is compared with the standard interpolation technique and also with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks, remote surveillance where the memory, the transmission bandwidth, and the camera cost are the main constraints.
Parameter estimation in plasmonic QED
NASA Astrophysics Data System (ADS)
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond modelled as a qubit. Our goal is to estimate the β factor measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, and therefore delays its vanishing. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. Besides, the one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe at any arbitrary time considerably enhances the precision of estimation in comparison with one-qubit estimation.
Energy and maximum norm estimates for nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Olsson, Pelle; Oliger, Joseph
1994-01-01
We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)_x, and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as pointwise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.
NASA Technical Reports Server (NTRS)
Finley, Tom D.; Wong, Douglas T.; Tripp, John S.
1993-01-01
A newly developed technique for enhanced data reduction provides a procedure that makes least-squares minimization possible between data sets with unequal numbers of data points. This technique was applied in the Crew and Equipment Translation Aid (CETA) experiment on the STS-37 Shuttle flight in April 1991 to obtain the velocity profile from the acceleration data. The new technique uses a least-squares method to estimate the initial conditions and calibration constants. These initial conditions are estimated by least-squares fitting the displacements indicated by the Hall-effect sensor data to the corresponding displacements obtained from integrating the acceleration data. The velocity and displacement profiles can then be recalculated from the corresponding acceleration data using the estimated parameters. This technique, which enables instantaneous velocities to be obtained from the test data instead of only average velocities at varying discrete times, offers more detailed velocity information, particularly during periods of large acceleration or deceleration.
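The estimation step reduces to a linear least-squares problem once the acceleration is integrated: the displacement model x(t) = x0 + v0 t + s * (double integral of a) is linear in the unknown initial displacement, initial velocity, and accelerometer scale factor. The sketch below works through this on synthetic signals; the signal shapes and noise are invented, not CETA data.

```python
# Initial-condition estimation by fitting integrated acceleration to
# displacement samples (synthetic stand-in for the CETA procedure).
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 1001)
dt = t[1] - t[0]
acc = 0.5 * np.sin(0.8 * t)                       # "measured" acceleration

# Trapezoidal integration: acceleration -> velocity -> displacement.
v_int = np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * dt)))
x_int = np.concatenate(([0.0], np.cumsum(0.5 * (v_int[1:] + v_int[:-1]) * dt)))

# Synthetic Hall-effect displacement data with unknown x0, v0, scale.
x0_t, v0_t, s_t = 0.2, -0.1, 1.05
hall = x0_t + v0_t * t + s_t * x_int + 0.005 * rng.standard_normal(t.size)

# Linear least squares for (x0, v0, scale).
A = np.column_stack([np.ones_like(t), t, x_int])
x0, v0, s = np.linalg.lstsq(A, hall, rcond=None)[0]
print(x0, v0, s)

velocity = v0 + s * v_int                         # instantaneous velocity profile
```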
Study of solid rocket motors for a space shuttle booster. Volume 2, book 3: Cost estimating data
NASA Technical Reports Server (NTRS)
Vanderesch, A. H.
1972-01-01
Cost estimating data for the 156 inch diameter, parallel burn solid rocket propellant engine selected for the space shuttle booster are presented. The costing aspects of the baseline motor are considered first. From the baseline, sufficient data are obtained to provide cost estimates of alternate approaches.
Montana rest area usage : data acquisition and usage estimation.
DOT National Transportation Integrated Search
2011-02-01
The Montana Department of Transportation (MDT) has initiated research to refine the figures employed in the : estimation of Montana rest area use. This work seeks to obtain Montana-specific data related to rest area usage, : including water flow, eff...
Stable boundary conditions and difference schemes for Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Dutt, P.
1985-01-01
The Navier-Stokes equations can be viewed as an incompletely elliptic perturbation of the Euler equations. By using the entropy function for the Euler equations as a measure of energy for the Navier-Stokes equations, it was possible to obtain nonlinear energy estimates for the mixed initial boundary value problem. These estimates are used to derive boundary conditions which guarantee L2 boundedness even when the Reynolds number tends to infinity. Finally, a new difference scheme for modelling the Navier-Stokes equations in multiple dimensions is proposed, for which it is possible to obtain discrete energy estimates exactly analogous to those obtained for the differential equation.
First Attempt of Orbit Determination of SLR Satellites and Space Debris Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Deleflie, F.; Coulot, D.; Descosta, R.; Fernier, A.; Richard, P.
2013-08-01
We present an orbit determination method based on genetic algorithms. Contrary to usual estimation methods, which are mainly based on least-squares methods, these algorithms do not require any a priori knowledge of the initial state vector to be estimated. These algorithms can be applied when a new satellite is launched or for uncatalogued objects that appear in images obtained from robotic telescopes such as the TAROT ones. We show in this paper preliminary results obtained for an SLR satellite, for which tracking data acquired by the ILRS network enable accurate orbital arcs to be built at the few-centimeter level, which can be used as a reference orbit; in this case, the basic observations are made up of time series of ranges, obtained from various tracking stations. We show as well the results obtained from the observations acquired by the two TAROT telescopes on the Telecom-2D satellite operated by CNES; in that case, the observations are made up of time series of azimuths and elevations, seen from the two TAROT telescopes. The method is carried out in several steps: (i) an analytical propagation of the equations of motion, (ii) an estimation kernel based on genetic algorithms, which follows the usual steps of such approaches: initialization and evolution of a selected population, so as to determine the best parameters. Each parameter to be estimated, namely each initial Keplerian element, has to be searched within an interval that is chosen beforehand. The algorithm is expected to converge towards an optimum over a reasonable computational time.
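A bare-bones genetic algorithm of the kind sketched in step (ii): a population of candidate parameter vectors, each drawn from a preliminary search interval, is evolved by selection, crossover, and mutation. The quadratic "residual" fitness below is a toy stand-in for the range or angle residuals of a real orbit fit, and the intervals and GA settings are invented.

```python
# Toy genetic algorithm searching preliminary intervals for "orbital elements".
import numpy as np

rng = np.random.default_rng(7)
true = np.array([7000.0, 0.01, 98.0])             # toy a (km), e, i (deg)
lo = np.array([6500.0, 0.0, 90.0])                # preliminary search intervals
hi = np.array([7500.0, 0.1, 105.0])

def fitness(p):                                   # lower is better (toy residual)
    return np.sum(((p - true) / (hi - lo)) ** 2)

pop = rng.uniform(lo, hi, size=(60, 3))           # initialization
for gen in range(200):                            # evolution
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[:20]]             # selection: keep the best third
    children = []
    while len(children) < len(pop) - len(parents):
        i, j = rng.integers(0, len(parents), 2)
        alpha = rng.random(3)
        child = alpha * parents[i] + (1 - alpha) * parents[j]   # blend crossover
        child += (hi - lo) * 0.01 * rng.standard_normal(3)      # mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(p) for p in pop])]
print(best)                                       # converges near `true`
```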
NASA Astrophysics Data System (ADS)
Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing
2017-03-01
Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimation, the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL image to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces errors, including errors from the registration algorithm and imaging errors in the ASL and structural scans. Therefore, estimation of the mixture percentage directly from ASL data is greatly needed. Under the assumptions that the ASL signal follows a Gaussian distribution and that each tissue type is independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to the initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrated that the GM and WM pattern across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.
Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y
Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates, which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). A 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR derived μ-maps was also evaluated using computed tomography μ-maps as the gold standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using an MR derived μ-map with corrections as demonstrated in this work.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., preproduction engineering, initial rework, initial spoilage, pilot runs, allocable portions of the costs of... should obtain in-house engineering cost estimates identifying the detailed recurring and nonrecurring... cancellation. For example, consider that the total nonrecurring costs (see 15.408, Table 15-2, Formats for...
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-12-18
For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors and the maximum absolute error; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, the specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction obtained was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.
On the asymptotic behavior of a subcritical convection-diffusion equation with nonlocal diffusion
NASA Astrophysics Data System (ADS)
Cazacu, Cristian M.; Ignat, Liviu I.; Pazoto, Ademir F.
2017-08-01
In this paper we consider a subcritical model that involves nonlocal diffusion and a classical convective term. In spite of the nonlocal diffusion, we obtain an Oleinik type estimate similar to the case when the diffusion is local. First we prove that the entropy solution can be obtained by adding a small viscous term μu_xx and letting μ → 0. Then, by using uniform Oleinik estimates for the viscous approximation we are able to prove the well-posedness of the entropy solutions with L^1 initial data. Using a scaling argument and hyperbolic estimates given by Oleinik's inequality, we obtain the first term in the asymptotic behavior of the nonnegative solutions. Finally, the large time behavior of changing sign solutions is proved using the classical flux-entropy method and estimates for the nonlocal operator.
Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor
NASA Technical Reports Server (NTRS)
Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)
1980-01-01
The problem of determining stratum variances needed to achieve an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. Comparison of the estimated stratum variances with those obtained using LANDSAT data shows that the proposed technique is viable and performs satisfactorily when a conservative value for the field size and crop statistics at the small political subdivision level are used.
NASA Astrophysics Data System (ADS)
Godin, Paul
2005-09-01
We consider smooth three-dimensional spherically symmetric Eulerian flows of ideal polytropic gases with variable entropy, whose initial data are obtained by adding a small smooth perturbation with compact support to a constant state. Under a natural assumption, we obtain precise information on the asymptotic behavior of their lifespan when the size of the initial perturbation tends to 0. This is achieved by the construction and estimate of a suitable approximate flow.
Blow-up of solutions to a quasilinear wave equation for high initial energy
NASA Astrophysics Data System (ADS)
Li, Fang; Liu, Fang
2018-05-01
This paper deals with blow-up solutions to a nonlinear hyperbolic equation with variable exponent of nonlinearities. By constructing a new control function and using energy inequalities, the authors obtain a lower bound estimate of the L2 norm of the solution. Furthermore, concavity arguments are used to prove the nonexistence of solutions; at the same time, an estimate of the upper bound of the blow-up time is also obtained. This result extends and improves those of [1,2].
Objective estimates based on experimental data and initial and final knowledge
NASA Technical Reports Server (NTRS)
Rosenbaum, B. M.
1972-01-01
An extension of the method of Jaynes, whereby least biased probability estimates are obtained, permits such estimates to be made which account for experimental data on hand as well as prior and posterior knowledge. These estimates can be made for both discrete and continuous sample spaces. The method allows a simple interpretation of Laplace's two rules: the principle of insufficient reason and the rule of succession. Several examples are analyzed by way of illustration.
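As a concrete instance of the rule of succession mentioned above: with a uniform prior on the success probability, observing k successes in n trials gives posterior mean (k + 1)/(n + 2) for the next trial.

```python
# Laplace's rule of succession: posterior mean under a uniform prior.
k, n = 7, 10
print((k + 1) / (n + 2))   # 0.75 -> 0.666..., versus the raw frequency 0.7
```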
Small sample estimation of the reliability function for technical products
NASA Astrophysics Data System (ADS)
Lyamets, L. L.; Yakimenko, I. V.; Kanishchev, O. A.; Bliznyuk, O. A.
2017-12-01
It is demonstrated that, in the absence of large statistical samples obtained as a result of testing complex technical products for failure, statistical estimation of the reliability function of initial elements can be made by the moments method. A formal description of the moments method is given and its advantages in the analysis of small censored samples are discussed. A modified algorithm is proposed for the implementation of the moments method with the use of only the moments at which the failures of initial elements occur.
Application of biological simulation models in estimating feed efficiency of finishing steers.
Williams, C B
2010-07-01
Data on individual daily feed intake, BW at 28-d intervals, and carcass composition were obtained on 1,212 crossbred steers. Within-animal regressions of cumulative feed intake and BW on linear and quadratic days on feed were used to quantify initial and ending BW, average daily observed feed intake (OFI), and ADG over a 120-d finishing period. Feed intake was predicted (PFI) with 3 biological simulation models (BSM): a) Decision Evaluator for the Cattle Industry, b) Cornell Value Discovery System, and c) NRC update 2000, using observed growth and carcass data as input. Residual feed intake (RFI) was estimated using OFI (RFI(EL)) in a linear statistical model (LSM), and feed conversion ratio (FCR) was estimated as OFI/ADG (FCR(E)). Output from the BSM was used to estimate RFI by using PFI in place of OFI with the same LSM, and FCR was estimated as PFI/ADG. These estimates were evaluated against RFI(EL) and FCR(E). In a second analysis, estimates of RFI were obtained for the 3 BSM as the difference between OFI and PFI, and these estimates were evaluated against RFI(EL). The residual variation was extremely small when PFI was used in the LSM to estimate RFI, and this was mainly due to the fact that the same input variables (initial BW, days on feed, and ADG) were used in the BSM and LSM. Hence, the use of PFI obtained with BSM as a replacement for OFI in a LSM to characterize individual animals for RFI was not feasible. This conclusion was also supported by weak correlations (<0.4) between RFI(EL) and RFI obtained with PFI in the LSM, and very weak correlations (<0.13) between RFI(EL) and FCR obtained with PFI. In the second analysis, correlations (>0.89) for RFI(EL) with the other RFI estimates suggest little difference between RFI(EL) and any of these RFI estimates. In addition, results suggest that the RFI estimates calculated with PFI would be better able to identify animals with low OFI and small ADG as inefficient compared with RFI(EL). These results may be due to the fact that computer models predict performance on an individual-animal basis in contrast to a LSM, which estimates a fixed relationship for all animals; hence, the BSM may provide RFI estimates that are closer to the true biological efficiency of animals. In addition, BSM may facilitate comparisons across different data sets and provide more accurate estimates of efficiency in small data sets where errors would be greater with a LSM.
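The linear statistical model (LSM) route to residual feed intake is compact enough to show directly: regress observed intake on performance traits and take the residuals. The covariates and data below are simulated; the study's model used initial BW, days on feed, and ADG rather than exactly these regressors.

```python
# Residual feed intake (RFI) as the residual of a linear intake model.
import numpy as np

rng = np.random.default_rng(8)
n = 200
adg = rng.normal(1.5, 0.2, n)                     # average daily gain, kg/d
mwt = rng.normal(100.0, 8.0, n)                   # metabolic mid-weight, kg^0.75
ofi = 2.0 + 4.5 * adg + 0.08 * mwt + rng.normal(0, 0.4, n)  # observed intake

X = np.column_stack([np.ones(n), adg, mwt])
beta, *_ = np.linalg.lstsq(X, ofi, rcond=None)    # fixed relationship for all animals
rfi = ofi - X @ beta                              # residual feed intake
print(rfi.mean(), rfi.std())                      # mean ~0 by construction
```

The fixed regression is exactly the "same relationship for all animals" limitation the abstract contrasts with biological simulation models, which predict each animal's intake individually.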
Quantum Parameter Estimation: From Experimental Design to Constructive Algorithm
NASA Astrophysics Data System (ADS)
Yang, Le; Chen, Xi; Zhang, Ming; Dai, Hong-Yi
2017-11-01
In this paper we design the following two-step scheme to estimate the model parameter ω_0 of the quantum system: first we utilize the Fisher information with respect to an intermediate variable v = cos(ω_0 t) to determine an optimal initial state and to seek optimal parameters of the POVM measurement operators; second we explore how to estimate ω_0 from v by choosing t when a priori knowledge of ω_0 is available. Our optimal initial state can achieve the maximum quantum Fisher information. The formulation of the optimal time t is obtained and the complete algorithm for parameter estimation is presented. We further explore how the lower bound of the estimation deviation depends on the a priori information of the model. Supported by the National Natural Science Foundation of China under Grant Nos. 61273202, 61673389, and 61134008
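A delta-method step (our addition, consistent with the scheme's second stage rather than quoted from the paper) makes explicit how error in the intermediate variable propagates to ω_0 when the estimate of v is unbiased with small error:

```latex
\hat{\omega}_0 = \frac{\arccos \hat{v}}{t},
\qquad
\operatorname{Var}(\hat{\omega}_0) \approx \frac{\operatorname{Var}(\hat{v})}{t^{2}\,\sin^{2}(\omega_0 t)} .
```

The denominator shows why the choice of t matters: measurement times with sin(ω_0 t) away from zero (and t large) carry the most information about ω_0, which is what the optimal-time formulation exploits.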
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Rong; Li, Yongdong; Liu, Chunliang
2016-07-15
The output power fluctuations caused by the weights of macro particles used in particle-in-cell (PIC) simulations of a backward wave oscillator and a travelling wave tube are statistically analyzed. It is found that the velocities of electrons passing a specific slow-wave structure form a specific electron velocity distribution. The electron velocity distribution obtained in a PIC simulation with a relatively small weight of macro particles is considered as an initial distribution. By analyzing this initial distribution with a statistical method, estimates of the output power fluctuations caused by different weights of macro particles are obtained. The statistical method is verified by comparing the estimates with the simulation results. The fluctuations become stronger with increasing weight of macro particles, which can also be determined in reverse from the estimates of the output power fluctuations. With the weights of macro particles optimized by the statistical method, the output power fluctuations in PIC simulations are relatively small and acceptable.
Initial dynamic load estimates during configuration design
NASA Technical Reports Server (NTRS)
Schiff, Daniel
1987-01-01
This analysis includes the structural response to shock and vibration and evaluates the maximum deflections and material stresses and the potential for the occurrence of elastic instability, fatigue and fracture. The required computations are often performed by means of finite element analysis (FEA) computer programs in which the structure is simulated by a finite element model which may contain thousands of elements. The formulation of a finite element model can be time consuming, and substantial additional modeling effort may be necessary if the structure requires significant changes after initial analysis. Rapid methods for obtaining rough estimates of the structural response to shock and vibration are presented for the purpose of providing guidance during the initial mechanical design configuration stage.
NASA Technical Reports Server (NTRS)
Hill, Jesse K.; Isensee, Joan E.; Cornett, Robert H.; Bohlin, Ralph C.; O'Connell, Robert W.; Roberts, Morton S.; Smith, Andrew M.; Stecher, Theodore P.
1994-01-01
UV stellar photometry is presented for 1563 stars within a 40 arcmin circular field in the Large Magellanic Cloud (LMC), excluding the 10 arcmin x 10 arcmin field centered on R136 investigated earlier by Hill et al. (1993). Magnitudes are computed from images obtained by the Ultraviolet Imaging Telescope (UIT) in bands centered at 1615 A and 2558 A. Stellar masses and extinctions are estimated for the stars in associations using the evolutionary models of Schaerer et al. (1993), assuming the age is 4 Myr and that the local LMC extinction follows the Fitzpatrick (1985) 30 Dor extinction curve. The estimated slope of the initial mass function (IMF) for massive stars (greater than 15 solar mass) within the Lucke and Hodge (LH) associations is Γ = -1.08 +/- 0.2. Initial masses and extinctions for stars not within LH associations are estimated assuming that the stellar age is either 4 Myr or half the stellar lifetime, whichever is larger. The estimated slope of the IMF for massive stars not within LH associations is Γ = -1.74 +/- 0.3 (assuming continuous star formation), compared with Γ = -1.35 and Γ = -1.7 +/- 0.5 obtained for the Galaxy by Salpeter (1955) and Scalo (1986), respectively, and Γ = -1.6 obtained for massive stars in the Galaxy by Garmany, Conti, & Chiosi (1982). The shallower slope of the association IMF suggests not only that the star formation rate is higher in associations, but that the local conditions there favor the formation of higher mass stars. We make no corrections for binaries or incompleteness.
The initiation of boiling during pressure transients. [water boiling on metal surfaces
NASA Technical Reports Server (NTRS)
Weisman, J.; Bussell, G.; Jashnani, I. L.; Hsieh, T.
1973-01-01
The initiation of boiling of water on metal surfaces during pressure transients has been investigated. The data were obtained by a new technique in which light beam fluctuations and a pressure signal were simultaneously recorded on a dual beam oscilloscope. The results agreed with those obtained using high speed photography. It was found that, for water temperatures between 90 and 150 °C, the wall superheat required to initiate boiling during a rapid pressure transient was significantly higher than that required when the pressure was slowly reduced. This result is explained by assuming that a finite time is necessary for vapor to fill the cavity at which the bubble originates. Experimental measurements of this time are in reasonably good agreement with calculations based on the proposed theory. The theory includes a new procedure for estimating the coefficient of vaporization.
An Optimization-Based State Estimation Framework for Large-Scale Natural Gas Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jalving, Jordan; Zavala, Victor M.
We propose an optimization-based state estimation framework to track internal space-time flow and pressure profiles of natural gas networks during dynamic transients. We find that the estimation problem is ill-posed (because of the infinite-dimensional nature of the states) and that this leads to instability of the estimator when short estimation horizons are used. To circumvent this issue, we propose moving horizon strategies that incorporate prior information. In particular, we propose a strategy that initializes the prior using steady-state information and compare its performance against a strategy that does not initialize the prior. We find that both strategies are capable of tracking the state profiles, but we also find that superior performance is obtained with steady-state prior initialization. We also find that, under the proposed framework, pressure sensor information at junctions is sufficient to track the state profiles. We also derive approximate transport models and show that some of these can be used to achieve significant computational speed-ups without sacrificing estimation performance. We show that the estimator can be easily implemented in the graph-based modeling framework Plasmo.jl and use a multipipeline network study to demonstrate the developments.
Blow-up for a three dimensional Keller-Segel model with consumption of chemoattractant
NASA Astrophysics Data System (ADS)
Jiang, Jie; Wu, Hao; Zheng, Songmu
2018-04-01
We investigate blow-up properties for the initial-boundary value problem of a Keller-Segel model with consumption of chemoattractant when the spatial dimension is three. Through a kinetic reformulation of the Keller-Segel system, we first derive some higher-order estimates and obtain certain blow-up criteria for the local classical solutions. These blow-up criteria generalize the results in [4,5] from the whole space R^3 to the case of a bounded smooth domain Ω ⊂ R^3. A lower global blow-up estimate on ‖n‖_L∞(Ω) is also obtained based on our higher-order estimates. Moreover, we prove local non-degeneracy for blow-up points.
Interpreting Repeated Temperature-Depth Profiles for Groundwater Flow
NASA Astrophysics Data System (ADS)
Bense, Victor F.; Kurylyk, Barret L.; van Daal, Jonathan; van der Ploeg, Martine J.; Carey, Sean K.
2017-10-01
Temperature can be used to trace groundwater flows due to thermal disturbances of subsurface advection. Prior hydrogeological studies that have used temperature-depth profiles to estimate vertical groundwater fluxes have either ignored the influence of climate change by employing steady-state analytical solutions or applied transient techniques to study temperature-depth profiles recorded at only a single point in time. Transient analyses of a single profile are predicated on the accurate determination of an unknown profile at some time in the past to form the initial condition. In this study, we use both analytical solutions and a numerical model to demonstrate that boreholes with temperature-depth profiles recorded at multiple times can be analyzed to either overcome the uncertainty associated with estimating unknown initial conditions or to form an additional check for the profile fitting. We further illustrate that the common approach of assuming a linear initial temperature-depth profile can result in significant errors for groundwater flux estimates. Profiles obtained from a borehole in the Veluwe area, Netherlands in both 1978 and 2016 are analyzed for an illustrative example. Since many temperature-depth profiles were collected in the late 1970s and 1980s, these previously profiled boreholes represent a significant and underexploited opportunity to obtain repeat measurements that can be used for similar analyses at other sites around the world.
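For contrast with the repeated-profile approach, the classical single-profile, steady-state analysis is short enough to show: fit the Bredehoeft-Papadopulos (1965) conduction-advection profile to one temperature-depth log and convert the fitted Peclet number to a vertical Darcy flux. Profile data, boundary temperatures, and thermal properties below are assumed values, not the Veluwe data.

```python
# Steady-state vertical groundwater flux from one temperature-depth profile.
import numpy as np
from scipy.optimize import curve_fit

L, T0, TL = 60.0, 10.0, 13.0                      # layer thickness (m), boundary temps (C)
z = np.linspace(0, L, 25)                         # measurement depths

def bp_profile(z, Pe):
    # Bredehoeft-Papadopulos steady conduction-advection solution.
    return T0 + (TL - T0) * np.expm1(Pe * z / L) / np.expm1(Pe)

obs = bp_profile(z, 1.2) + 0.02 * np.random.default_rng(9).standard_normal(z.size)

Pe, _ = curve_fit(bp_profile, z, obs, p0=[0.5])   # fit the Peclet number
k, rho_c = 2.0, 4.19e6                            # thermal conductivity W/m/K, water heat capacity J/m^3/K
q = Pe[0] * k / (rho_c * L)                       # vertical Darcy flux, m/s
print(Pe[0], q, q * 3.15e7, "m/yr")
```

The paper's point is that this steady-state picture ignores climate-driven transients; repeated profiles let the transient solution be anchored without guessing an unknown initial condition.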
NASA Technical Reports Server (NTRS)
Nigro, N. J.; Elkouh, A. F.
1975-01-01
The attitude of the balloon system can be determined as a function of time if: (a) a method for simulating the motion of the system is available, and (b) the initial state is known. The initial state is obtained by fitting the system motion (as measured by sensors) to the corresponding output predicted by the mathematical model. In the case of the LACATE experiment the sensors consisted of three orthogonally oriented rate gyros and a magnetometer, all mounted on the research platform. The initial state was obtained by fitting the angular velocity components measured with the gyros to the corresponding values obtained from the solution of the math model. A block diagram illustrating the attitude determination process employed for the LACATE experiment is shown. The process consists of three essential parts: a process for simulating the balloon system, an instrumentation system for measuring the output, and a parameter estimation process for systematically and efficiently solving for the initial state. Results are presented and discussed.
Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino
2018-02-22
CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO2 industrial point source, located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventorial data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, markedly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
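Numerically, the aircraft mass balance amounts to integrating the CO2 enhancement above background, multiplied by the wind component normal to a downwind flight plane, over that plane. The sketch below does this on a made-up plume cross-section; the grid, wind speed, and concentrations are invented, whereas the real study used measured crosswind transects at several altitudes.

```python
# Toy mass-balance emission estimate through a downwind vertical plane.
import numpy as np

y = np.linspace(-2000.0, 2000.0, 81)              # crosswind distance, m
z = np.linspace(0.0, 500.0, 26)                   # altitude, m
Y, Z = np.meshgrid(y, z)

# CO2 enhancement above background, mol/m^3 (assumed Gaussian plume slice).
enh = 8.0e-6 * np.exp(-(Y / 600.0) ** 2 - ((Z - 150.0) / 80.0) ** 2)
u = 4.0                                           # wind normal to the plane, m/s

flux_mol_s = u * np.trapz(np.trapz(enh, y, axis=1), z)  # mol/s through the plane
print(flux_mol_s * 44.0 / 1000.0, "kg CO2 per second")  # 44 g/mol
```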
Conservation laws with coinciding smooth solutions but different conserved variables
NASA Astrophysics Data System (ADS)
Colombo, Rinaldo M.; Guerra, Graziano
2018-04-01
Consider two hyperbolic systems of conservation laws in one space dimension with the same eigenvalues and (right) eigenvectors. We prove that solutions to Cauchy problems with the same initial data differ at third order in the total variation of the initial datum. As a first application, relying on the classical Glimm-Lax result (Glimm and Lax in Decay of solutions of systems of nonlinear hyperbolic conservation laws. Memoirs of the American Mathematical Society, No. 101. American Mathematical Society, Providence, 1970), we obtain estimates improving those in Saint-Raymond (Arch Ration Mech Anal 155(3):171-199, 2000) on the distance between solutions to the isentropic and non-isentropic inviscid compressible Euler equations, under general equations of state. Further applications are to the general scalar case, where rather precise estimates are obtained, to an approximation by Di Perna of the p-system and to a traffic model.
Simplified data reduction methods for the ECT test for mode 3 interlaminar fracture toughness
NASA Technical Reports Server (NTRS)
Li, Jian; O'Brien, T. Kevin
1995-01-01
Simplified expressions for the parameter controlling the load point compliance and the strain energy release rate were obtained for the Edge Crack Torsion (ECT) specimen for mode 3 interlaminar fracture toughness. Data reduction methods for mode 3 toughness based on the present analysis are proposed. The effect of the transverse shear modulus, G_23, on mode 3 interlaminar fracture toughness characterization was evaluated. Parameters influenced by the transverse shear modulus were identified. Analytical results indicate that a higher value of G_23 results in a lower load point compliance and a lower mode 3 toughness estimate. The effect of G_23 on the mode 3 toughness using the ECT specimen is negligible when an appropriate initial delamination length is chosen. A conservative estimate of mode 3 toughness can be obtained by assuming G_23 = G_12 for any initial delamination length.
Improving the quality of parameter estimates obtained from slug tests
Butler, J.J.; McElwee, C.D.; Liu, W.
1996-01-01
The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (H_0) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of H_0 to be obtained; (4) data-acquisition equipment that enables a large quantity of high quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.
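In the spirit of guidelines (1)-(3), the sketch below applies one common analysis (Hvorslev) to two repeated synthetic tests with different initial displacements H_0 and checks that the conductivity estimates agree. The well geometry, time lag, and noise are assumed values; the KGS guidance covers the choice of analysis method, and Hvorslev is used here only as a familiar example.

```python
# Hvorslev slug-test analysis of repeated tests with different H0.
import numpy as np

rng = np.random.default_rng(10)
rc, R, Le = 0.05, 0.05, 3.0                       # casing radius, well radius, screen length (m)
t = np.linspace(1, 120, 60)                       # time since initiation, s
T0_true = 35.0                                    # basic time lag (assumed)

for H0 in (0.5, 1.0):                             # guideline (2): two displacements
    H = H0 * np.exp(-t / T0_true) * np.exp(0.01 * rng.standard_normal(t.size))
    slope = np.polyfit(t, np.log(H / H0), 1)[0]   # ln(H/H0) = -t/T0
    T0 = -1.0 / slope
    K = rc ** 2 * np.log(Le / R) / (2 * Le * T0)  # Hvorslev conductivity estimate
    print(f"H0={H0} m: T0={T0:.1f} s, K={K:.2e} m/s")
```

Agreement of K across tests with different H_0 is one of the pre/post-analysis consistency checks of guideline (7); systematic disagreement would flag a non-instantaneous initiation or a poor H_0 estimate.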
Spectral estimates of intercepted solar radiation by corn and soybean canopies
NASA Technical Reports Server (NTRS)
Gallo, K. P.; Brooks, C. C.; Daughtry, C. S. T.; Bauer, M. E.; Vanderbilt, V. C.
1982-01-01
Attention is given to the development of methods for combining spectral and meteorological data in crop yield models which are capable of providing accurate estimates of crop condition and yields throughout the growing season. The present investigation is concerned with initial tests of these concepts using spectral and agronomic data acquired in controlled experiments. The data were acquired at the Purdue University Agronomy Farm, 10 km northwest of West Lafayette, Indiana. Data were obtained throughout several growing seasons for corn and soybeans. Five methods or models for predicting yields were examined. On the basis of the obtained results, it is concluded that estimating intercepted solar radiation using spectral data is a viable approach for merging spectral and meteorological data in crop yield models.
NASA Astrophysics Data System (ADS)
Kompany-Zareh, Mohsen; Khoshkam, Maryam
2013-02-01
This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA does not absorb in the visible region of interest and thus no closure rank-deficiency problem existed. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Accordingly, three types of model-based procedures were applied to estimate the rate constants of the kinetic system, based on the Newton-Gauss-Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters when applying appropriate constraints and adjustable initial concentrations of reagents.
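A minimal sketch of this kind of model-based fitting may help make the workflow concrete: integrate the kinetic model for trial rate constants, reconstruct the data matrix from the concentration profiles, and refine the constants with a Levenberg-Marquardt-type least-squares step. Everything below (species, values, and the simplification that the pure spectra are known rather than estimated by projection at each iteration, as in the paper) is illustrative only.

```python
# Hedged sketch (not the authors' code) of rate-constant fitting for a
# second-order consecutive reaction A + B -> C -> D via an LM-type refinement.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def kinetics(t, c, k1, k2):
    a, b, inter, prod = c
    r1 = k1 * a * b          # second-order formation of the intermediate
    r2 = k2 * inter          # first-order decay of the intermediate
    return [-r1, -r1, r1 - r2, r2]

def profiles(k, t, c0):
    sol = solve_ivp(kinetics, (t[0], t[-1]), c0, t_eval=t, args=tuple(k))
    return sol.y.T           # concentration profiles C (times x species)

def residuals(k, t, c0, data, spectra):
    # "original data-based" objective: residual between the measured and the
    # reconstructed absorbance matrix D ~ C @ S
    return (data - profiles(k, t, c0) @ spectra).ravel()

# --- synthetic demonstration with hypothetical values ---
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 40)
c0 = [1e-3, 1.2e-3, 0.0, 0.0]                  # assumed initial concentrations
spectra = np.abs(rng.normal(size=(4, 20)))     # stand-in pure spectra S
data = profiles([8.0, 0.15], t, c0) @ spectra  # simulated measurements
fit = least_squares(residuals, x0=[5.0, 0.1], method="lm",
                    args=(t, c0, data, spectra))
print("estimated k1, k2:", fit.x)
```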
NASA Astrophysics Data System (ADS)
Zheng, Qin; Yang, Zubin; Sha, Jianxin; Yan, Jun
2017-02-01
In predictability problem research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP has been successfully applied in estimation of the lower bound of maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, through the two-dimensional Ikeda model, investigates the impacts of nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a large probability of producing local CNOPs, hence giving a false estimate of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, a kind of intelligent algorithm, is introduced to solve this problem. The method using PSO to compute CNOP is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a population-based heuristic search algorithm, it can overcome the impact of nonlinearity and the disturbance from multiple extremes of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. The result shows that the estimate presented by PSO-CNOP is closer to the true value than that of ADJ-CNOP as the forecast time increases.
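As a concrete illustration of the PSO-CNOP idea, the hedged sketch below searches for the constrained initial perturbation of the two-dimensional Ikeda map that maximizes the prediction error at the forecast time. The map parameter, constraint radius, and PSO coefficients are assumptions, not the paper's settings.

```python
# Minimal PSO-CNOP sketch: maximize the forecast-time prediction error over
# initial perturbations constrained to a Euclidean ball of given radius.
import numpy as np

def ikeda(state, steps, u=0.9):
    x, y = state
    for _ in range(steps):
        t = 0.4 - 6.0 / (1.0 + x * x + y * y)
        x, y = (1.0 + u * (x * np.cos(t) - y * np.sin(t)),
                u * (x * np.sin(t) + y * np.cos(t)))
    return np.array([x, y])

def cost(delta, x0, steps):
    # prediction error caused by the initial perturbation at the forecast time
    return np.linalg.norm(ikeda(x0 + delta, steps) - ikeda(x0, steps))

def pso_cnop(x0, steps, radius, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)

    def project(p):  # keep every particle inside the constraint ball
        n = np.linalg.norm(p, axis=1, keepdims=True)
        return np.where(n > radius, p * radius / np.maximum(n, 1e-12), p)

    pos = project(rng.uniform(-radius, radius, (n_particles, 2)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([cost(p, x0, steps) for p in pos])
    gbest = pbest[pbest_val.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = project(pos + vel)
        vals = np.array([cost(p, x0, steps) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()]
    return gbest, pbest_val.max()

cnop, err = pso_cnop(x0=np.array([0.5, 0.5]), steps=10, radius=0.1)
print("CNOP:", cnop, "max prediction error:", err)
```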
Vision-Based SLAM System for Unmanned Aerial Vehicles
Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni
2016-01-01
The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense, the estimate of the trajectory of the vehicle is considerably improved compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy. PMID:26999131
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients, and then obtain a nearly exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of +/-4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
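The initial, median-based step has a well-known closed form (the Donoho-Johnstone MAD rule on the finest diagonal wavelet subband); a short sketch follows, with the paper's curve-fitting refinement omitted. The wavelet choice and test image are assumptions for illustration.

```python
# Initial noise estimate: sigma ~ median(|cD|) / 0.6745 on the finest
# diagonal detail subband of a 2-D wavelet transform.
import numpy as np
import pywt

def initial_sigma(image):
    _, (_, _, cD) = pywt.dwt2(image.astype(float), 'db8')
    return np.median(np.abs(cD)) / 0.6745  # MAD-based Gaussian-noise sigma

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 255, 256), (256, 1))   # smooth synthetic image
noisy = clean + rng.normal(0, 10.0, clean.shape)
print("estimated sigma:", initial_sigma(noisy))       # close to the true 10
```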
A comparison of tools for remotely estimating leaf area index in loblolly pine plantations
Janet C. Dewey; Scott D. Roberts; Isobel Hartley
2006-01-01
Light interception is critical to forest growth and is largely determined by foliage area per unit ground, the measure of which is leaf area index (LAI). Summer and winter LAI estimates were obtained in a 17-year-old loblolly pine (Pinus taeda L.) spacing trial in Mississippi, using three replications with initial spacings of 1.5, 2.4, and 3.0 m....
NASA Astrophysics Data System (ADS)
Conde, M. M.; Rovere, M.; Gallo, P.
2017-12-01
An exhaustive study by molecular dynamics has been performed to analyze the factors that enhance the precision of the technique of direct coexistence for a system of ice and liquid water. The factors analyzed are the stochastic nature of the method, finite size effects, and the influence of the initial ice configuration used. The results obtained show that the precision of estimates obtained through the technique of direct coexistence is markedly affected by finite size effects, requiring systems with a large number of molecules to reduce the error bar of the melting point. This increase in size causes an increase in the simulation time, but a highly accurate estimate of the melting point is important, for example, in studies of the ice surface. We also verified that the choice of the initial ice Ih configuration with different proton arrangements does not significantly affect the estimate of the melting point. Importantly, this study leads us to estimate the melting point at ambient pressure of two of the most popular models of water, TIP4P/2005 and TIP4P/Ice, with the greatest precision to date.
The effect of tracking network configuration on GPS baseline estimates for the CASA Uno experiment
NASA Technical Reports Server (NTRS)
Wolf, S. Kornreich; Dixon, T. H.; Freymueller, J. T.
1990-01-01
The effect of the tracking network on long (greater than 100 km) GPS baseline estimates was evaluated using various subsets of the global tracking network initiated by the first Central and South America (CASA Uno) experiment. It was found that the best results could be obtained with a global tracking network consisting of three U.S. stations, two sites in the southwestern Pacific, and two sites in Europe. In comparison with smaller subsets, this global network improved the baseline repeatability, the resolution of carrier phase cycle ambiguities, and the formal errors of the orbit estimates.
NASA Astrophysics Data System (ADS)
Khwaja, Tariq S.; Mazhar, Mohsin Ali; Niazi, Haris Khan; Reza, Syed Azer
2017-06-01
In this paper, we present the design of a proposed optical rangefinder to determine the distance of a semi-reflective target from the sensor module. The sensor module deploys a simple Tunable Focus Lens (TFL), a Laser Source (LS) with a Gaussian beam profile, and a digital beam profiler/imager to achieve its desired operation. We show that, owing to the nature of existing measurement methodologies, previous attempts to use a simple TFL to estimate target distance mostly deliver "one-shot" distance measurements instead of obtaining and using a larger dataset, which can significantly reduce the effect of largely incorrect individual data points on the final distance estimate. Using a measurement dataset and calculating averages also helps smooth out measurement errors in individual data points by effectively low-pass filtering unexpectedly odd measurement offsets. In this paper, we show that a simple setup deploying an LS, a TFL and a beam profiler or imager is capable of delivering an entire measurement dataset, thus effectively mitigating the effects on measurement accuracy associated with "one-shot" measurement techniques. The technique we propose allows a Gaussian beam from an LS to pass through the TFL. Tuning the focal length of the TFL alters the spot size of the beam at the beam imager plane. Recording these different spot radii at the plane of the beam profiler for each unique setting of the TFL provides a measurement dataset from which a significantly improved estimate of the target distance can be obtained, as opposed to relying on a single measurement. We show that an iterative least-squares curve fit on the recorded data allows us to estimate distances of remote objects very precisely. We also show that, using basic ray-optics approximations, we obtain an initial seed value for the distance estimate and subsequently use this value to obtain a more precise estimate through iterative residual reduction in the least-squares sense. In our experiments, we use a MEMS-based Digital Micro-mirror Device (DMD) as a beam imager/profiler, as it delivers an accurate estimate of a Gaussian beam profile. The proposed method, its working and the distance estimation methodology are discussed in detail. As a proof of concept, we back our claims with initial experimental results.
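A hedged sketch of the dataset-based estimate follows: record spot radii for a sweep of TFL settings, seed the distance from the ray-optics minimum, then refine by least squares. The simple model w(P) = w0*|1 - z*P| (collimated input, one-way path) is an assumption standing in for the full Gaussian-beam propagation used in the paper, and all numbers are invented.

```python
# Fit the spot-radius-vs-lens-power sweep to recover the target distance z.
import numpy as np
from scipy.optimize import least_squares

def spot_radius(params, power):
    w0, z = params
    return w0 * np.abs(1.0 - z * power)   # simplified ray-optics spot model

# synthetic measurement sweep (hypothetical values)
power = np.linspace(0.5, 3.0, 60)         # TFL optical power, 1/m
true = (2e-3, 0.8)                        # w0 = 2 mm, target at 0.8 m
rng = np.random.default_rng(2)
meas = spot_radius(true, power) + rng.normal(0, 5e-5, power.size)

z_seed = 1.0 / power[np.argmin(meas)]     # ray-optics "one-shot" seed value
fit = least_squares(lambda p: spot_radius(p, power) - meas, x0=[1e-3, z_seed])
print("seed distance:", z_seed, "refined distance:", fit.x[1])
```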
He, Ning; Sun, Hechun; Dai, Miaomiao
2014-05-01
To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, drug degradation extent, the number of humidity and temperature levels, the humidity and temperature ranges, and the average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from a classical isothermal experiment at constant humidity. The estimates were more accurate and precise when controlling the extent of drug degradation, changing the humidity and temperature range, or setting the average temperature closer to room temperature. Compared with isothermal experiments at constant humidity, the proposed method saves time, labor, and materials.
NASA Astrophysics Data System (ADS)
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
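To make the fitting procedure concrete, the sketch below fits both depletion forms to synthetic early-time CMD data and integrates each fitted curve to an initial-mass estimate. The functional forms, units, and values are illustrative assumptions, not site data from the study.

```python
# Fit exponential and power mass-depletion functions to early-time CMD data
# and integrate the fitted curves to estimate the initial contaminant mass.
import numpy as np
from scipy.optimize import curve_fit

def cmd_exp(t, c0, k):
    return c0 * np.exp(-k * t)            # integrates to M0 = c0 / k

def cmd_pow(t, c0, b):
    return c0 * (1.0 + t) ** (-b)         # integrates to M0 = c0/(b-1), b > 1

t = np.linspace(0, 24, 25)                # months of SVE operation (assumed)
rng = np.random.default_rng(3)
data = cmd_exp(t, 50.0, 0.15) * rng.lognormal(0, 0.05, t.size)  # kg/month

early = t <= t.max() / 3                  # early-time subset (initial 1/3)
p_exp, _ = curve_fit(cmd_exp, t[early], data[early], p0=[40.0, 0.1])
p_pow, _ = curve_fit(cmd_pow, t[early], data[early], p0=[40.0, 1.5])
m_exp = p_exp[0] / p_exp[1]
m_pow = p_pow[0] / (p_pow[1] - 1.0) if p_pow[1] > 1 else np.inf  # diverges if b <= 1
print("exponential initial-mass estimate:", m_exp)
print("power-function initial-mass estimate:", m_pow)
```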
On curve and surface stretching in turbulent flow
NASA Technical Reports Server (NTRS)
Etemadi, Nassrollah
1989-01-01
Cocke (1969) proved that in incompressible, isotropic turbulence the average material line (material surface) elements increase in comparison with their initial values. Good estimates of how much they increase in terms of the eigenvalues of the Green deformation tensor were rigorously obtained.
Model-Based, Noninvasive Monitoring of Intracranial Pressure
2012-10-01
nICP) estimate requires simultaneous measurement of the waveforms of arterial blood pressure (ABP), obtained via radial artery catheter or finger...initial database comprises subarachnoid hemorrhage patients in neuro-intensive care at our partner hospital, for whom ICP, ABP and CBFV are currently
Age of smoking initiation among adolescents in Africa.
Veeranki, Sreenivas P; John, Rijo M; Ibrahim, Abdallah; Pillendla, Divya; Thrasher, James F; Owusu, Daniel; Ouma, Ahmed E O; Mamudu, Hadii M
2017-01-01
To estimate the prevalence and identify correlates of the age of smoking initiation among adolescents in Africa. Data (n = 16,519) were obtained from nationally representative Global Youth Tobacco Surveys in nine West African countries. The study outcome was adolescents' age of smoking initiation, categorized into six groups: ≤7, 8 or 9, 10 or 11, 12 or 13, 14 or 15, and never-smoker. Explanatory variables included sex, parental or peer smoking behavior, exposure to tobacco industry promotions, and knowledge about smoking harm. Weighted multinomial logit models were fitted to determine correlates associated with adolescents' age of smoking initiation. The age of smoking initiation was as early as ≤7 years; prevalence estimates ranged from 0.7% in Ghana at 10 or 11 years of age to 9.6% in Cote d'Ivoire at 12 or 13 years of age. Male sex, exposure to parental or peer smoking, and exposure to industry promotions were identified as significant correlates. West African policymakers should adopt a preventive approach consistent with the World Health Organization Framework Convention on Tobacco Control to prevent adolescents from initiating smoking and becoming regular smokers.
Sensitivity of Forecast Skill to Different Objective Analysis Schemes
NASA Technical Reports Server (NTRS)
Baker, W. E.
1979-01-01
Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.
Global solutions and finite time blow-up for fourth order nonlinear damped wave equation
NASA Astrophysics Data System (ADS)
Xu, Runzhang; Wang, Xingchang; Yang, Yanbing; Chen, Shaohua
2018-06-01
In this paper, we study the initial boundary value problem and global well-posedness for a class of fourth order wave equations with a nonlinear damping term and a nonlinear source term, which was introduced to describe the dynamics of a suspension bridge. The global existence, decay estimate, and blow-up of solution at both subcritical (E(0) < d) and critical (E(0) = d) initial energy levels are obtained. Moreover, we prove the blow-up in finite time of solution at the supercritical initial energy level (E(0) > 0).
Inertial sensor-based smoother for gait analysis.
Suh, Young Soo
2014-12-17
An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimates than filter-based algorithms by using all of the sensor data instead of only the current sensor data. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain an initial foot motion estimate. In the second part, the error in the initial estimate is compensated using a smoother, where the problem is formulated as a quadratic optimization problem. An efficient solution of the quadratic optimization problem is given using its sparse structure. Through experiments, it is shown that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is moving off the floor: the z-axis position error squared sum (total time: 3.47 s) when the foot is in the air is 0.0807 m2 (Kalman filter) and 0.0020 m2 (the proposed smoother).
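The quadratic-optimization view of smoothing can be shown in a few lines: stack measurement and dynamics residuals into one sparse least-squares problem and solve for all states jointly. The sketch below uses a constant-velocity toy model rather than the paper's foot-mounted inertial model; noise levels and data are assumptions.

```python
# Smoothing as one sparse least-squares solve over the whole trajectory.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def smooth(z, dt, q=1e-2, r=1e-1):
    n = z.size
    F = np.array([[1.0, dt], [0.0, 1.0]])         # constant-velocity dynamics
    A = sp.lil_matrix((n + 2 * (n - 1), 2 * n))
    b = np.zeros(n + 2 * (n - 1))
    for k in range(n):                            # measurement residuals z_k - H x_k
        A[k, 2 * k] = 1.0 / np.sqrt(r)            # H = [1, 0]: position only
        b[k] = z[k] / np.sqrt(r)
    for k in range(n - 1):                        # dynamics residuals x_{k+1} - F x_k
        i = n + 2 * k
        A[i:i + 2, 2 * k:2 * k + 2] = -F / np.sqrt(q)
        A[i:i + 2, 2 * k + 2:2 * k + 4] = np.eye(2) / np.sqrt(q)
    A = A.tocsr()
    x = spsolve((A.T @ A).tocsc(), A.T @ b)       # sparse normal equations
    return x.reshape(n, 2)                        # smoothed [position, velocity]

t = np.arange(0, 5, 0.05)
z = np.sin(t) + np.random.default_rng(4).normal(0, 0.1, t.size)
xs = smooth(z, dt=0.05)
print("RMS position error:", np.sqrt(np.mean((xs[:, 0] - np.sin(t)) ** 2)))
```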
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of Richards' equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of Richards' equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
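The core of gradient matching is to fit a Gaussian process to the observed states and require the ODE right-hand side to reproduce the GP's derivative, so no solver, initial condition, or boundary condition is needed. The sketch below demonstrates the idea on a logistic ODE standing in for Richards' equation; the kernel, hyperparameters, and data are illustrative assumptions.

```python
# Gradient-matching parameter estimation: match the ODE right-hand side to
# the derivative of a GP posterior mean fitted to the observations.
import numpy as np
from scipy.optimize import minimize

def rbf(s, t, ell):
    d = s[:, None] - t[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_mean_and_grad(t_obs, y, ell=0.5, noise=1e-2):
    K = rbf(t_obs, t_obs, ell) + noise * np.eye(t_obs.size)
    alpha = np.linalg.solve(K, y)
    mean = rbf(t_obs, t_obs, ell) @ alpha
    d = t_obs[:, None] - t_obs[None, :]
    dK = -(d / ell ** 2) * rbf(t_obs, t_obs, ell)   # d/ds of the RBF kernel
    return mean, dK @ alpha                         # GP state and derivative

def agm_loss(theta, mean, grad):
    r, k = theta                                    # logistic ODE: y' = r y (1 - y/k)
    return np.sum((grad - r * mean * (1.0 - mean / k)) ** 2)

t_obs = np.linspace(0, 10, 50)
true = 5.0 / (1.0 + 4.0 * np.exp(-1.2 * t_obs))     # logistic, r = 1.2, K = 5
y = true + np.random.default_rng(5).normal(0, 0.05, t_obs.size)
mean, grad = gp_mean_and_grad(t_obs, y)
fit = minimize(agm_loss, x0=[0.5, 3.0], args=(mean, grad), method="Nelder-Mead")
print("estimated r, K:", fit.x)  # no ODE solve or initial/boundary conditions
```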
Method for hyperspectral imagery exploitation and pixel spectral unmixing
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2003-01-01
An efficient hybrid approach to exploiting hyperspectral imagery and unmixing spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. With the Kalman filter, the abundance estimate for a pixel can be obtained in a single iteration, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point of the genetic algorithm speeds up the evolution of the genetic algorithm. After obtaining the accurate abundance estimate, the procedure moves to the next pixel and uses the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel using the robust filter, and again uses the genetic algorithm to derive an accurate abundance estimate efficiently based on the robust filter solution. This iteration continues until all pixels in the hyperspectral image cube have been processed.
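A simplified sketch of the filtering half of this scheme follows: the previous pixel's abundance serves as the prior state and one Kalman update refines it for the current pixel. The genetic-algorithm refinement described in the patent is omitted, and the endmember matrix, noise levels, and synthetic cube are assumptions for illustration.

```python
# Pixel-to-pixel Kalman abundance updates with a random-walk state model.
import numpy as np

def kalman_unmix(cube, E, meas_var=1e-3, state_var=1e-2):
    n_pix, n_bands = cube.shape
    n_end = E.shape[1]
    a = np.full(n_end, 1.0 / n_end)          # initial abundance guess (GA in the patent)
    P = np.eye(n_end)
    R = meas_var * np.eye(n_bands)
    out = np.empty((n_pix, n_end))
    for i in range(n_pix):
        P = P + state_var * np.eye(n_end)    # random-walk state prediction
        S = E @ P @ E.T + R
        K = P @ E.T @ np.linalg.solve(S, np.eye(n_bands))
        a = a + K @ (cube[i] - E @ a)        # one-iteration abundance update
        P = (np.eye(n_end) - K @ E) @ P
        out[i] = np.clip(a, 0.0, None)       # enforce nonnegative abundances
    return out

rng = np.random.default_rng(6)
E = np.abs(rng.normal(size=(30, 3)))         # 30 bands, 3 endmembers (synthetic)
truth = rng.dirichlet(np.ones(3), size=200)  # abundances sum to one
cube = truth @ E.T + rng.normal(0, 0.01, (200, 30))
print("mean abundance error:", np.abs(kalman_unmix(cube, E) - truth).mean())
```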
Krill herd and piecewise-linear initialization algorithms for designing Takagi-Sugeno systems
NASA Astrophysics Data System (ADS)
Hodashinsky, I. A.; Filimonenko, I. V.; Sarin, K. S.
2017-07-01
A method for designing Takagi-Sugeno fuzzy systems is proposed which uses a piecewise-linear initialization algorithm for structure generation and a metaheuristic krill herd algorithm for parameter optimization. The obtained systems are tested against real data sets. The influence of some parameters of this algorithm on the approximation accuracy is analyzed. Estimates of the approximation accuracy and the number of fuzzy rules are compared with those of four known design methods.
Kim, Kyungmok; Lee, Jaewook
2016-01-01
This paper describes a sliding friction model for an electro-deposited coating. Reciprocating sliding tests using a ball-on-flat-plate test apparatus are performed to determine the evolution of the kinetic friction coefficient. The evolution of the friction coefficient is classified into the initial running-in period, steady-state sliding, and the transition to higher friction. The friction coefficient during the initial running-in period and steady-state sliding is expressed as a simple linear function. The friction coefficient in the transition to higher friction is described with a mathematical model derived from a Kachanov-type damage law. The model parameters are then estimated using the Markov Chain Monte Carlo (MCMC) approach. The friction coefficients estimated by the MCMC approach are found to be in good agreement with the measured ones. PMID:28773359
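A bare-bones random-walk Metropolis sampler illustrates how such model parameters can be estimated. The linear running-in/steady-state law mu(n) = a + b*n is used as the forward model here; the noise level, proposal scales, and data are assumptions rather than the paper's values, and the Kachanov-type transition model would be handled the same way with a different forward model.

```python
# Random-walk Metropolis sampling of the parameters of a friction model.
import numpy as np

def log_post(theta, n, mu, sigma=0.01):
    a, b = theta
    resid = mu - (a + b * n)
    return -0.5 * np.sum((resid / sigma) ** 2)     # Gaussian likelihood, flat priors

def metropolis(n, mu, steps=20000, scale=(2e-3, 1e-6)):
    rng = np.random.default_rng(7)
    theta = np.array([0.1, 0.0])
    lp = log_post(theta, n, mu)
    chain = np.empty((steps, 2))
    for i in range(steps):
        prop = theta + rng.normal(0.0, scale)
        lp_prop = log_post(prop, n, mu)
        if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain[steps // 2:]                      # discard burn-in

cycles = np.arange(0, 5000, 50.0)
rng = np.random.default_rng(8)
mu_meas = 0.12 + 2e-5 * cycles + rng.normal(0, 0.01, cycles.size)
chain = metropolis(cycles, mu_meas)
print("posterior means a, b:", chain.mean(axis=0))
```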
NASA Technical Reports Server (NTRS)
Lichten, S. M.
1991-01-01
Data from the Global Positioning System (GPS) were used to determine precise polar motion estimates. Conservatively calculated formal errors of the GPS least squares solution are approx. 10 cm. The GPS estimates agree with independently determined polar motion values from very long baseline interferometry (VLBI) at the 5 cm level. The data were obtained from a partial constellation of GPS satellites and from a sparse worldwide distribution of ground stations. The accuracy of the GPS estimates should continue to improve as more satellites and ground receivers become operational, and eventually a near real time GPS capability should be available. Because the GPS data are obtained and processed independently from the large radio antennas at the Deep Space Network (DSN), GPS estimation could provide very precise measurements of Earth orientation for calibration of deep space tracking data and could significantly relieve the ever growing burden on the DSN radio telescopes to provide Earth platform calibrations.
Estimating discharge in rivers using remotely sensed hydraulic information
Bjerklie, D.M.; Moller, D.; Smith, L.C.; Dingman, S.L.
2005-01-01
A methodology to estimate in-bank river discharge exclusively from remotely sensed hydraulic data is developed. Water-surface width and maximum channel width measured from 26 aerial and digital orthophotos of 17 single-channel rivers and 41 SAR images of three braided rivers were coupled with channel slope data obtained from topographic maps to estimate the discharge. The standard error of the discharge estimates was within a factor of 1.5-2 (50-100%) of the observed, with the mean estimate accuracy within 10%. This level of accuracy was achieved using calibration functions developed from observed discharge. The calibration functions use reach-specific geomorphic variables, the maximum channel width and the channel slope, to predict a correction factor. The calibration functions are related to channel type. Surface velocity and width information obtained from a single C-band image acquired by the Jet Propulsion Laboratory's (JPL's) AirSAR was also used to estimate discharge for a reach of the Missouri River. Without using a calibration function, the estimate accuracy was ±72% of the observed discharge, which is within the expected range of uncertainty for the method. However, using the observed velocity to calibrate the initial estimate improved the estimate accuracy to within ±10% of the observed. Remotely sensed discharge estimates with accuracies reported in this paper could be useful for regional or continental scale hydrologic studies, or in regions where ground-based data are lacking. © 2004 Elsevier B.V. All rights reserved.
PHYSICAL COAL-CLEANING/FLUE GAS DESULFURIZATION COMPUTER MODEL
The model consists of four programs: (1) one, initially developed by Battelle-Columbus Laboratories, obtained from Versar, Inc.; (2) one developed by TVA; and (3,4) two developed by TVA and Bechtel National, Inc. The model produces design performance criteria and estimates of capi...
Hybrid active contour model for inhomogeneous image segmentation with background estimation
NASA Astrophysics Data System (ADS)
Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun
2018-03-01
This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
NASA Astrophysics Data System (ADS)
Bhattacharjya, Rajib Kumar
2018-05-01
The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the optimal infiltration parameters already obtained are not destroyed during the subsequent generations of the genetic algorithm required to search for the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and has potential for field application.
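A compact single-stage sketch of the penalty idea is given below: a phi-index infiltration parameter and the unit-hydrograph ordinates are searched together, with a penalty term enforcing unit runoff volume. SciPy's differential evolution is used as a convenient stand-in for the paper's Genetic Algorithm, and the storm, true parameters, and penalty weight are invented for illustration.

```python
# Joint search over infiltration (phi index) and UH ordinates with a penalty.
import numpy as np
from scipy.optimize import differential_evolution

rain = np.array([0.0, 12.0, 25.0, 18.0, 6.0, 0.0])      # mm/h, synthetic storm
uh_true = np.array([0.05, 0.30, 0.35, 0.20, 0.10])      # true UH ordinates
phi_true = 4.0                                          # true phi index, mm/h
q_obs = np.convolve(np.maximum(rain - phi_true, 0.0), uh_true)

def objective(x, penalty=1e3):
    phi, uh = x[0], x[1:]
    q_sim = np.convolve(np.maximum(rain - phi, 0.0), uh)
    misfit = np.sum((q_obs - q_sim) ** 2)
    volume_violation = (uh.sum() - 1.0) ** 2            # UH must carry unit volume
    return misfit + penalty * volume_violation

bounds = [(0.0, 10.0)] + [(0.0, 1.0)] * uh_true.size    # phi plus nonneg ordinates
result = differential_evolution(objective, bounds, seed=9, maxiter=500, tol=1e-10)
print("phi estimate:", result.x[0])
print("UH ordinates:", np.round(result.x[1:], 3))
```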
Assessment of simulated high-dose partial-body irradiation by PCC-R assay.
Romero, Ivonne; García, Omar; Lamadrid, Ana I; Gregoire, Eric; González, Jorge E; Morales, Wilfredo; Martin, Cécile; Barquinero, Joan-Francesc; Voisin, Philippe
2013-09-01
Estimates of the dose and of the irradiated fraction of the body are important information for the primary medical response in case of a radiological accident. The PCC-R assay has been developed for high-dose estimation, but little attention has been given to its applicability to partial-body irradiations. In the present work we estimated the doses and the percentage of the irradiated fraction in simulated partial-body radiation exposures at high doses using the PCC-R assay. Peripheral whole blood of three healthy donors was exposed to doses from 0-20 Gy of ⁶⁰Co gamma radiation. To simulate partial-body irradiations, irradiated and non-irradiated blood was mixed to obtain proportions of irradiated blood from 10-90%. Lymphocyte cultures were treated with Colcemid and Calyculin-A before harvest. Conventional and triage scores were performed for each dose, proportion of irradiated blood, and donor. Papworth's u test was used to evaluate the PCC-R distribution per cell. A dose-response relationship was fitted according to the maximum likelihood method using the frequencies of PCC-R obtained from 100% irradiated blood. The dose to the partially irradiated blood was estimated using the Contaminated Poisson method. A new D₀ value of 10.9 Gy was calculated and used to estimate the initial fraction of irradiated cells. The results presented here indicate that by PCC-R it is possible to distinguish between simulated partial- and whole-body irradiations by the u test, to accurately estimate the dose from 10-20 Gy, and to estimate the initial fraction of irradiated cells in the interval from 10-90%.
Ecology and thermal inactivation of microbes in and on interplanetary space vehicle components
NASA Technical Reports Server (NTRS)
Reyes, A. L.; Campbell, J. E.
1976-01-01
The heat resistance of Bacillus subtilis var. niger was measured from 85 to 125 C using moisture levels of % RH from ≤0.001 to 100. Curves are presented which characterize thermal destruction using thermal death times defined as F values at a given combination of three moisture and temperature conditions. The times required at 100 C for reductions of 99.99% of the initial population were estimated for the three moisture conditions. The linear model (from which estimates of D are obtained) was satisfactory for estimating thermal death times (% RH ≤ 0.07) in the plate count range. Estimates based on observed thermal death times and D values for % RH = 100 diverged so that D values generally gave a more conservative estimate over the temperature range 90 to 125 C. Estimates of Z sub F and Z sub L ranged from 32.1 to 58.3 C for % RH of ≤0.07 and 100. A Z sub D = 30.0 was obtained for data observed at % RH ≤ 0.07.
Antonarakis, Alexander S; Saatchi, Sassan S; Chazdon, Robin L; Moorcroft, Paul R
2011-06-01
Insights into vegetation and aboveground biomass dynamics within terrestrial ecosystems have come almost exclusively from ground-based forest inventories that are limited in their spatial extent. Lidar and synthetic-aperture Radar are promising remote-sensing-based techniques for obtaining comprehensive measurements of forest structure at regional to global scales. In this study we investigate how Lidar-derived forest heights and Radar-derived aboveground biomass can be used to constrain the dynamics of the ED2 terrestrial biosphere model. Four-year simulations initialized with Lidar and Radar structure variables were compared against simulations initialized from forest-inventory data and output from a long-term potential-vegetation simulation. Both height and biomass initializations from Lidar and Radar measurements significantly improved the representation of forest structure within the model, eliminating the bias of too many large trees that arose in the potential-vegetation-initialized simulation. The Lidar and Radar initializations decreased the proportion of larger trees estimated by the potential vegetation by approximately 20-30%, matching the forest inventory. This resulted in improved predictions of ecosystem-scale carbon fluxes and structural dynamics compared to predictions from the potential-vegetation simulation. The Radar initialization produced biomass values that were 75% closer to the forest inventory, with the Lidar initializations producing canopy height values closest to the forest inventory. Net primary production values for the Radar and Lidar initializations were around 6-8% closer to the forest inventory. Correcting the Lidar and Radar initializations for forest composition resulted in improved biomass and basal-area dynamics as well as leaf-area index. Correcting the Lidar and Radar initializations for forest composition and fine-scale structure by combining the remote-sensing measurements with ground-based inventory data further improved predictions, suggesting that further improvements of structural and carbon-flux metrics will also depend on obtaining reliable estimates of forest composition and accurate representation of the fine-scale vertical and horizontal structure of plant canopies.
NASA Astrophysics Data System (ADS)
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical studies as well as experimental studies were performed to certify the integrity of the Bees Algorithm. The experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm appraised the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. Also, the results obtained from the Bees Algorithm were more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance for all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for various cases. The performance of LSQNONLIN strongly depends on the initial guess values so that, compared to the Genetic Algorithm, it can more accurately estimate the sFADE parameters by taking into consideration suitable initial guess values. To sum up, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
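For readers unfamiliar with the optimizer, a generic Bees Algorithm sketch (the standard scout/elite-site scheme, not the authors' code) is given below. The objective is a placeholder where the misfit between measured and sFADE-simulated concentrations would be evaluated, and all algorithm settings are illustrative.

```python
# Generic Bees Algorithm: scouts explore globally, recruited bees search
# shrinking neighborhoods around the best (and especially the elite) sites.
import numpy as np

def bees(objective, bounds, n=30, m=8, e=3, nep=6, nsp=3, ngh=0.1, iters=100):
    rng = np.random.default_rng(10)
    lo, hi = bounds[:, 0], bounds[:, 1]
    sites = rng.uniform(lo, hi, (n, lo.size))
    for it in range(iters):
        sites = sites[np.argsort([objective(s) for s in sites])]
        new = []
        for i in range(m):                           # local search around best sites
            recruits = nep if i < e else nsp         # more bees for elite sites
            patch = ngh * (1 - it / iters) * (hi - lo)   # shrinking neighborhood
            cand = np.clip(sites[i] + rng.uniform(-patch, patch, (recruits, lo.size)),
                           lo, hi)
            new.append(min(list(cand) + [sites[i]], key=objective))
        scouts = rng.uniform(lo, hi, (n - m, lo.size))   # global random scouts
        sites = np.vstack([new, scouts])
    return min(sites, key=objective)

def objective(params):
    # placeholder misfit; a real application would simulate the sFADE solution
    # for (alpha, dispersion, velocity) and compare it with observed data
    target = np.array([1.6, 0.05, 0.3])
    return np.sum((params - target) ** 2)

bounds = np.array([[1.0, 2.0], [0.0, 0.2], [0.0, 1.0]])
print("best parameters:", bees(objective, bounds))
```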
Demarest, Stefaan; Molenberghs, Geert; Van der Heyden, Johan; Gisle, Lydia; Van Oyen, Herman; de Waleffe, Sandrine; Van Hal, Guido
2017-11-01
Substitution of non-participating households is used in the Belgian Health Interview Survey (BHIS) as a method to obtain the predefined net sample size. Yet the possible effects of applying substitution on response rates and health estimates remain uncertain. In this article, the process of substitution and its impact on response rates and health estimates are assessed. The response rates (RR)-both at household and individual level-according to the sampling criteria were calculated for each stage of the substitution process, together with the individual accrual rate (AR). Unweighted and weighted health estimates were calculated before and after applying substitution. Of the 10,468 members of 4878 initial households, 5904 members (RRind: 56.4%) of 2707 households (RRhh: 55.5%) participated. For the three successive (matched) substitutes, the RR dropped to 45%. The composition of the net sample resembles that of the initial sample. Applying substitution did not produce any important distorting effects on the estimates. Applying substitution leads to an increase in non-participation, but does not impact the estimates.
Cropotova, Janna; Tylewicz, Urszula; Cocci, Emiliano; Romani, Santina; Dalla Rosa, Marco
2016-03-01
The aim of the present study was to estimate the quality deterioration of apple fillings during storage. Moreover, the potential of a novel time-saving and non-invasive method based on fluorescence microscopy for promptly detecting the initiation of non-enzymatic browning in fruit fillings was investigated. Apple filling samples were obtained by mixing different quantities of fruit and stabilizing agents (inulin, pectin and gellan gum), thermally processed and stored for 6 months. The preservation of antioxidant capacity (determined by the DPPH method) in apple fillings was indirectly correlated with the decrease in total polyphenol content, which varied from 34±22 to 56±17%, and the concomitant accumulation of 5-hydroxymethylfurfural (HMF), ranging from 3.4±0.1 to 8±1 mg/kg, in comparison to the initial apple puree values. The mean intensity of the fluorescence emission spectra of the apple filling samples and the initial apple puree was highly correlated (R(2)>0.95) with the HMF content, showing the good potential of the fluorescence microscopy method to estimate non-enzymatic browning. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Axelrad, Penina; Speed, Eden; Leitner, Jesse A. (Technical Monitor)
2002-01-01
This report summarizes the efforts to date in processing GPS measurements in High Earth Orbit (HEO) applications by the Colorado Center for Astrodynamics Research (CCAR). Two specific projects were conducted: initialization of the orbit propagation software, GEODE, using nominal orbital elements for the IMEX orbit, and processing of actual and simulated GPS data from the AMSAT satellite using a Doppler-only batch filter. CCAR has investigated a number of approaches for initialization of the GEODE orbit estimator with little a priori information. This document describes a batch solution approach that uses pseudorange or Doppler measurements collected over an orbital arc to compute an epoch state estimate. The algorithm is based on limited orbital element knowledge from which a coarse estimate of satellite position and velocity can be determined and used to initialize GEODE. This algorithm assumes knowledge of the nominal orbital elements (a, e, i, Ω, ω) and uses a search on the time of perigee passage (tau(sub p)) to estimate the host satellite's position within the orbit and the approximate receiver clock bias. Results of the method are shown for a simulation including large orbital uncertainties and measurement errors. In addition, CCAR has attempted to process GPS data from the AMSAT satellite to obtain an initial estimate of the orbit. Limited GPS data have been received to date, with few satellites tracked and no computed point solutions. Unknown variables in the received data have made computation of a precise orbit using the recovered pseudoranges difficult. This document describes the Doppler-only batch approach used to compute the AMSAT orbit. Both actual flight data from AMSAT and simulated data generated using the Satellite Tool Kit and Goddard Space Flight Center's Flight Simulator were processed. Results for each case and conclusions are presented.
USDA-ARS?s Scientific Manuscript database
The phylogenetic diversity of true morels (Morchella) in China was estimated by initially analyzing nuclear ribosomal internal transcribed spacer (ITS) rDNA sequences from 361 specimens collected in 21 provinces during the 2003-2011 growing seasons, together with six collections obtained on loan fro...
Quantum critical environment assisted quantum magnetometer
NASA Astrophysics Data System (ADS)
Jaseem, Noufal; Omkar, S.; Shaji, Anil
2018-04-01
A central qubit coupled to an Ising ring of N qubits, operating close to a critical point, is investigated as a potential precision quantum magnetometer for estimating an applied transverse magnetic field. We compute the quantum Fisher information for the central, probe qubit with the Ising chain initialized in its ground state or in a thermal state. The non-unitary evolution of the central qubit due to its interaction with the surrounding Ising ring enhances the accuracy of the magnetic field measurement. Near the critical point of the ring, Heisenberg-like scaling of the precision in estimating the magnetic field is obtained when the ring is initialized in its ground state. However, for finite temperatures, the Heisenberg scaling is limited to lower ranges of N values.
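Since the figure of merit throughout is the quantum Fisher information of the probe's density matrix, a small numerical helper may be useful. The sketch below computes the QFI from a finite-difference derivative of rho via the symmetric logarithmic derivative in rho's eigenbasis; the toy single-qubit probe used for the demonstration is an assumption, not the paper's Ising-ring model.

```python
# Quantum Fisher information F = 2 * sum_{ij} |<i|drho|j>|^2 / (l_i + l_j)
# over eigenpairs of rho with l_i + l_j > 0.
import numpy as np

def qfi(rho_of_b, b, eps=1e-6):
    rho = rho_of_b(b)
    drho = (rho_of_b(b + eps) - rho_of_b(b - eps)) / (2 * eps)
    vals, vecs = np.linalg.eigh(rho)
    d = vecs.conj().T @ drho @ vecs          # derivative in the eigenbasis of rho
    F = 0.0
    for i in range(len(vals)):
        for j in range(len(vals)):
            if vals[i] + vals[j] > 1e-12:    # skip the null subspace
                F += 2.0 * abs(d[i, j]) ** 2 / (vals[i] + vals[j])
    return F

def probe(b, t=1.0):
    # toy probe: a qubit rotated by the field b for time t (pure-state example)
    psi = np.array([np.cos(b * t / 2), -1j * np.sin(b * t / 2)])
    return np.outer(psi, psi.conj())

print("QFI:", qfi(probe, b=0.3))  # equals t^2 = 1 for this unitary encoding
```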
Determination of HART I Blade Structural Properties by Laboratory Testing
NASA Technical Reports Server (NTRS)
Jung, Sung N.; Lau, Benton H.
2012-01-01
The structural properties of Higher harmonic Aeroacoustic Rotor Test (HART I) blades were measured using the original set of blades tested in the German-Dutch Wind Tunnel (DNW) in 1994. The measurements include bending and torsion stiffness, geometric offsets, and mass and inertia properties of the blade. The measured properties were compared to the estimated values obtained initially from the blade manufacturer. The previously estimated blade properties showed consistently higher stiffness, up to 30 percent for the flap bending in the blade inboard root section.
Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation
NASA Astrophysics Data System (ADS)
Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien
2018-04-01
We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.
New Theory for Tsunami Propagation and Estimation of Tsunami Source Parameters
NASA Astrophysics Data System (ADS)
Mindlin, I. M.
2007-12-01
In numerical studies based on the shallow water equations for tsunami propagation, vertical accelerations and velocities within the sea water are neglected, so a tsunami is usually supposed to be produced by an initial free surface displacement in the initially still sea. In the present work, a new theory for tsunami propagation across the deep sea is discussed that accounts for the vertical accelerations and velocities. The theory is based on the solutions for the water surface displacement obtained in [Mindlin I.M. Integrodifferential equations in dynamics of a heavy layered liquid. Moscow: Nauka*Fizmatlit, 1996 (Russian)]. The solutions are valid when the horizontal dimensions of the initially disturbed area in the sea surface are much larger than the vertical displacement of the surface, which applies to earthquake tsunamis. It is shown that any tsunami is a combination of specific basic waves found analytically (not a superposition: the waves are nonlinear), and consequently the tsunami source (i.e., the initially disturbed body of water) can be described by the countable set of parameters involved in the combination. Thus the problem of theoretical reconstruction of a tsunami source is reduced to the problem of estimating the parameters. The tsunami source can be modelled approximately with the use of a finite number of the parameters. A two-parameter model is discussed thoroughly. A method is developed for estimating the model's parameters using the arrival times of the tsunami at certain locations, the maximum wave heights obtained from tide gauge records at the locations, and the distances between the earthquake's epicentre and each of the locations. In order to evaluate the practical use of the theory, four tsunamis of different magnitudes that occurred in Japan are considered. For each of the tsunamis, the tsunami energy (E below), the duration of the tsunami source formation T, the maximum water elevation in the wave originating area H, the mean radius of the area R, and the average magnitude of the sea surface displacement at the margin of the wave originating area h are estimated using tide gauge records. The results are compared (and, in the author's opinion, are in line) with the estimates known in the literature. Compared to the methods employed in the literature, there is no need to use bathymetry (and, consequently, refraction diagrams) for the estimations. The present paper follows closely earlier works [Mindlin I.M., 1996; Mindlin I.M. J. Appl. Math. Phys. (ZAMP), 2004, vol. 55, pp. 781-799] and adds to their theoretical results. Example: the Hiuganada earthquake of 1968, April 1, 9h 42m JST. A tsunami of moderate size arrived at the coast of the south-western part of Shikoku and the eastern part of Kyushu, Japan. The tsunami parameters listed above are estimated with the theory being discussed for two models of tsunami generation: (a) by an initial free surface displacement (the case for numerical studies): E=1.91×10^12 J, R=22 km, h=17.2 cm; and (b) by a sudden change in the velocity field of the initially still water: E=8.78×10^12 J, R=20.4 km, h=9.2 cm. These values are in line with known estimates [Soloviev S.L., Go Ch.N. Catalogue of tsunami in the West of Pacific Ocean. Moscow, 1974]: E=1.3×10^13 J (attributed to Hatori), E=(1.4-2.2)×10^12 J (attributed to Aida), R=21.2 km, h=20 cm [Hatory T., Bull. Earthq. Res. Inst., Tokyo Univ., 1969, vol. 47, pp. 55-63].
Also, estimates are obtained for values that could not be found based on shallow water wave theory: (a) H = 3.43 m and (b) H = 1.38 m, T = 16.4 s.
Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms
NASA Astrophysics Data System (ADS)
Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid
2017-09-01
The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). First, we discuss the proposed combination of bundle adjustment and GAs to improve 3D reconstruction efficiency and success. We use GAs to improve the fitness quality of the initial values used in the optimization problem, which surely increases the convergence rate. Extracted geometric constraints are first used to obtain an estimate of the focal length, which helps in the initialization step. Matching of homologous points and constraints is used to estimate the 3D model. In fact, our new method offers several advantages: reducing the number of estimated parameters in the optimization step, decreasing the number of images used, saving time and stabilizing good quality of the 3D results. In the end, without any prior information about the 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined beforehand, in an easy way. Various data and examples highlight the efficiency and competitiveness of the present approach.
The Integrated Sensor System Data Enhancement Package
NASA Technical Reports Server (NTRS)
Trankle, T. L.; Reed, W. B.; Rabin, U.; Vincent, J.
1983-01-01
The purpose of the Integrated Sensor System (ISS) Data Enhancement Package (DEP) is to improve the accuracy of the data obtained from in-flight tests performed on aircraft. The DEP is a microprocessor-based, flight-qualified electronics package that assimilates data from a Ring Laser Gyro (RLG) system, a standard NASA air data package, and other inputs. The DEP then processes these inputs in real time to obtain optimal estimates of the aircraft velocity, attitude, and altitude. These estimates can be passed to the flight crew, downlinked, and/or stored on a mass storage medium. The DEP is now being built for the NASA Dryden Flight Research Center. Completion is anticipated in early 1984. A primary use of the ISS/DEP will be the collection of quality data for the estimation of aircraft aerodynamic coefficients, including stability derivatives, using system identification methods. Initial anticipated applications will be on the AV-8B, F-14, and X-29 test aircraft.
Application of Biological Simulation Models in Estimating Feed Efficiency of Finishing Steers
USDA-ARS?s Scientific Manuscript database
Data on individual daily feed intake, bi-weekly BW, and carcass composition were obtained on 1,212 crossbred steers. Within-animal regressions of cumulative feed intake and BW on linear and quadratic days on feed were used to quantify initial and ending BW, average daily feed intake (DFI) and ADG o...
NASA Technical Reports Server (NTRS)
Vukovich, F. M. (Principal Investigator)
1982-01-01
Infrared and visible HCMM data were used to examine the potential application of these data to define initial and boundary conditions for mesoscale numerical models. Various boundary layer models were used to calculate the distribution of the surface heat flux, the specific humidity depression (the difference between the specific humidity in the air at approximately the 10 m level and the specific humidity at the ground), and the eddy viscosity in a 72 km by 72 km area centered about St. Louis, Missouri. Various aspects of the implications of the results for the meteorology of St. Louis are discussed. Overall, the results indicated that a reasonable estimate of the surface heat flux, urban albedo, ground temperature, and specific humidity depression can be obtained using HCMM satellite data. Values of the ground-specific humidity can be obtained if the distribution of the air-specific humidity is available. More research is required on estimating the absolute magnitude of the specific humidity depression because the calculations may be sensitive to model parameters.
Can Lagrangian models reproduce the migration time of European eel obtained from otolith analysis?
NASA Astrophysics Data System (ADS)
Rodríguez-Díaz, L.; Gómez-Gesteira, M.
2017-12-01
European eel can be found in the Bay of Biscay after a long migration across the Atlantic. The duration of migration, which takes place at the larval stage, is of primary importance for understanding eel ecology and, hence, its survival. This duration is still a controversial matter since it can range from 7 months to > 4 years depending on the method used to estimate it. The minimum migration duration estimated from our Lagrangian model is similar to the duration obtained from the microstructure of eel otoliths, which is typically on the order of 7-9 months. The Lagrangian model proved to be sensitive to different conditions such as spatial and time resolution, release depth, release area and initial distribution. In general, migration was faster when the depth was decreased and the resolution increased. On average, the fastest migration was obtained when only advective horizontal movement was considered. However, in some cases even faster migration was obtained when locally oriented random migration was taken into account.
Relaxation limit of a compressible gas-liquid model with well-reservoir interaction
NASA Astrophysics Data System (ADS)
Solem, Susanne; Evje, Steinar
2017-02-01
This paper deals with the relaxation limit of a two-phase compressible gas-liquid model which contains a pressure-dependent well-reservoir interaction term of the form q (P_r - P) where q>0 is the rate of the pressure-dependent influx/efflux of gas, P is the (unknown) wellbore pressure, and P_r is the (known) surrounding reservoir pressure. The model can be used to study gas-kick flow scenarios relevant for various wellbore operations. One extreme case is when the wellbore pressure P is largely dictated by the surrounding reservoir pressure P_r. Formally, this model is obtained by deriving the limiting system as the relaxation parameter q in the full model tends to infinity. The main purpose of this work is to understand to what extent this case can be represented by a well-defined mathematical model for a fixed global time T>0. Well-posedness of the full model has been obtained in Evje (SIAM J Math Anal 45(2):518-546, 2013). However, as the estimates for the full model are dependent on the relaxation parameter q, new estimates must be obtained for the equilibrium model to ensure existence of solutions. By means of appropriate a priori assumptions and some restrictions on the model parameters, necessary estimates (low order and higher order) are obtained. These estimates that depend on the global time T together with smallness assumptions on the initial data are then used to obtain existence of solutions in suitable Sobolev spaces.
Coldman, Andrew; Phillips, Norm
2013-07-09
There has been growing interest in the overdiagnosis of breast cancer as a result of mammography screening. We report incidence rates in British Columbia before and after the initiation of population screening and provide estimates of overdiagnosis. We obtained the numbers of breast cancer diagnoses from the BC Cancer Registry and screening histories from the Screening Mammography Program of BC for women aged 30-89 years between 1970 and 2009. We calculated age-specific rates of invasive breast cancer and ductal carcinoma in situ. We compared these rates by age, calendar period and screening participation. We obtained 2 estimates of overdiagnosis from cumulative cancer rates among women between the ages of 40 and 89 years: the first estimate compared participants with nonparticipants; the second estimate compared observed and predicted population rates. We calculated participation-based estimates of overdiagnosis to be 5.4% for invasive disease alone and 17.3% when ductal carcinoma in situ was included. The corresponding population-based estimates were -0.7% and 6.7%. Participants had higher rates of invasive cancer and ductal carcinoma in situ than nonparticipants but lower rates after screening stopped. Population incidence rates for invasive cancer increased after 1980; by 2009, they had returned to levels similar to those of the 1970s among women under 60 years of age but remained elevated among women 60-79 years old. Rates of ductal carcinoma in situ increased in all age groups. The extent of overdiagnosis of invasive cancer in our study population was modest and primarily occurred among women over the age of 60 years. However, overdiagnosis of ductal carcinoma in situ was elevated for all age groups. The estimation of overdiagnosis from observational data is complex and subject to many influences. Mammography screening in older women carries an increased risk of overdiagnosis, which should be considered in screening decisions.
NASA Astrophysics Data System (ADS)
Omar, Mahmoud A.; Badr El-Din, Khalid M.; Salem, Hesham; Abdelmageed, Osama H.
2018-03-01
A simple, selective and sensitive kinetic spectrophotometric method was described for the estimation of four phenolic sympathomimetic drugs, namely terbutaline sulfate, fenoterol hydrobromide, isoxsuprine hydrochloride and etilefrine hydrochloride. The method depends on the oxidation of the phenolic drugs with Folin-Ciocalteu reagent in the presence of sodium carbonate. The rate of color development at 747-760 nm was measured spectrophotometrically. The experimental parameters controlling the color development were fully studied and optimized. The reaction mechanism for color development was proposed. Calibration graphs for both the initial rate and fixed time methods were constructed; linear correlations were found in the general concentration ranges of 3.65 × 10⁻⁶-2.19 × 10⁻⁵ mol L⁻¹ and 2-24.0 μg mL⁻¹, with correlation coefficients in the ranges 0.9992-0.9999 and 0.9991-0.9998, respectively. The limits of detection and quantitation for the initial rate and fixed time methods were found to be in the general ranges 0.109-0.273 and 0.363-0.910, and 0.210-0.483 and 0.700-1.611 μg mL⁻¹, respectively. The developed method was validated according to ICH and USP 30-NF 25 guidelines. The suggested method was successfully applied to the estimation of these drugs in their commercial pharmaceutical formulations, and the recovery percentages ranged from 97.63% ± 1.37 to 100.17% ± 0.95 and from 97.29% ± 0.74 to 100.14% ± 0.81 for the initial rate and fixed time methods, respectively. The data obtained from the analysis of dosage forms were compared with those obtained by reported methods. Statistical analysis of these results indicated no significant variation in the accuracy and precision of the proposed and reported methods.
NASA Technical Reports Server (NTRS)
Cole, Stuart K.; Reeves, John D.; Williams-Byrd, Julie A.; Greenberg, Marc; Comstock, Doug; Olds, John R.; Wallace, Jon; DePasquale, Dominic; Schaffer, Mark
2013-01-01
NASA is investing in new technologies spanning 14 primary technology roadmap areas plus aeronautics. Understanding the cost of research and development of these technologies, and the time it takes to increase their maturity, is important to the support of ongoing and future NASA missions. Overall, technology estimating may help guide technology investment strategies, improve evaluation of technology affordability, and aid decision support. This research summarizes the framework development of a Technology Estimating process in which four technology roadmap areas were selected for study. The framework includes definitions of terms, a discussion of narrowing the focus from 14 NASA Technology Roadmap areas to four, and further refinement to technologies in the TRL range of 2 to 6. Included in this paper is a discussion of 20 unique technology parameters that were initially identified and evaluated, and subsequently reduced for use in characterizing these technologies. A discussion of the data acquisition effort and the criteria established for data quality is provided. Findings obtained during the research include the gaps identified and a description of a spreadsheet-based estimating tool initiated as part of the Technology Estimating process.
NASA Astrophysics Data System (ADS)
Jaiswal, P.; van Westen, C. J.; Jetten, V.
2011-06-01
A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the corresponding minimum, average, and maximum run-out distances and vulnerability values, thus obtaining a range of risk values per return period. The results indicate that the total annual minimum, average, and maximum losses are about US$ 44 000, US$ 136 000 and US$ 268 000, respectively. The maximum risk to population varies from 2.1 × 10⁻¹ yr⁻¹ for one or more lives lost to 6.0 × 10⁻² yr⁻¹ for 100 or more lives lost. The obtained results will provide a basis for planning risk reduction strategies in the Nilgiri area.
Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander
2015-01-01
Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies that are implemented in just a few locations. PMID:26173108
Tracking initially unresolved thrusting objects in 3D using a single stationary optical sensor
NASA Astrophysics Data System (ADS)
Lu, Qin; Bar-Shalom, Yaakov; Willett, Peter; Granström, Karl; Ben-Dov, R.; Milgrom, B.
2017-05-01
This paper considers the problem of estimating the 3D states of a salvo of thrusting/ballistic endo-atmospheric objects using 2D Cartesian measurements from the focal plane array (FPA) of a single fixed optical sensor. Since the initial separations in the FPA are smaller than the resolution of the sensor, this results in merged measurements in the FPA, compounding the usual false-alarm and missed-detection uncertainty. We present a two-step methodology. First, we assume a Wiener process acceleration (WPA) model for the motion of the images of the projectiles in the optical sensor's FPA. We model the merged measurements with increased variance, and thence employ a multi-Bernoulli (MB) filter using the 2D measurements in the FPA. Second, using the set of associated measurements for each confirmed MB track, we formulate a parameter estimation problem, whose maximum likelihood estimate can be obtained via numerical search and can be used for impact point prediction. Simulation results illustrate the performance of the proposed method.
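For reference, the discrete-time matrices of a (continuous) Wiener process acceleration model are standard (e.g., Bar-Shalom et al.); a sketch for one FPA coordinate follows, with the process-noise intensity q treated as a tuning assumption rather than the paper's value.

```python
import numpy as np

def wpa_matrices(T, q):
    """Discretized Wiener-process-acceleration model for one image coordinate.

    State is [position, velocity, acceleration]; T is the frame interval and
    q the continuous-time process-noise intensity (an assumed tuning input).
    """
    F = np.array([[1, T, T**2 / 2],
                  [0, 1, T],
                  [0, 0, 1]])
    Q = q * np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                      [T**4 / 8,  T**3 / 3, T**2 / 2],
                      [T**3 / 6,  T**2 / 2, T]])
    return F, Q  # state transition and process-noise covariance
```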
Analysis of positron lifetime spectra in polymers
NASA Technical Reports Server (NTRS)
Singh, Jag J.; Mall, Gerald H.; Sprinkle, Danny R.
1988-01-01
A new procedure for analyzing multicomponent positron lifetime spectra in polymers was developed. It requires initial estimates of the lifetimes and the intensities of various components, which are readily obtainable by a standard spectrum stripping process. These initial estimates, after convolution with the timing system resolution function, are then used as the inputs for a nonlinear least squares analysis to compute the estimates that conform to a global error minimization criterion. The convolution integral uses the full experimental resolution function, in contrast to the previous studies where analytical approximations of it were utilized. These concepts were incorporated into a generalized Computer Program for Analyzing Positron Lifetime Spectra (PAPLS) in polymers. Its validity was tested using several artificially generated data sets. These data sets were also analyzed using the widely used POSITRONFIT program. In almost all cases, the PAPLS program gives closer fit to the input values. The new procedure was applied to the analysis of several lifetime spectra measured in metal ion containing Epon-828 samples. The results are described.
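A minimal sketch of the fitting idea described here, assuming a lifetime spectrum and a measured resolution function sampled on the same uniform time grid; the layout and normalization are simplifications for illustration, not the PAPLS implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, t, resolution):
    # params = [I1, tau1, I2, tau2, ...]: component intensities and lifetimes.
    spec = np.zeros_like(t)
    for I, tau in params.reshape(-1, 2):
        spec += I * np.exp(-t / tau)
    # Convolve with the full measured resolution function (peak assumed
    # near the start of the grid), as the abstract specifies.
    full = np.convolve(spec, resolution, mode="full")[: len(t)]
    return full / full.sum()

def fit_spectrum(counts, t, resolution, p0):
    # p0: initial estimates from spectrum stripping; refined by
    # nonlinear least squares (a global error-minimization criterion).
    resid = lambda p: model(p, t, resolution) - counts / np.sum(counts)
    return least_squares(resid, p0).x
```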
NASA Technical Reports Server (NTRS)
Dehoff, R. L.; Reed, W. B.; Trankle, T. L.
1977-01-01
The development and validation of a Spey engine model is described. An analysis of the dynamical interactions involved in the propulsion unit is presented. The model was reduced to contain only significant effects and was used, in conjunction with flight data obtained from an augmentor wing jet STOL research aircraft, to develop initial estimates of parameters in the system. The theoretical background employed in estimating the parameters is outlined. The software package developed for processing the flight data is described. Results are summarized.
Numerical solution of the stochastic parabolic equation with the dependent operator coefficient
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashyralyev, Allaberen; Department of Mathematics, ITTU, Ashgabat; Okur, Ulker
2015-09-18
In the present paper, a single-step implicit difference scheme for the numerical solution of the stochastic parabolic equation with the dependent operator coefficient is presented. A theorem on convergence estimates for the solution of this difference scheme is established. In applications, this abstract result permits us to obtain convergence estimates for the solutions of difference schemes for the numerical solution of initial boundary value problems for parabolic equations. The theoretical statements for the solution of this difference scheme are supported by the results of numerical experiments.
NASA Astrophysics Data System (ADS)
Li, Lu; Narayanan, Ramakrishnan; Miller, Steve; Shen, Feimo; Barqawi, Al B.; Crawford, E. David; Suri, Jasjit S.
2008-02-01
Real-time knowledge of the capsule volume of an organ provides a valuable clinical tool for 3D biopsy applications. It is challenging to estimate this capsule volume in real time due to the presence of speckle, shadow artifacts, partial volume effects and patient motion during image scans, which are all inherent in medical ultrasound imaging. The volumetric ultrasound prostate images are sliced in a rotational manner every three degrees. The automated segmentation method employs a shape model, obtained from training data, to delineate the middle slices of the volumetric prostate images. A "DDC" algorithm is then applied to the rest of the images, starting from the initial contour obtained. The volume of the prostate is estimated from the segmentation results. Our database consists of 36 prostate volumes acquired on a Philips ultrasound machine with a side-fire transrectal ultrasound (TRUS) probe. We compare our automated method with the semi-automated approach. The mean volumes using the semi-automated and fully automated techniques were 35.16 cc and 34.86 cc, with errors of 7.3% and 7.6%, respectively, compared to the volume obtained from the human-estimated (ideal) boundary. The overall system, developed using Microsoft Visual C++, is real-time and accurate.
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics at the small political division level.
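The stratum variances feed an optimum (Neyman) allocation; a minimal sketch under the usual textbook formula, with hypothetical stratum sizes and variances standing in for the historical crop statistics.

```python
def neyman_allocation(n_total, sizes, variances):
    """Optimum (Neyman) allocation: n_h proportional to N_h * S_h.

    sizes:     stratum sizes N_h (e.g., wheat acreage per stratum)
    variances: initial stratum variance estimates S_h^2
    Note: rounding means the result may not sum exactly to n_total.
    """
    weights = [N * v ** 0.5 for N, v in zip(sizes, variances)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Example: 100 sample segments split across three hypothetical strata.
print(neyman_allocation(100, sizes=[500, 300, 200], variances=[4.0, 9.0, 1.0]))
```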
NASA Astrophysics Data System (ADS)
Deng, Shuxian; Ge, Xinxin
2017-10-01
Considering the non-Newtonian fluid equation of incompressible porous media, and using the properties of operator semigroups and measure spaces together with the squeezing principle, Fourier analysis and a priori estimates in the measure space are applied to discuss the well-posedness of the solution of the incompressible porous media equation, its asymptotic behavior and its topological properties. Through the diffusion regularization method and the compactness-limit method, we study the overall decay rate of the solution of the equation in a certain space under suitable conditions on the initial value. A decay estimate for the solution of the incompressible seepage equation is obtained, and the asymptotic behavior of the solution is derived using the double regularization model and the Duhamel principle.
Shape and Spatially-Varying Reflectance Estimation from Virtual Exemplars.
Hui, Zhuo; Sankaranarayanan, Aswin C
2017-10-01
This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed view-point and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This assumption enables a per-pixel surface normal and BRDF estimation framework that is computationally tractable and requires no initialization in spite of the underlying problem being non-convex. Our estimation framework first solves for the surface normal at each pixel using a variant of example-based photometric stereo. We design an efficient multi-scale search strategy for estimating the surface normal and subsequently, refine this estimate using a gradient descent procedure. Given the surface normal estimate, we solve for the spatially-varying BRDF by constraining the BRDF at each pixel to be in the span of the BRDF dictionary; here, we use additional priors to further regularize the solution. A hallmark of our approach is that it does not require iterative optimization techniques nor the need for careful initialization, both of which are endemic to most state-of-the-art techniques. We showcase the performance of our technique on a wide range of simulated and real scenes where we outperform competing methods.
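A simplified sketch of the per-pixel search, assuming a hypothetical `dictionary_images` renderer that returns the intensity each dictionary BRDF would produce for a candidate normal under the known lights; the paper's multi-scale search and gradient-descent refinement are omitted.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_pixel(I, candidate_normals, dictionary_images):
    # I: (n_lights,) observed intensities at one pixel.
    # dictionary_images: assumed callable mapping a candidate normal n to the
    # (n_lights, n_brdfs) matrix of intensities each dictionary BRDF would
    # produce under the known lights.
    best = (np.inf, None, None)
    for n in candidate_normals:
        D = dictionary_images(n)
        coeffs, resid = nnls(D, I)  # BRDF constrained to the non-negative span
        if resid < best[0]:
            best = (resid, n, coeffs)
    return best[1], best[2]  # normal estimate and BRDF dictionary abundances
```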
Zee, Jarcy; Xie, Sharon X.
2015-01-01
When a true survival endpoint cannot be assessed for some subjects, an alternative endpoint that measures the true endpoint with error may be collected, which often occurs when obtaining the true endpoint is too invasive or costly. We develop an estimated likelihood function for the situation where we have both uncertain endpoints for all participants and true endpoints for only a subset of participants. We propose a nonparametric maximum estimated likelihood estimator of the discrete survival function of time to the true endpoint. We show that the proposed estimator is consistent and asymptotically normal. We demonstrate through extensive simulations that the proposed estimator has little bias compared to the naïve Kaplan-Meier survival function estimator, which uses only uncertain endpoints, and is more efficient under moderate missingness than the complete-case Kaplan-Meier survival function estimator, which uses only available true endpoints. Finally, we apply the proposed method to a dataset from the Alzheimer's Disease Neuroimaging Initiative to estimate the risk of developing Alzheimer's disease. PMID:25916510
The critical proportion of immune individuals needed to control hepatitis B
NASA Astrophysics Data System (ADS)
Ospina, Juan; Hincapié-Palacio, Doracelly
2016-05-01
We estimate the critical proportion of immunity (Pc) needed to control hepatitis B in Medellin, Colombia, based on a random population survey of 2077 individuals 6-64 years of age. The force of infection (Fi) was estimated from empirical data on susceptibility by age, S(a), assuming a quadratic expression. Parameters were estimated by fitting the data with nonlinear regression. Fi was defined by -(dS(a)/da)/S(a), and according to the form of the empirical curve we assume the quadratic expression S(a) = Ea² + Ba + C. The accumulated Fi by age then has the explicit expression F(a) = -a(Ea + B)/C. The average age at infection A is obtained as A = L + EL³/(3C) + BL²/(2C), and the basic reproductive number R0 as R0 = 1 + 6C/(6C + 2EL² + 3BL). From the last result we obtain Pc = 6C/(12C + 2EL² + 3BL). Numerical simulations were performed with the age-susceptibility proportions and initial values (a=0.02, b=20, c=100), obtaining an adjusted coefficient of multiple determination of 64.83%. From the best estimate, the algebraic expressions for S(a) and Fi were derived. Using the result for Fi, we obtain A = 30 and L = 85; R0 95% CI: 1.42-1.64 and Pc: 0-0.29. These results indicate that, in the worst case, at least 30% of susceptible individuals should be immune to maintain control of the disease. Similar results were obtained by sex and residential area.
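The closed-form expressions above translate directly into code; a minimal sketch, with E, B, C taken from the fitted quadratic and L the maximum age considered (85 years in the study).

```python
def hep_b_quantities(E, B, C, L):
    """Evaluate the closed-form expressions quoted in the abstract.

    S(a) = E*a**2 + B*a + C is the fitted susceptibility-by-age curve;
    the coefficients are assumed to come from the nonlinear regression fit.
    """
    A = L + E * L**3 / (3 * C) + B * L**2 / (2 * C)      # average age at infection
    R0 = 1 + 6 * C / (6 * C + 2 * E * L**2 + 3 * B * L)  # basic reproductive number
    Pc = 6 * C / (12 * C + 2 * E * L**2 + 3 * B * L)     # critical immune proportion
    return A, R0, Pc
```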
Image registration based on subpixel localization and Cauchy-Schwarz divergence
NASA Astrophysics Data System (ADS)
Ge, Yongxin; Yang, Dan; Zhang, Xiaohong; Lu, Jiwen
2010-07-01
We define a new matching metric, the corner Cauchy-Schwarz divergence (CCSD), and present a new approach to image registration based on the proposed CCSD and subpixel localization. First, we detect the corners in an image with a multiscale Harris operator and take them as initial interest points. Then, a subpixel localization technique is applied to determine the locations of the corners and to eliminate false and unstable corners. After that, the CCSD is computed to obtain the initial matching corners. Finally, we use random sample consensus to robustly estimate the parameters based on the initial matching. The experimental results demonstrate that the proposed algorithm performs well in terms of both accuracy and efficiency.
A state space based approach to localizing single molecules from multi-emitter images.
Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J
2017-01-28
Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.
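The paper's 2D, multi-order construction is involved, but a one-dimensional analogue (an ERA/Kung-style balanced realization from the SVD of a Hankel matrix) shows how pole locations fall out of the decomposition; this is a sketch of the general technique, not the authors' algorithm.

```python
import numpy as np

def hankel_poles(y, order):
    """Estimate pole locations of a 1D signal via a balanced state-space
    realization from the SVD of its Hankel matrix (ERA / Kung's method)."""
    m = len(y) // 2
    H0 = np.array([[y[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[y[i + j + 1] for j in range(m)] for i in range(m)])  # shifted
    U, s, Vt = np.linalg.svd(H0)
    Ur, sr, Vr = U[:, :order], np.sqrt(s[:order]), Vt[:order].T
    # Balanced realization of the state matrix: A = S^(-1/2) U^T H1 V S^(-1/2).
    A = np.diag(1 / sr) @ Ur.T @ H1 @ Vr @ np.diag(1 / sr)
    return np.linalg.eigvals(A)  # poles; peak locations follow from these
```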
NASA Astrophysics Data System (ADS)
Kwon, Young-Sam; Li, Fucai
2018-03-01
In this paper we study the incompressible limit of the degenerate quantum compressible Navier-Stokes equations in a periodic domain T3 and the whole space R3 with general initial data. In the periodic case, by applying the refined relative entropy method and carrying out a detailed analysis of the oscillations of the velocity, we prove rigorously that the gradient part of the weak solutions (velocity) of the degenerate quantum compressible Navier-Stokes equations converges to the strong solution of the incompressible Navier-Stokes equations. Our results considerably improve those obtained by Yang, Ju and Yang [25], where only the case of well-prepared initial data is considered. For the whole space case, thanks to the Strichartz estimates for linear wave equations, we obtain the convergence of the weak solutions of the degenerate quantum compressible Navier-Stokes equations to the strong solution of the incompressible Navier-Stokes/Euler equations with a linear damping term. Moreover, convergence rates are also given.
Multidimensional density shaping by sigmoids.
Roth, Z; Baram, Y
1996-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
Chen, Shuo-Tsung; Wang, Tzung-Dau; Lee, Wen-Jeng; Huang, Tsai-Wei; Hung, Pei-Kai; Wei, Cheng-Yu; Chen, Chung-Ming; Kung, Woon-Man
2015-01-01
Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study proposes a novel segmentation method for coronary arteries that allows automatic and accurate detection of coronary pathologies. The proposed segmentation method comprises two parts. First, 3D region growing is applied to give an initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, is detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation can accurately detect the coronary arteries. Each subbranch of the segmented coronary arteries is segmented correctly by the proposed method. The obtained results are compared with ground-truth values obtained from commercial software from GE Healthcare and with the level-set method proposed by Yang et al. (2007). Results indicate that the proposed method performs better in the efficiency analysis. Based on the initial segmentation of the coronary arteries obtained from 3D region growing, one-level 3D DWT and the 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
Using Advice from Multiple Sources to Revise and Improve Judgments
ERIC Educational Resources Information Center
Yaniv, Ilan; Milyavsky, Maxim
2007-01-01
How might people revise their opinions on the basis of multiple pieces of advice? What sort of gains could be obtained from rules for using advice? In the present studies judges first provided their initial estimates for a series of questions; next they were presented with several (2, 4, or 8) opinions from an ecological pool of advisory estimates…
Managing watersheds to change water quality: lessons learned from the NIFA-CEAP watershed studies
Deanna Osmond; M. Arabi; D. Hoag; G. Jennings; D. Line; A. Luloff; M. McFarland; D. Meals; A. Sharpley
2016-01-01
The Conservation Effects Assessment Project (CEAP) is an USDA initiative that involves the Agricultural Research Service, the National Institute for Food and Agriculture (NIFA), and the Natural Resources Conservation Service. The overall goal of CEAP is to provide scientifically credible estimates of the environmental benefits obtained from USDA conservation programs...
Private School Enrollment in an Italian Region after Implementing a Change in the Voucher Policy
ERIC Educational Resources Information Center
Agasisti, Tommaso; Barbieri, Gianna; Murtinu, Samuele
2015-01-01
This article estimates the effect of an administrative change in a voucher policy implemented by an Italian Regional government. The voucher was initiated in 2000, and is intended to help families that want to enroll their children in private schools. In 2008, the policy was changed, making the administrative procedure required for obtaining the…
ERIC Educational Resources Information Center
Toutkoushian, Robert K.; Hossler, Don; DesJardins, Stephen L.; McCall, Brian; Gonzalez Canche, Manuel S.
2015-01-01
Our study adds to prior work on Indiana's Twenty-first Century Scholars (TFCS) program by focusing on whether participating in--rather than completing--the program affects the likelihood of students going to college and where they initially enrolled. We first employ binary and multinomial logistic regression to obtain estimates of the impact of the…
Physics-based coastal current tomographic tracking using a Kalman filter.
Wang, Tongchen; Zhang, Ying; Yang, T C; Chen, Huifang; Xu, Wen
2018-05-01
Ocean acoustic tomography uses measurements of two-way travel-time differences between nodes deployed on the perimeter of a survey area to invert for, and map, the ocean current inside the area. Data at different times can be related using a Kalman filter, and given an ocean circulation model, one can in principle nowcast and even forecast the current distribution given an initial distribution and/or the travel-time difference data on the boundary. However, an ocean circulation model requires many inputs (many of them often not available) and is impractical for estimation of the current field. A simplified form of the discretized Navier-Stokes equation is used to show that the future velocity state is just a weighted spatial average of the current state. These weights could be obtained from an ocean circulation model, but here, in a data-driven approach, auto-regressive methods are used to obtain the time- and space-dependent weights from the data. It is shown, based on simulated data, that the current field tracked using a Kalman filter (with an arbitrary initial condition) is more accurate than that estimated by standard methods where data at different times are treated independently. Real data are also examined.
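A generic predict/update cycle of the kind described, where the state transition F holds the data-driven spatial-average (auto-regressive) weights; all matrices here are placeholders standing in for the paper's calibrated choices, not its actual implementation.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One Kalman predict/update cycle for the gridded current-velocity state x.

    F: assumed AR spatial-average weights (future state = weighted average
       of the current state), H: maps grid currents to travel-time differences,
    Q, R: process and measurement noise covariances (assumptions).
    """
    # Predict.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the boundary travel-time-difference measurements z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```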
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuncarayakti, Hanindyo; Maeda, Keiichi; Doi, Mamoru
Integral field spectroscopy of 11 Type Ib/Ic supernova (SN Ib/Ic) explosion sites in nearby galaxies has been obtained using UH88/SNIFS and Gemini-N/GMOS. The use of integral field spectroscopy enables us to obtain both spatial and spectral information about the explosion site, enabling the identification of the parent stellar population of the SN progenitor star. The spectrum of the parent population provides a metallicity determination via the strong-line method and an age estimate obtained via comparison with simple stellar population models. We adopt this information as the metallicity and age of the SN progenitor, under the assumption that it was coeval with the parent stellar population. The age of the star corresponds to its lifetime, which in turn gives an estimate of its initial mass. With this method we were able to determine both the metallicity and initial (zero-age main sequence) mass of the progenitor stars of SNe Ib and Ic. We found that on average SN Ic explosion sites are more metal-rich and younger than SN Ib sites. The initial mass of the progenitors derived from the parent stellar population age suggests that SNe Ic have more massive progenitors than SNe Ib. In addition, we also found indications that some of our SN progenitors are less massive than ~25 M_Sun, indicating that they may have been stars in a close binary system that lost their outer envelope via binary interactions to produce SNe Ib/Ic, instead of single Wolf-Rayet stars. These findings support the current suggestions that both binary and single progenitor channels are in effect in producing SNe Ib/Ic. This work also demonstrates the power of integral field spectroscopy in investigating SN environments and active star-forming regions.
The estimation of probable maximum precipitation: the case of Catalonia.
Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel
2008-12-01
A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values for all 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km² spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelengths longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation distribution over Catalonia.
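A frequency-factor PMP estimate of this type reduces to a one-line statistic per station (in the Hershfield style); a sketch, with the enveloping factor k_m as an assumed input read from a regional enveloping curve such as the one developed in the paper.

```python
import numpy as np

def pmp_frequency_factor(annual_max_daily, k_m):
    """Frequency-factor statistical PMP estimate for one station.

    annual_max_daily: series of annual maximum daily rainfall (mm)
    k_m: enveloping frequency factor for this station (assumed input)
    """
    x = np.asarray(annual_max_daily, dtype=float)
    return x.mean() + k_m * x.std(ddof=1)  # PMP in mm

# Example with made-up data and an assumed factor.
print(pmp_frequency_factor([62, 80, 55, 94, 71, 103, 66], k_m=8.0))
```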
Rossi, Carmine; Raboud, Janet; Walmsley, Sharon; Cooper, Curtis; Antoniou, Tony; Burchell, Ann N; Hull, Mark; Chia, Jason; Hogg, Robert S; Moodie, Erica E M; Klein, Marina B
2017-04-04
Combination antiretroviral therapy (cART) has reduced mortality from AIDS-related illnesses, and chronic comorbidities have become prevalent among HIV-infected patients. We examined the association between hepatitis C virus (HCV) co-infection and chronic kidney disease (CKD) among patients initiating modern antiretroviral therapy. Data were obtained from the Canadian HIV Observational Cohort for individuals initiating cART from 2000 to 2012. Incident CKD was defined as two consecutive serum creatinine-based estimated glomerular filtration rate (eGFR) measurements <60 mL/min/1.73 m² obtained ≥3 months apart. CKD incidence rates after cART initiation were compared between HCV co-infected and HIV mono-infected patients. Hazard ratios (HRs) and 95% confidence intervals (CIs) were estimated using multivariable Cox regression. We included 2595 HIV-infected patients with eGFR >60 mL/min/1.73 m² at cART initiation, of whom 19% were HCV co-infected. One hundred and fifty patients developed CKD during 10,903 person-years of follow-up (PYFU). The CKD incidence rate was higher among co-infected than HIV mono-infected patients (26.0 per 1000 PYFU vs. 10.7 per 1000 PYFU). After adjusting for demographics, virologic parameters and traditional CKD risk factors, HCV co-infection was associated with a significantly shorter time to incident CKD (HR 1.97; 95% CI: 1.33, 2.90). Additional factors associated with incident CKD were female sex, increasing age after 40 years, lower baseline eGFR below 100 mL/min/1.73 m², increasing HIV viral load and cumulative exposure to tenofovir and lopinavir. HCV co-infection was associated with an increased risk of incident CKD among HIV-infected patients initiating cART. HCV-HIV co-infected patients should be monitored for kidney disease and may benefit from available HCV treatments.
NASA Astrophysics Data System (ADS)
Witzany, V.; Jefremov, P.
2018-06-01
Context. When a black hole is accreting well below the Eddington rate, a geometrically thick, radiatively inefficient state of the accretion disk is established. There is a limited number of closed-form physical solutions for geometrically thick (nonselfgravitating) toroidal equilibria of perfect fluids orbiting a spinning black hole, and these are predominantly used as initial conditions for simulations of accretion in the aforementioned mode. However, different initial configurations might lead to different results and thus observational predictions drawn from such simulations. Aims: We aim to expand the known equilibria by a number of closed multiparametric solutions with various possibilities of rotation curves and geometric shapes. Then, we ask whether choosing these as initial conditions influences the onset of accretion and the asymptotic state of the disk. Methods: We have investigated a set of examples from the derived solutions in detail; we analytically estimate the growth of the magneto-rotational instability (MRI) from their rotation curves and evolve the analytically obtained tori using the 2D magneto-hydrodynamical code HARM. Properties of the evolutions are then studied through the mass, energy, and angular-momentum accretion rates. Results: The rotation curve has a decisive role in the numerical onset of accretion in accordance with our analytical MRI estimates: in the first few orbital periods, the average accretion rate is linearly proportional to the initial MRI rate in the toroids. The final state obtained from any initial condition within the studied class after an evolution of ten or more orbital periods is mostly qualitatively identical and the quantitative properties vary within a single order of magnitude. The average values of the energy of the accreted fluid have an irregular dependency on initial data, and in some cases fluid with energies many times its rest mass is systematically accreted.
Efficient visual grasping alignment for cylinders
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.
Cole, Stephen R.; Lau, Bryan; Eron, Joseph J.; Brookhart, M. Alan; Kitahata, Mari M.; Martin, Jeffrey N.; Mathews, William C.; Mugavero, Michael J.; Cole, Stephen R.; Brookhart, M. Alan; Lau, Bryan; Eron, Joseph J.; Kitahata, Mari M.; Martin, Jeffrey N.; Mathews, William C.; Mugavero, Michael J.
2015-01-01
There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. PMID:24966220
The distribution of rotational velocities for low-mass stars in the Pleiades
NASA Technical Reports Server (NTRS)
Stauffer, John R.; Hartmann, Lee W.
1987-01-01
The available spectral type and color data for late-type Pleiades members have been reanalyzed, and new reddening estimates are obtained. New photometry for a small number of stars and a compilation of H-alpha equivalent widths for Pleiades dwarfs are presented. These data are used to examine the location of the rapid rotators in color-magnitude diagrams and the correlation between chromospheric activity and rotation. It is shown that the wide range of angular momenta exhibited by Pleiades K and M dwarfs is not necessarily produced by a combination of main-sequence spin-downs and a large age spread; it can also result from a plausible spread in initial angular momenta, coupled with initial main-sequence spin-down rates that are only weakly dependent on rotation. The new reddening estimates confirm Breger's (1985) finding of large extinctions confined to a small region in the southern portion of the Merope nebula.
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
Ang, Rebecca P; Chong, Wan Har; Huan, Vivien S; Yeo, Lay See
2007-01-01
This article reports the development and initial validation of scores obtained from the Adolescent Concerns Measure (ACM), a scale which assesses concerns of Asian adolescent students. In Study 1, findings from exploratory factor analysis using 619 adolescents suggested a 24-item scale with four correlated factors--Family Concerns (9 items), Peer Concerns (5 items), Personal Concerns (6 items), and School Concerns (4 items). Initial estimates of convergent validity for ACM scores were also reported. The four-factor structure of ACM scores derived from Study 1 was confirmed via confirmatory factor analysis in Study 2 using a two-fold cross-validation procedure with a separate sample of 811 adolescents. Support was found for both the multidimensional and hierarchical models of adolescent concerns using the ACM. Internal consistency and test-retest reliability estimates were adequate for research purposes. ACM scores show promise as a reliable and potentially valid measure of Asian adolescents' concerns.
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach, and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs and is computationally efficient, although it pays a price in some loss of estimation efficiency. However, the method offers an alternative approach when the exact likelihood approach fails due to model complexity and a high-dimensional parameter space, and it can also serve as a way to obtain starting estimates for more accurate estimation methods. In addition, the proposed method does not need the initial values of the state variables to be specified and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations, and the methodology is also illustrated with an application to an AIDS clinical data set.
Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling
NASA Astrophysics Data System (ADS)
Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.
2017-12-01
It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable to rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features about the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.
A two-step super-Gaussian independent component analysis approach for fMRI data.
Ge, Ruiyang; Yao, Li; Zhang, Hang; Long, Zhiying
2015-09-01
Independent component analysis (ICA) has been widely applied to functional magnetic resonance imaging (fMRI) data analysis. Although ICA assumes that the sources underlying the data are statistically independent, it usually ignores additional properties of the sources, such as sparsity. In this study, we propose a two-step super-Gaussian ICA (2SGICA) method that incorporates the sparse prior of the sources into the ICA model. 2SGICA uses the super-Gaussian ICA (SGICA) algorithm, based on a simplified Lewicki-Sejnowski model, to obtain the initial source estimate in the first step. Using a kernel estimator technique, the source density is acquired and fitted to a Laplacian function based on the initial source estimates. The fitted Laplacian prior is used for each source in the second SGICA step. Moreover, the automatic target generation process for initial value generation is used in 2SGICA to guarantee the stability of the algorithm. An adaptive step size selection criterion is also implemented in the proposed algorithm. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of 2SGICA and made a performance comparison between Infomax ICA, FastICA, mean field ICA (MFICA) with a Laplacian prior, sparse online dictionary learning (ODL), SGICA and 2SGICA. Both the simulated and real fMRI experiments showed that 2SGICA was the most robust to noise and had the best spatial detection power and time course estimation among the six methods. Copyright © 2015. Published by Elsevier Inc.
Some New Mathematical Methods for Variational Objective Analysis
NASA Technical Reports Server (NTRS)
Wahba, G.; Johnson, D. R.
1984-01-01
New and/or improved variational methods for simultaneously combining forecast, heterogeneous observational data, a priori climatology, and physics to obtain improved estimates of the initial state of the atmosphere for the purpose of numerical weather prediction are developed. Cross validated spline methods are applied to atmospheric data for the purpose of improved description and analysis of atmospheric phenomena such as the tropopause and frontal boundary surfaces.
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Karaman, Abdullah
2017-04-01
We estimated transmissivity and storage coefficient values from single-well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face contains a semi-analytical function that is not suitable for conventional inversion schemes because the partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model makes it difficult to obtain an initial model that leads to stable convergence. PSO appears to obtain a reliable solution that produces a reasonable fit between the water-level data and the model function response. Optimization methods are used to find optimum conditions, namely the minimum or maximum of a given objective function under some criteria. Unlike PSO, traditional non-linear optimization methods have long been used for many hydrogeologic and geophysical engineering problems. These methods suffer from difficulties such as dependence on the initial model, the evaluation of partial derivatives required when linearizing the model, and trapping at local optima. Recently, PSO has become a prominent modern global optimization method; inspired by the social behaviour of bird swarms, it appears to be a reliable and powerful algorithm for complex engineering applications. Because PSO does not depend on an initial model and is a derivative-free stochastic process, it is capable of searching all possible solutions in the model space, around both local and global optimum points.
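For concreteness, a minimal generic PSO loop of the kind used here, with the inertia and acceleration coefficients (w, c1, c2) as assumed defaults rather than the authors' settings; the objective would map a parameter vector such as [transmissivity, storage coefficient] to the data misfit.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (a generic sketch, not the authors' code).

    bounds: (n_dims, 2) array of lower/upper parameter bounds.
    """
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Pull each particle toward its own best and the swarm's best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()
```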
Hybrid Weighted Minimum Norm Method: A New Method Based on LORETA to Solve the EEG Inverse Problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This paper puts forward a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons are prone to activate synchronously; second, the distribution of the source space is sparse; third, the active intensity of the sources is highly centralized. We take this prior knowledge as a prerequisite condition for developing the EEG inverse solution, without assuming other characteristics of the solution, to realize the most common 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA, a low-resolution method that emphasizes 'localization', and FOCUSS, a high-resolution method that emphasizes 'separability'. The method remains within the framework of the weighted minimum norm method. The keystone is to construct a weighting matrix that draws on the existing smoothness operator, a competition mechanism, and a learning algorithm. The basic processing is to obtain an initial estimate of the solution first, then construct a new estimate using the information in the initial solution, repeating this process until the solutions from the last two estimation steps remain unchanged.
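The repeat-until-unchanged loop described above is close in spirit to FOCUSS-style iteratively reweighted minimum-norm solvers. A sketch under that interpretation, with a made-up toy lead field; the weighting rule here (reweight by the magnitude of the previous solution) is one standard choice, not necessarily the authors' exact weighting matrix:

```python
import numpy as np

def focuss_like(L, b, n_iter=20, lam=1e-6):
    """Iteratively reweighted minimum-norm solver (FOCUSS-style).
    L: lead-field matrix (n_sensors x n_sources), b: measurements."""
    x = np.ones(L.shape[1])              # initial (smooth) estimate
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + 1e-12)   # reweight by previous solution
        A = L @ W
        # weighted minimum-norm update: x = W A^T (A A^T + lam I)^-1 b
        x = W @ A.T @ np.linalg.solve(A @ A.T + lam * np.eye(len(b)), b)
    return x

# Toy example: 8 sensors, 30 sources, 2 truly active
rng = np.random.default_rng(1)
L = rng.standard_normal((8, 30))
x_true = np.zeros(30)
x_true[[4, 20]] = [1.0, -0.8]
x_hat = focuss_like(L, L @ x_true)
print(np.round(x_hat, 2))
```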
The Cauchy Problem in Local Spaces for the Complex Ginzburg-Landau Equation II. Contraction Methods
NASA Astrophysics Data System (ADS)
Ginibre, J.; Velo, G.
We continue the study of the initial value problem for the complex Ginzburg-Landau equation
Kaye, Elena A; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts
2012-10-01
To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. The five phase aberration data sets analyzed here were calculated based on preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., "MR-guided adaptive focusing of ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734-1747 (2010)] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients' phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing initial phase correction estimates based on previous patients' data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Covariance of the pairs of phase aberration data sets showed high correlation between the aberration data of several patients and suggested that subgroups can be formed based on level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of the nonaberrated intensity using fewer than 170 ZP modes. Initial estimates based on the average of the phase aberration data from the individual subgroups of subjects were shown to increase the intensity at the focal spot for the five subjects. The application of ZPs to phase aberration correction was shown to be beneficial for adaptive focusing of transcranial ultrasound. The skull-based phase aberrations were found to be well approximated by a number of ZP modes representing only a fraction of the number of elements in the hemispherical transducer. Implementing the initial phase aberration estimate together with the Zernike-based algorithm can improve robustness and can potentially greatly increase the viability of MR-ARFI-based focusing for clinical transcranial MRgFUS therapy.
Joint Estimation of Source Range and Depth Using a Bottom-Deployed Vertical Line Array in Deep Water
Li, Hui; Yang, Kunde; Duan, Rui; Lei, Zhixiong
2017-01-01
This paper presents a joint estimation method for source range and depth using a bottom-deployed vertical line array (VLA). The method utilizes the information on the arrival angle of the direct (D) path in the space domain and the interference characteristic of the D and surface-reflected (SR) paths in the frequency domain. The former relies on a ray-tracing technique to backpropagate the rays and produces an ambiguity surface of source range. The latter utilizes Lloyd's mirror principle to obtain an ambiguity surface of source depth. The acoustic transmission duct is the well-known reliable acoustic path (RAP). The ambiguity surface of the combined estimation is a dimensionless ad hoc function. Numerical simulations and experimental verification show that the proposed method is a good candidate for initial coarse estimation of source position. PMID:28590442
Colloid-Facilitated Transport of 137Cs in Fracture-Fill Material. Experiments and Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dittrich, Timothy M.; Reimus, Paul William
2015-10-29
In this study, we demonstrate how a combination of batch sorption/desorption experiments and column transport experiments were used to effectively parameterize a model describing the colloid-facilitated transport of Cs in the Grimsel granodiorite/FFM system. Cs partition coefficient estimates onto both the colloids and the stationary media obtained from the batch experiments were used as initial estimates of partition coefficients in the column experiments, and then the column experiment results were used to obtain refined estimates of the number of different sorption sites and the adsorption and desorption rate constants of the sites. The desorption portion of the column breakthrough curves highlighted the importance of accounting for adsorption-desorption hysteresis (or a very nonlinear adsorption isotherm) of the Cs on the FFM in the model, and this portion of the breakthrough curves also dictated that there be at least two different types of sorption sites on the FFM. In the end, the two-site model parameters estimated from the column experiments provided excellent matches to the batch adsorption/desorption data, which provided a measure of assurance in the validity of the model.
Brouwer, Anne-Marie; López-Moliner, Joan; Brenner, Eli; Smeets, Jeroen B J
2006-02-01
We propose and evaluate a source of information that ball catchers may use to determine whether a ball will land behind or in front of them. It combines estimates for the ball's horizontal and vertical speed. These estimates are based, respectively, on the rate of angular expansion and vertical velocity. Our variable could account for ball catchers' data of Oudejans et al. [The effects of baseball experience on movement initiation in catching fly balls. Journal of Sports Sciences, 15, 587-595], but those data could also be explained by the use of angular expansion alone. We therefore conducted additional experiments in which we asked subjects where simulated balls would land under conditions in which both angular expansion and vertical velocity must be combined for obtaining a correct response. Subjects made systematic errors. We found evidence for the use of angular velocity but hardly any indication for the use of angular expansion. Thus, if catchers use a strategy that involves combining vertical and horizontal estimates of the ball's speed, they do not obtain their estimates of the horizontal component from the rate of expansion alone.
NASA Astrophysics Data System (ADS)
Latypov, A. F.
2008-12-01
The fuel economy on the boost trajectory of an aerospace plane with energy supply to the free stream was estimated. Initial and final flight velocities were specified. A model of gliding flight above cold air in an infinite isobaric thermal wake was used. The fuel consumption rates were compared along optimal trajectories. The calculations were carried out for a combined power plant consisting of a ramjet and a liquid-propellant engine. An exergy model was built in the first part of the paper to estimate the ramjet thrust and specific impulse. A quadratic dependence on aerodynamic lift was used to estimate the aerodynamic drag of the aircraft. The energy for flow heating was obtained at the expense of an equivalent reduction of the exergy of the combustion products. Dependencies were obtained for the increase of the range coefficient of cruise flight at different Mach numbers. The second part of the paper presents a mathematical model for the boost interval of the aircraft flight trajectory and the computational results for the reduction of fuel consumption on the boost trajectory for a given value of the energy supplied in front of the aircraft.
Analysing Twitter and web queries for flu trend prediction.
Santos, José Carlos; Matos, Sérgio
2014-05-07
Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health-related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza-like illness in Portugal. Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross-validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p<0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the English language. Our results further validate those studies and show that, by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results.
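A minimal sketch of the two-stage pipeline described (a tweet classifier, then a linear regression from classification frequencies to incidence), assuming scikit-learn; the tweets, labels, weekly frequencies and incidence values are illustrative toy data, not the study's:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LinearRegression

# Stage 1: classify tweets as flu-related or not (toy labeled data)
tweets = ["estou com febre e gripe", "jogo de futebol hoje",
          "gripe forte esta semana", "novo filme no cinema"]
labels = [1, 0, 1, 0]
vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(tweets), labels)

# Stage 2: regress incidence on the weekly relative frequency of
# flu-classified tweets (toy weekly aggregates)
weekly_freq = np.array([[0.01], [0.03], [0.08], [0.05]])  # predictor
incidence = np.array([12.0, 30.0, 85.0, 52.0])            # ILI rate
reg = LinearRegression().fit(weekly_freq, incidence)
print(reg.predict([[0.06]]))  # estimate for a new week
```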
NASA Astrophysics Data System (ADS)
Berlanga, Juan M.; Harbaugh, John W.
The Tabasco region contains a number of major oilfields, including some of the emerging "giant" oil fields which have received extensive publicity. Fields in the Tabasco region are associated with large geologic structures which are readily detected by seismic surveys. The structures seem to be associated with deep-seated movement of salt, and they are complexly faulted. Some structures show as much as 1000 milliseconds of relief on seismic lines. The part of the Tabasco region that has been studied was surveyed with a close-spaced rectilinear network of seismic lines. A study interpreting the structure of the area initially used only a fraction of the total seismic data available. The purpose was to compare "predictions" of reflection time based on widely spaced seismic lines with "results" obtained along more closely spaced lines. This process of comparison simulates the sequence of events in which a reconnaissance network of seismic lines is used to guide a succession of progressively more closely spaced lines. A square gridwork was established with lines spaced at 10-km intervals, and, using machine-contoured maps, the results were compared with those obtained with seismic grids employing spacings of 5 and 2.5 km, respectively. The comparisons of predictions based on widely spaced lines with observations along closely spaced lines provide information from which an error function can be established. The error at any point can be defined as the difference between the predicted value for that point and the subsequently observed value at that point. Residuals obtained by fitting third-degree polynomial trend surfaces were used for comparison. The root mean square of the error measurement (expressed in seconds or milliseconds of reflection time) was found to increase more or less linearly with distance from the nearest seismic point. Oil-occurrence probabilities were established on the basis of frequency distributions of trend-surface residuals obtained by fitting and subtracting polynomial trend surfaces from the machine-contoured reflection time maps. We found that there is a strong preferential relationship between the occurrence of petroleum (i.e., its presence versus absence) and particular ranges of trend-surface residual values. An estimate of the probability of oil occurring at any particular geographic point can be calculated on the basis of the estimated trend-surface residual value. This estimate, however, must be tempered by the probable error in the estimate of the residual value provided by the error function. The result, we believe, is a simple but effective procedure for estimating exploration outcome probabilities where seismic data provide the principal form of information in advance of drilling. Implicit in this approach is the comparison between a maturely explored area, for which both seismic and production data are available and which serves as a statistical "training area", and the "target" area which is undergoing exploration and for which probability forecasts are to be calculated.
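A sketch of the trend-surface residual computation at the heart of this procedure: fit a third-degree polynomial surface to reflection times by least squares and subtract it. The synthetic grid below is illustrative only:

```python
import numpy as np

def trend_surface_residuals(x, y, t, degree=3):
    """Fit a polynomial trend surface t(x, y) by least squares and
    return the residuals (observed minus trend)."""
    terms = [(i, j) for i in range(degree + 1)
                    for j in range(degree + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    return t - A @ coef

# Toy reflection-time field with a broad trend plus local anomalies
rng = np.random.default_rng(2)
x, y = rng.uniform(0, 50, 500), rng.uniform(0, 50, 500)
t = 1.5 + 0.01 * x - 0.005 * y + 0.02 * rng.standard_normal(500)
res = trend_surface_residuals(x, y, t)
print(res.std())
```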
Photometric Studies of Orbital Debris at GEO
NASA Technical Reports Server (NTRS)
Seitzer, Patrick; Abercromby, Kira J.; Rodriguez-Cowardin, Heather M.; Barker, Ed; Foreman, Gary; Horstman, Matt
2009-01-01
We report on optical observations of debris at geosynchronous Earth orbit (GEO) using two telescopes simultaneously at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. The University of Michigan's 0.6/0.9-m Schmidt telescope MODEST (for Michigan Orbital DEbris Survey Telescope) was used in survey mode to find objects that potentially could be at GEO. Because GEO objects only appear in this telescope's field of view for an average of 5 minutes, a full six-parameter orbit cannot be determined. Interrupting the survey for follow-up observations leads to incompleteness in the survey results. Instead, as objects are detected with MODEST, initial predictions assuming a circular orbit are made for where the object will be for the next hour, and the objects are reacquired as quickly as possible on the CTIO 0.9-m telescope. This second telescope follows up during the first night and, if possible, over several more nights to obtain the maximum time arc possible and the best six-parameter orbit. Our goal is to obtain an initial orbit and calibrated colors for all detected objects fainter than R = 15 in order to estimate the orbital distribution of objects selected on the basis of two observational criteria: magnitude and angular rate. One objective is to estimate what fraction of objects selected on the basis of angular rate are not at GEO. A second objective is to obtain magnitudes and colors in standard astronomical filters (BVRI) for comparison with reflectance spectra of likely spacecraft materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baer, M.R.; Hobbs, M.L.; McGee, B.C.
Exponential-13,6 (EXP-13,6) potential parameters for 750 gases composed of 48 elements were determined and assembled in a database, referred to as the JCZS database, for use with the Jacobs Cowperthwaite Zwisler equation of state (JCZ3-EOS). The EXP-13,6 force constants were obtained by using literature values of Lennard-Jones (LJ) potential functions, by using corresponding states (CS) theory, by matching pure liquid shock Hugoniot data, and by using molecular volume to determine the approach radii with the well depth estimated from high-pressure isentropes. The JCZS database was used to accurately predict detonation velocity, pressure, and temperature for 50 different explosives with initial densities ranging from 0.25 g/cm3 to 1.97 g/cm3. Accurate predictions were also obtained for pure liquid shock Hugoniots, static properties of nitrogen, and gas detonations at high initial pressures.
Estimating the time evolution of NMR systems via a quantum-speed-limit-like expression
NASA Astrophysics Data System (ADS)
Villamizar, D. V.; Duzzioni, E. I.; Leal, A. C. S.; Auccaise, R.
2018-05-01
Finding the solutions of the equations that describe the dynamics of a given physical system is crucial for obtaining important information about its evolution. However, by using estimation theory, it is possible to obtain, under certain limitations, some information on its dynamics. The quantum-speed-limit (QSL) theory was originally used to estimate the shortest time in which a Hamiltonian drives an initial state to a final one for a given fidelity. Using the QSL theory in a slightly different way, we are able to estimate the running time of a given quantum process. For that purpose, we impose the saturation of the Anandan-Aharonov bound in a rotating frame of reference where the state of the system travels more slowly than in the original (laboratory) frame. Through this procedure it is possible to estimate the actual evolution time in the laboratory frame of reference with good accuracy when compared to previous methods. Our method is tested successfully in predicting the evolution time of nuclear spins 1/2 and 3/2 in NMR systems. We find that the time estimated with our method is more accurate than that of previous approaches by up to four orders of magnitude. One disadvantage of our method is that we need to solve a number of transcendental equations, which increases with the system dimension and the parameter discretization used to solve such equations numerically.
Crowdsourcing urban air temperatures through smartphone battery temperatures in São Paulo, Brazil
NASA Astrophysics Data System (ADS)
Droste, Arjan; Pape, Jan-Jaap; Overeem, Aart; Leijnse, Hidde; Steeneveld, Gert-Jan; Van Delden, Aarnout; Uijlenhoet, Remko
2017-04-01
Crowdsourcing as a method to obtain and apply vast datasets is rapidly becoming prominent in meteorology, especially for urban areas where traditional measurements are scarce. Earlier studies showed that smartphone battery temperature readings allow for estimating the daily and city-wide air temperature via a straightforward heat transfer model. This study advances these model estimations by studying spatially and temporally smaller scales. The accuracy of temperature retrievals as a function of the number of battery readings is also studied. An extensive dataset of over 10 million battery temperature readings is available for São Paulo (Brazil), for estimating hourly and daily air temperatures. The air temperature estimates are validated with air temperature measurements from a WMO station, an Urban Fluxnet site, and crowdsourced data from 7 hobby meteorologists' private weather stations. On a daily basis temperature estimates are good, and we show they improve by optimizing model parameters for neighbourhood scales as categorized in Local Climate Zones. Temperature differences between Local Climate Zones can be distinguished from smartphone battery temperatures. When validating the model for hourly temperature estimates, initial results are poor, but are vastly improved by using a diurnally varying parameter function in the heat transfer model rather than one fixed value for the entire day. The obtained results show the potential of large crowdsourced datasets in meteorological studies, and the value of smartphones as a measuring platform when routine observations are lacking.
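In its simplest form, the heat-transfer relation referenced above reduces to a linear map from mean battery temperature to air temperature; the sketch below adds the diurnally varying parameter idea by letting the coefficients depend on hour of day. The model form and every coefficient here are assumptions for illustration, not the study's fitted values:

```python
import numpy as np

def estimate_air_temp(t_battery, hour, m_by_hour, c_by_hour):
    """Estimate air temperature from averaged battery temperatures using
    a linear heat-transfer relation T_air = m(h) * T_battery + c(h),
    with parameters varying by hour of day."""
    return m_by_hour[hour] * t_battery + c_by_hour[hour]

# Illustrative diurnal parameter curves (assumed, not fitted values)
hours = np.arange(24)
m = 0.9 + 0.05 * np.sin(2 * np.pi * (hours - 15) / 24)
c = -8.0 + 2.0 * np.sin(2 * np.pi * (hours - 15) / 24)

t_batt = 33.5  # city-wide mean battery temperature at 14:00 (deg C)
print(estimate_air_temp(t_batt, 14, m, c))
```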
Wind estimates from cloud motions: Phase 1 of an in situ aircraft verification experiment
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Shenk, W. E.; Skillman, W.
1974-01-01
An initial experiment was conducted to verify geostationary satellite derived cloud motion wind estimates with in situ aircraft wind velocity measurements. Case histories of one-half hour to two hours were obtained for 3-10 km diameter cumulus cloud systems on 6 days. Also, one cirrus cloud case was obtained. In most cases the clouds were discrete enough that both the cloud motion and the ambient wind could be measured with the same aircraft Inertial Navigation System (INS). Since the INS drift error is the same for both the cloud motion and wind measurements, the drift error subtracts out of the relative motion determinations. The magnitude of the vector difference between the cloud motion and the ambient wind at the cloud base averaged 1.2 m/sec. The wind vector at higher levels in the cloud layer differed by about 3 m/sec to 5 m/sec from the cloud motion vector.
A Functional Varying-Coefficient Single-Index Model for Functional Response Data
Li, Jialiang; Huang, Chao; Zhu, Hongtu
2016-01-01
Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer’s Disease Neuroimaging Initiative (ADNI) study. PMID:29200540
Bull, R J; Robinson, M; Meier, J R; Stober, J
1982-01-01
Other workers have clearly shown, using bacterial and in vitro methods, that most, if not all, drinking water in the U.S. contains chemicals that possess mutagenic and/or carcinogenic activity. In the present work, increased numbers of tumors were observed with samples of organic material isolated from 5 U.S. cities administered as tumor initiators in mouse skin initiation/promotion studies. Only in one case was the result significantly different from control. In studies designed to test whether disinfection practice contributes significantly to the tumor-initiating activity found in drinking water, mixed results have been obtained. In one experiment, water disinfected by chlorination, ozonation or combined chlorine resulted in a significantly greater number of papillomas when compared to nondisinfected water. In two subsequent experiments, where water was obtained from the Ohio River at different times of the year, no evidence of increased initiating activity was observed with any disinfectant. Analysis of water obtained at comparable times of the year for total organic halogen and trihalomethane formation revealed a substantial variation in the formation of these products. Considering the problems such variability poses for estimating risks associated with disinfection by-products, a model system which makes use of commercially obtained humic acid as a substrate for chlorination was investigated using the Ames test. Humic and fulvic acids obtained from two surface waters, as well as the commercially obtained humic acid, were without activity in the TA 1535, TA 1537, TA 1538, TA 98 or TA 100 strains of S. typhimurium. Following treatment with a 0.8 molar ratio of chlorine (based on carbon), significant mutagenic activity was observed with all humic and fulvic acid samples. Comparisons of the specific mutagenic activity of the chlorinated products suggest that the commercial material might provide a useful model for studying health hazards associated with disinfection reaction by-products. PMID:7151763
Hydrologic Engineering in Planning
1981-04-01
…through abstraction of losses; 3) transform precipitation excess to streamflow; 4) estimate other contributions in order to obtain the total runoff. …similar to those of surface entry, transmission ability and storage capacity, and are illustrated in Figure 4.3. The initial losses are the losses that… [Figure 4.3: uniform losses and soil transmission rate plotted against time for average and antecedent (wet/dry) conditions and different soil characteristics.]
Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; ...
2015-06-03
One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or ‘spot’. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot’s spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic data-sets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
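A generic sketch of the ingredients named in the abstract (normal likelihood, smoothness penalty, non-negativity constraint, and a random-walk Metropolis sampler whose sample mean and variance give the reconstruction and its uncertainty); the blur operator, noise level and tuning constants are toy choices, not the authors':

```python
import numpy as np

def metropolis_spot(data, A, sigma, n_samp=5000, step=0.02, alpha=50.0, seed=3):
    """Random-walk Metropolis for x >= 0 with a smoothness prior and
    normal likelihood data ~ N(A x, sigma^2). Returns posterior samples."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.full(n, data.mean() / max(A.sum(axis=1).mean(), 1e-9))
    def log_post(x):
        if np.any(x < 0):
            return -np.inf                   # non-negativity constraint
        resid = data - A @ x
        smooth = np.sum(np.diff(x) ** 2)     # smoothness penalty
        return -0.5 * np.sum(resid**2) / sigma**2 - alpha * smooth
    lp, samples = log_post(x), []
    for _ in range(n_samp):
        prop = x + step * rng.standard_normal(n)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept step
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy 1D blur problem: A is a simple smoothing operator
n = 20
A = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)
x_true = np.exp(-0.5 * ((np.arange(n) - 10) / 2.0) ** 2)
data = A @ x_true + 0.05 * np.random.default_rng(4).standard_normal(n)
s = metropolis_spot(data, A, sigma=0.05)
print(s.mean(axis=0).round(2), s.var(axis=0).max().round(4))
```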
Implications of Pulser Voltage Ripple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnard, J J
In a recent set of measurements obtained by G. Kamin, W. Manning, A. Molvik, and J. Sullivan, the voltage waveform of the diode pulser had a ripple of approximately ±1.3% of the 65 kV flattop voltage, and the beam current had a larger corresponding ripple of approximately ±8.4% of the 1.5 mA average current at the location of the second Faraday cup, approximately 1.9 m downstream from the ion source. The period of the ripple was about 1 μs. It was initially unclear whether this large current ripple was in fact a true measurement of the current or a spurious measurement of noise produced by the pulser electronics. The purpose of this note is to provide simulations which closely match the experimental results and thereby corroborate the physical nature of those measurements, and to provide predictions of the amplitude of the current ripples as they propagate to the end of the linear transport section. Additionally, analytic estimates are obtained which lend some insight into the nature of the current fluctuations, provide an estimate of the maximum expected amplitude of the current fluctuations, and conversely indicate what initial ripple in the voltage source is allowed, given a smaller acceptable tolerance on the line charge density.
Re-estimating sample size in cluster randomised trials with active recruitment within clusters.
van Schie, S; Moerbeek, M
2014-08-30
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
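A minimal sketch of the re-estimation step, using the textbook design-effect formula to recompute cluster size after the internal pilot updates the intracluster correlation; the numbers are illustrative:

```python
import math

def individuals_per_cluster(n_plain, k_clusters, icc):
    """Participants per cluster so that k_clusters clusters per arm reach
    the power of a simple-randomised trial needing n_plain subjects per
    arm, via the design effect 1 + (m - 1) * icc: solve
    n_plain * deff = k_clusters * m for m."""
    denom = k_clusters - n_plain * icc
    if denom <= 0:
        raise ValueError("not attainable with this many clusters")
    return math.ceil(n_plain * (1 - icc) / denom)

# Planned with icc = 0.01, re-estimated at the internal pilot as 0.05
print(individuals_per_cluster(n_plain=128, k_clusters=20, icc=0.01))  # 7
print(individuals_per_cluster(n_plain=128, k_clusters=20, icc=0.05))  # 9
```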
Evaluation of Rotor Structural and Aerodynamic Loads using Measured Blade Properties
NASA Technical Reports Server (NTRS)
Jung, Sung N.; You, Young-Hyun; Lau, Benton H.; Johnson, Wayne; Lim, Joon W.
2012-01-01
The structural properties of the Higher harmonic Aeroacoustic Rotor Test (HART I) blades have been measured using the original set of blades tested in the wind tunnel in 1994. A comprehensive rotor dynamics analysis was performed to address the effect of the measured blade properties on the airloads, blade motions, and structural loads of the rotor. The measurements include bending and torsion stiffness, geometric offsets, and mass and inertia properties of the blade. The measured properties were correlated against the estimates obtained initially from the manufacturer of the blades. The previously estimated blade properties showed consistently higher stiffnesses, up to 30% higher for flap bending in the blade inboard root section. The measured offset between the center of gravity and the elastic axis is larger by about 5% of chord length than the estimated value. The comprehensive rotor dynamics analysis was carried out using the measured blade property set for the HART I rotor with and without HHC (Higher Harmonic Control) pitch inputs. A significant improvement in blade motions and structural loads is obtained with the measured blade properties.
Left ventricular endocardial surface detection based on real-time 3D echocardiographic data
NASA Technical Reports Server (NTRS)
Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.
2001-01-01
OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to the RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged by 2D echocardiography and reconstructed off-line, and to RT3DE data. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows navigation into the reconstructed volume and display of any section of the volume.
Potential costs of breast augmentation mammaplasty.
Schmitt, William P; Eichhorn, Mitchell G; Ford, Ronald D
2016-01-01
Augmentation mammaplasty is one of the most common surgical procedures performed by plastic surgeons. The aim of this study was to estimate the cost of the initial procedure and its subsequent complications, as well as to project the cost of Food and Drug Administration (FDA)-recommended surveillance imaging. The potential costs to the individual patient and society were calculated. Local plastic surgeons provided billing data for the initial primary silicone augmentation and reoperative procedures. Complication rates used for the cost analysis were obtained from the Allergan Core study on silicone implants. Imaging surveillance costs were considered in the estimations. The average baseline cost of initial silicone augmentation mammaplasty was calculated at $6335. The average total cost of primary breast augmentation over the first decade for an individual patient, including complications requiring reoperation and other ancillary costs, was calculated at $8226. Each decade thereafter cost an additional $1891. Costs may exceed $15,000 over an average lifetime, and the recommended implant surveillance could cost an additional $33,750. The potential cost of breast augmentation, which includes the costs of complications and imaging, is significantly higher than the initial cost of the procedure. Level III, economic and decision analysis study. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
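The projection reduces to simple arithmetic on the reported figures; a sketch:

```python
def augmentation_cost(decades, surveillance=False):
    """Projected cost (USD) using the abstract's figures: $8,226 for the
    first decade (initial surgery plus expected reoperations and ancillary
    costs), $1,891 for each decade thereafter, and $33,750 for the
    FDA-recommended imaging surveillance over a lifetime."""
    cost = 8226 + 1891 * max(decades - 1, 0)
    return cost + (33750 if surveillance else 0)

print(augmentation_cost(4))        # ~4 decades: 13899
print(augmentation_cost(4, True))  # with recommended imaging: 47649
```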
Durán-Álvarez, Juan C; Prado-Pano, Blanca; Jiménez-Cisneros, Blanca
2012-06-01
In conventional sorption studies, the prior presence of contaminants in the soil is not considered when estimating the sorption parameters because this is only a transient state. However, this parameter should be considered in order to avoid the under/overestimation of the soil sorption capacity. In this study, the sorption of naproxen, carbamazepine and triclosan was determined in a wastewater-irrigated soil, considering the initial mass of the compounds. Batch sorption-desorption tests were carried out at two soil depths (0-10 cm and 30-40 cm), using either 10 mM CaCl2 solution or untreated wastewater as the liquid phase. Data were satisfactorily fitted to the initial mass model. For the two soils, release of naproxen and carbamazepine was observed when the CaCl2 solution was used, but not in the soil/wastewater system. The compounds' release was higher in the topsoil than in the 30-40 cm soil. Sorption coefficients (Kd) for the CaCl2 solution tests showed that in the topsoil, triclosan (64.9 L/kg) is sorbed to a higher extent than carbamazepine and naproxen (5.81 and 2.39 L/kg, respectively). In the 30-40 cm soil, the carbamazepine and naproxen Kd values (11.4 and 4.41 L/kg, respectively) were higher than those obtained for the topsoil, while the triclosan Kd value was significantly lower than in the topsoil (19.2 L/kg). Differences in Kd values were found when comparing the results obtained for the two liquid phases. Sorption of naproxen and carbamazepine was reversible for both soils, while sorption of triclosan was found to be irreversible. This study shows the sorption behavior of three pharmaceuticals in a wastewater-irrigated soil, as well as the importance of considering the initial mass of target pollutants in the estimation of their sorption parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.
Yiannoutsos, Constantin Theodore; Johnson, Leigh Francis; Boulle, Andrew; Musick, Beverly Sue; Gsponer, Thomas; Balestre, Eric; Law, Matthew; Shepherd, Bryan E; Egger, Matthias
2012-01-01
Objective To provide estimates of mortality among HIV-infected patients starting combination antiretroviral therapy. Methods We report on the death rates of 122 925 adult HIV-infected patients aged 15 years or older from East, Southern and West Africa, Asia Pacific and Latin America. We use two methods to adjust for biases in mortality estimation resulting from loss to follow-up, based on double-sampling methods applied to patient outreach (Kenya) and linkage with vital registries (South Africa), and apply these to mortality estimates in the other three regions. Age, gender and CD4 count at the initiation of therapy were the factors considered as predictors of mortality at 6, 12, 24 and >24 months after the start of treatment. Results Patient mortality was high during the first 6 months after therapy for all patient subgroups and exceeded 40 per 100 patient-years among patients who started treatment at a low CD4 count. This trend was seen regardless of region and demographic or disease-related risk factors. Mortality was under-reported by up to 100% or more in estimates obtained from passive monitoring of patient vital status. Conclusions Despite advances in antiretroviral treatment coverage, many patients start treatment at very low CD4 counts and experience significant mortality during the first 6 months after treatment initiation. Active patient tracing and linkage with vital registries are critical in adjusting estimates of mortality, particularly in low- and middle-income settings. PMID:23172344
Seng, Bunrith; Kaneko, Hidehiro; Hirayama, Kimiaki; Katayama-Hirayama, Keiko
2012-01-01
This paper presents a mathematical model of vertical water movement and a performance evaluation of the model in static pile composting operated with neither air supply nor turning. The vertical moisture content (MC) model was developed with consideration of evaporation (internal and external evaporation), diffusion (liquid and vapour diffusion) and percolation, whereas additional water from substrate decomposition and irrigation was not taken into account. The evaporation term in the model was established on the basis of reference evaporation of the materials at known temperature, MC and relative humidity of the air. Diffusion of water vapour was estimated as a function of relative humidity and temperature, whereas diffusion of liquid water was obtained empirically from experiment by adopting Fick's law. Percolation was estimated by following Darcy's law. The model was applied to a column of composting wood chips with an initial MC of 60%. The simulation program was run for four weeks with a calculation step of 1 s. The simulated results were in reasonably good agreement with the experimental results. Only a top layer (less than 20 cm) had a considerable MC reduction; the deeper layers were comparable to the initial MC, and the bottom layer was higher than the initial MC. This model is a useful tool to estimate the MC profile throughout the composting period, and could be incorporated into biodegradation kinetic simulation of composting.
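A minimal explicit finite-difference sketch of the vertical processes the model combines (liquid diffusion, downward percolation, surface evaporation); the rate constants and grid are illustrative assumptions, not the paper's calibrated values, and vapour diffusion is omitted for brevity:

```python
import numpy as np

def simulate_moisture(mc0=0.60, n_layers=30, dz=2.0, dt=60.0, days=28,
                      d_liq=1e-4, perc=5e-7, evap=1.2e-7):
    """Explicit finite-difference sketch of vertical moisture movement in a
    static compost pile: liquid diffusion (Fick), downward percolation
    (Darcy-like) and surface evaporation. Rate constants are illustrative."""
    mc = np.full(n_layers, mc0)               # moisture content per layer
    for _ in range(int(days * 86400 / dt)):
        # downward flux between adjacent layers (positive = downward)
        f = perc * mc[:-1] - d_liq * np.diff(mc) / dz
        mc[:-1] -= f * dt / dz
        mc[1:] += f * dt / dz
        mc[0] -= evap * dt                    # evaporation from top layer
        np.clip(mc, 0.0, 0.99, out=mc)
    return mc

profile = simulate_moisture()
print(profile[:3].round(3), profile[-3:].round(3))  # top vs bottom layers
```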
Hakoyama, Tsuneo; Yokoyama, Tadashi; Kouchi, Hiroshi; Tsuchiya, Ken-ichi; Kaku, Hisatoshi; Arima, Yasuhiro
2002-11-01
Genes responding to Nod factors were identified by applying a differential display method to soybean suspension-cultured cells. Forty-five cDNA fragments derived from such genes were detected. Seven fragments (ssc1-ssc7) were successfully cloned. The putative product of the gene corresponding to ssc1 was estimated to be a disease-resistance protein related to the induction of the plant defense response against pathogens, and that corresponding to ssc7 a sucrose transporter. Amino acid sequences deduced from the full-length cDNAs corresponding to ssc2 and ssc4 were investigated, and it was shown that these polypeptides were equipped with a leucine zipper motif and with phosphorylation sites targeted by tyrosine kinase and cAMP-dependent protein kinase, respectively. In the differential display experiment, the transcriptional levels of three genes corresponding to ssc2, ssc3 and ssc5 were estimated to be up-regulated at 6 h after initiation of the treatment, and the remaining four were estimated to be down-regulated. However, transcription of the genes corresponding to all ssc was clearly repressed within 2 h after initiation of the treatment. Five of them were restored to their transcriptional level 6 h after initiation of the treatment, although the others remained repressed throughout the experimental period.
Respondent-Driven Sampling: An Assessment of Current Methodology.
Gile, Krista J; Handcock, Mark S
2010-08-01
Respondent-Driven Sampling (RDS) employs a variant of a link-tracing network sampling strategy to collect data from hard-to-reach populations. By tracing the links in the underlying social network, the process exploits the social structure to expand the sample and reduce its dependence on the initial (convenience) sample. The current estimators of population averages make strong assumptions in order to treat the data as a probability sample. We evaluate three critical sensitivities of the estimators: to bias induced by the initial sample, to uncontrollable features of respondent behavior, and to the without-replacement structure of sampling. Our analysis indicates: (1) that the convenience sample of seeds can induce bias, and the number of sample waves typically used in RDS is likely insufficient for the type of nodal mixing required to obtain the reputed asymptotic unbiasedness; (2) that preferential referral behavior by respondents leads to bias; and (3) that when a substantial fraction of the target population is sampled, the current estimators can have substantial bias. This paper sounds a cautionary note for the users of RDS. While current RDS methodology is powerful and clever, the favorable statistical properties claimed for the current estimates are shown to be heavily dependent on often unrealistic assumptions. We recommend ways to improve the methodology.
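For concreteness, a sketch of one widely used RDS estimator of the kind the paper evaluates: inverse-degree (Volz-Heckathorn-style) weighting, which is design-unbiased only under the strong random-referral assumptions the authors question:

```python
import numpy as np

def vh_estimator(y, degrees):
    """Volz-Heckathorn-style RDS estimator: inverse-degree weighting to
    offset the higher inclusion probability of well-connected respondents."""
    w = 1.0 / np.asarray(degrees, float)
    return np.sum(w * np.asarray(y, float)) / np.sum(w)

# Toy sample: trait indicator and self-reported network sizes
y = [1, 0, 1, 1, 0, 1, 0, 0]
deg = [20, 3, 15, 30, 4, 25, 5, 2]
print(np.mean(y), vh_estimator(y, deg))  # naive mean vs degree-weighted
```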
Shock initiation and detonation properties of bisfluorodinitroethyl formal (FEFO)
NASA Astrophysics Data System (ADS)
Gibson, L. L.; Sheffield, S. A.; Dattelbaum, Dana M.; Stahl, David B.
2012-03-01
FEFO is a liquid explosive with a density of 1.60 g/cm3 and an energy output similar to that of trinitrotoluene (TNT), making it one of the more energetic liquid explosives. Here we describe shock initiation experiments conducted on a two-stage gas gun, using magnetic gauges to measure the wave profiles during a shock-to-detonation transition. Unreacted Hugoniot data, time-to-detonation (overtake) measurements, and reactive wave profiles were obtained from each experiment. FEFO was found to initiate according to the homogeneous initiation model, like all other liquid explosives we have studied (nitromethane, isopropyl nitrate, hydrogen peroxide). The new unreacted Hugoniot points agree well with other published data. A universal liquid Hugoniot estimate slightly underpredicts the measured Hugoniot data. FEFO is very insensitive, with about the same shock sensitivity as the triamino-trinitro-benzene (TATB)-based explosive PBX9502 and cast TNT.
Simple estimation of linear 1+1 D tsunami run-up
NASA Astrophysics Data System (ADS)
Fuentes, M.; Campos, J. A.; Riquelme, S.
2016-12-01
An analytical expression is derived for the linear run-up of any given initial wave generated over a sloping bathymetry. Due to the simplicity of the linear formulation, complex transformations are unnecessary, because the shoreline motion is obtained directly in terms of the initial wave. This analytical result not only supports the invariance of the maximum run-up between linear and non-linear theories, but also yields the time evolution of the shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows the shoreline motion to be computed numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments or realistic cases in which the initial disturbance might be retrieved from seismic data rather than from a theoretical model.
NASA Astrophysics Data System (ADS)
Suryoputro, Nugroho; Suhardjono; Soetopo, Widandi; Suhartanto, Ery
2017-09-01
In calibrating hydrological models, there are generally two stages of activity: 1) determining realistic initial model parameters that represent the physical processes of the natural components, and 2) entering the initial parameter values, which are then refined by trial and error or automatically to obtain optimal values. Determining a realistic initial value takes experience and user knowledge of the model, which is a problem for novice users. This paper presents another approach to estimating the infiltration parameters of the tank model: the parameters are approximated from the runoff coefficient of the rational method. The initial value of the infiltration parameter is simply described as the difference between the percentage of total rainfall and the percentage of runoff. It is expected that the results of this research will accelerate the calibration of tank model parameters. The research was conducted on the Kali Bango sub-watershed in Malang Regency, with an area of 239.71 km2. Infiltration measurements were carried out from January 2017 to March 2017. Soil samples were analysed at the Soil Physics Laboratory, Department of Soil Science, Faculty of Agriculture, Universitas Brawijaya. Rainfall and discharge data were obtained from UPT PSAWS Bango Gedangan in Malang. Temperature, evaporation, relative humidity, and wind speed data were obtained from the BMKG station of Karang Ploso, Malang. The results showed that the initial value of the infiltration coefficient at the top tank outlet can be determined using the runoff coefficient of the rational method, with good results.
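The initial-value rule described above is a one-liner; a sketch, with an assumed runoff coefficient for illustration:

```python
def initial_infiltration_fraction(runoff_coefficient):
    """Initial estimate of the tank-model infiltration parameter as the
    fraction of rainfall that does not appear as direct runoff, i.e.
    100% of rainfall minus the runoff percentage (rational-method C)."""
    return 1.0 - runoff_coefficient

# Rational method: Q = C * i * A, with C the runoff coefficient.
# Illustrative value for a mixed agricultural catchment (assumed):
C = 0.35
print(initial_infiltration_fraction(C))  # 0.65 of rainfall infiltrates
```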
Configurational entropy as a lifetime predictor and pattern discriminator for oscillons
NASA Astrophysics Data System (ADS)
Gleiser, Marcelo; Stephens, Michelle; Sowinski, Damian
2018-05-01
Oscillons are long-lived, spherically symmetric, attractor scalar field configurations that emerge as certain field configurations evolve in time. It has been known for many years that there is a direct correlation between the initial configuration's shape and the resulting oscillon lifetime: a shape memory. In this paper, we use an information-entropic measure of spatial complexity known as differential configurational entropy (DCE) to obtain estimates of oscillon lifetimes in scalar field theories with symmetric and asymmetric double-well potentials. The time-dependent DCE is built from the Fourier transform of the two-point correlation function of the energy density of the scalar field configuration. We obtain a scaling law correlating oscillon lifetimes with measures obtained from their evolving DCE. For the symmetric double well, for example, we show that we can apply DCE to predict an oscillon's lifetime with an average accuracy of 6% or better. We also show that the DCE acts as a pattern discriminator, able to distinguish initial configurations that evolve into long-lived oscillons from other nonperturbative short-lived fluctuations.
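A sketch of the entropy computation described: take the power spectrum of the energy-density profile (the Fourier transform of its two-point correlation), normalize it into a modal fraction, and sum -f ln f over modes. This follows the general configurational-entropy construction; the discrete normalization used below is one common choice, not necessarily the paper's exact continuum definition:

```python
import numpy as np

def configurational_entropy(rho):
    """Configurational entropy of a 1D energy-density profile rho(x):
    S = -sum_k f_k * ln f_k, with modal fraction f_k built from the
    power spectrum of rho (the FT of its two-point correlation)."""
    power = np.abs(np.fft.rfft(rho)) ** 2
    f = power / power.sum()                 # modal fraction
    f = f[f > 0]
    return -np.sum(f * np.log(f))

# Two initial profiles with the same scale but different shapes
x = np.linspace(-20, 20, 1024)
gauss = np.exp(-x**2 / 4.0)
tanh_shell = 0.5 * (np.tanh(x + 3) - np.tanh(x - 3))
print(configurational_entropy(gauss), configurational_entropy(tanh_shell))
```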
Microprocessor utilization in search and rescue missions
NASA Technical Reports Server (NTRS)
Schwartz, M.; Bashkow, T.
1978-01-01
The position of an emergency transmitter may be determined by measuring the Doppler shift of the distress signal as received by an orbiting satellite. This requires the computation of an initial estimate and refinement of this estimate through an iterative, nonlinear least squares estimation. A version of the algorithm was implemented and tested by locating a transmitter on the premises and obtaining observations from a satellite. The computer used was an IBM 360/95. The position was determined to within the desired 10-km accuracy. The feasibility of performing the same task in real time using microprocessor technology was determined. The least squares algorithm was implemented on an Intel 8080 microprocessor. The results indicate that a microprocessor can easily match the IBM implementation in accuracy and can operate within the time limitations set.
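A sketch of the iterative nonlinear least-squares refinement described, as a Gauss-Newton loop on Doppler-shift residuals with a numerical Jacobian; the beacon frequency, pass geometry and fixed-altitude (2D) simplification are illustrative assumptions:

```python
import numpy as np

C, F0 = 3.0e8, 406e6   # speed of light (m/s); beacon frequency (Hz; assumed)

def doppler(xy, sat_pos, sat_vel):
    """Predicted Doppler shifts for a stationary emitter at (x, y, 0)."""
    p = np.array([xy[0], xy[1], 0.0])
    los = sat_pos - p                               # line-of-sight vectors
    rng = np.linalg.norm(los, axis=1)
    range_rate = np.sum(sat_vel * los, axis=1) / rng
    return -F0 / C * range_rate

def gauss_newton(f_obs, sat_pos, sat_vel, xy0, n_iter=10, h=1.0):
    """Iterative nonlinear least-squares refinement of emitter position."""
    xy = np.asarray(xy0, float)
    for _ in range(n_iter):
        r = f_obs - doppler(xy, sat_pos, sat_vel)
        # central-difference Jacobian of predicted shifts w.r.t. position
        J = np.column_stack([(doppler(xy + h * e, sat_pos, sat_vel)
                              - doppler(xy - h * e, sat_pos, sat_vel)) / (2 * h)
                             for e in np.eye(2)])
        xy = xy + np.linalg.lstsq(J, r, rcond=None)[0]
    return xy

# Synthetic satellite pass (toy geometry, metres) and position recovery
t = np.linspace(-300, 300, 13)
sat_pos = np.column_stack([7.5e3 * t, np.full_like(t, 2.0e5),
                           np.full_like(t, 8.5e5)])
sat_vel = np.tile([7.5e3, 0.0, 0.0], (len(t), 1))
f_obs = doppler([1.2e5, 3.0e4], sat_pos, sat_vel)
print(gauss_newton(f_obs, sat_pos, sat_vel, xy0=[0.0, 0.0]))
```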
Critical elements on fitting the Bayesian multivariate Poisson Lognormal model
NASA Astrophysics Data System (ADS)
Zamzuri, Zamira Hasanah binti
2015-10-01
Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters, and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.
NASA Technical Reports Server (NTRS)
Lane, John E.; Kasparis, Takis; Jones, W. Linwood; Metzger, Philip T.
2009-01-01
Methodologies to improve disdrometer processing, loosely based on mathematical techniques common to the field of particle flow and fluid mechanics, are examined and tested. The inclusion of advection and vertical wind field estimates appear to produce significantly improved results in a Lagrangian hydrometeor trajectory model, in spite of very strict assumptions of noninteracting hydrometeors, constant vertical air velocity, and time independent advection during the scan time interval. Wind field data can be extracted from each radar elevation scan by plotting and analyzing reflectivity contours over the disdrometer site and by collecting the radar radial velocity data to obtain estimates of advection. Specific regions of disdrometer spectra (drop size versus time) often exhibit strong gravitational sorting signatures, from which estimates of vertical velocity can be extracted. These independent wind field estimates become inputs and initial conditions to the Lagrangian trajectory simulation of falling hydrometeors.
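A minimal sketch of the constant-wind Lagrangian trajectory idea: integrate a drop's fall under a size-dependent terminal velocity, a constant vertical air velocity, and constant advection. The terminal-velocity fit is an Atlas-et-al.-style empirical stand-in, not necessarily the paper's choice:

```python
import numpy as np

def terminal_velocity(d_mm):
    """Raindrop terminal fall speed (m/s); Atlas et al. (1973)-style
    empirical fit, used here only as a plausible stand-in."""
    return 9.65 - 10.3 * np.exp(-0.6 * d_mm)

def backtrack(z0, d_mm, u_adv, w_air, dt=1.0):
    """Lagrangian trajectory of a drop of diameter d_mm falling from
    height z0 (m) through constant advection u_adv and constant vertical
    air velocity w_air, per the abstract's strict assumptions.
    Returns (fall time, horizontal displacement)."""
    z, x, t = z0, 0.0, 0.0
    vt = terminal_velocity(d_mm)
    while z > 0:
        z -= (vt - w_air) * dt     # an updraft (w_air > 0) slows descent
        x += u_adv * dt
        t += dt
    return t, x

# Gravitational sorting: a 1 mm vs 3 mm drop released at 3 km altitude
for d in (1.0, 3.0):
    t, x = backtrack(z0=3000.0, d_mm=d, u_adv=10.0, w_air=0.5)
    print(f"D={d} mm: {t:.0f} s fall, {x/1000:.1f} km downwind")
```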
Highway traffic estimation of improved precision using the derivative-free nonlinear Kalman Filter
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Zervos, Nikolaos; Melkikh, Alexey
2015-12-01
The paper proves that the PDE dynamic model of highway traffic is differentially flat, and by applying spatial discretization it shows that the model can be transformed into an equivalent linear canonical state-space form. For the latter representation of the traffic dynamics, state estimation is performed with the use of the Derivative-free nonlinear Kalman Filter. The proposed filter consists of the Kalman Filter recursion applied to the transformed state-space model of the highway traffic. Moreover, it makes use of an inverse transformation, based again on differential flatness theory, which enables estimates of the state variables of the initial nonlinear PDE model to be obtained. By avoiding approximate linearizations and the truncation of nonlinear terms from the PDE model of the traffic dynamics, the proposed filtering method outperforms, in terms of accuracy, other nonlinear estimators such as the Extended Kalman Filter. The article's theoretical findings are confirmed through simulation experiments.
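The core of the filter is the ordinary linear Kalman recursion applied to the flatness-transformed model; a sketch on a toy two-state canonical system (the matrices are illustrative, not a calibrated traffic model):

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of the linear Kalman Filter recursion
    applied to the transformed (canonical) state-space model."""
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Toy 2-state canonical model (integrator chain); density measured only
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
for z in [0.31, 0.33, 0.36, 0.38]:          # noisy density readings
    x, P = kalman_step(x, P, np.array([z]), A, C, Q, R)
print(x)
```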
NASA Technical Reports Server (NTRS)
1978-01-01
The author has identified the following significant results. The initial CAS estimates, which were made for each month from April through August, were considerably higher than the USDA/SRS estimates. This was attributed to: (1) the practice of considering bare ground as potential wheat and counting it as wheat; (2) overestimation of the wheat proportions in segments having only a small amount of wheat; and (3) the classification of confusion crops as wheat. At the end of the season most of the segments were reworked using improved methods based on experience gained during the season. In particular, new procedures were developed to solve the three problems listed above. These and other improvements used in the rework experiment resulted in at-harvest estimates that were much closer to the USDA/SRS estimates than those obtained during the regular season.
Robot acting on moving bodies (RAMBO): Preliminary results
NASA Technical Reports Server (NTRS)
Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madhu; Harwood, David
1989-01-01
A robot system called RAMBO is being developed. It is equipped with a camera and, given a sequence of simple tasks, can perform these tasks on a moving object. RAMBO is given a complete geometric model of the object. A low level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to relative locations near the object sufficient for achieving the tasks. More specifically, low level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Then trajectories are created using parametric cubic splines between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
Kim, Hyun Jung; Griffiths, Mansel W; Fazil, Aamir M; Lammerding, Anna M
2009-09-01
Foodborne illness contracted at food service operations is an important public health issue in Korea. In this study, the probabilities for growth of, and enterotoxin production by, Staphylococcus aureus in pork meat-based foods prepared in food service operations were estimated by Monte Carlo simulation. Data on the prevalence and concentration of S. aureus, as well as compliance with guidelines for time and temperature controls during food service operations, were collected. The growth of S. aureus was initially estimated by using the U.S. Department of Agriculture's Pathogen Modeling Program. A second model based on raw pork meat was derived to compare cell number predictions. The correlation between toxin level and cell number, as well as the minimum toxin dose obtained from published data, was adopted to quantify the probability of staphylococcal intoxication. When data gaps were found, assumptions were made based on guidelines for food service practices. Baseline risk model and scenario analyses were performed to indicate possible outcomes of staphylococcal intoxication under the scenarios generated based on these data gaps. Staphylococcal growth was predicted during holding before and after cooking, and the highest estimated concentration (4.59 log CFU/g for the 99.9th percentile value) of S. aureus was observed in raw pork initially contaminated with S. aureus and held before cooking. The estimated probability of staphylococcal intoxication was very low using currently available data. However, scenario analyses revealed an increased possibility of staphylococcal intoxication when increased levels of initial contamination in the raw meat and longer holding times both before and after cooking occurred.
NASA Astrophysics Data System (ADS)
Nakano, M.; Kumagai, H.; Yamashina, T.; Inoue, H.; Toda, S.
2007-12-01
On March 6, 2007, an earthquake doublet occurred around Lake Singkarak, central Sumatra in Indonesia. An earthquake with magnitude (Mw) 6.4 at 03:49 was followed two hours later (05:49) by a similar-size event (Mw 6.3). Lake Singkarak is located between the Sianok and Sumani fault segments of the Sumatran fault system, and is a pull-apart basin formed at the segment boundary. We investigate the source processes of the earthquakes using waveform data obtained from JISNET, a broad-band seismograph network in Indonesia. We first estimate the centroid source locations and focal mechanisms by waveform inversion carried out in the frequency domain. Since the stations are distributed almost linearly in the NW-SE direction, coincident with the Sumatran fault strike direction, the estimated centroid locations are not well resolved, especially in the direction orthogonal to the NW-SE direction. If we assume that these earthquakes occurred along the Sumatran fault, the first earthquake is located on the Sumani segment below Lake Singkarak and the second event is located a few tens of kilometers north of the first event on the Sianok segment. The focal mechanisms of both events point to almost identical right-lateral strike-slip vertical faulting, which is consistent with the geometry of the Sumatran fault system. We next investigate the rupture initiation points using the particle motions of the P-waves of these earthquakes observed at station PPI, which is located about 20 km north of Lake Singkarak. The initiation point of the first event is estimated in the north of the lake, which corresponds to the northern end of the Sumani segment. The initiation point of the second event is estimated at the southern end of the Sianok segment. The observed maximum amplitudes at stations located to the SE of the source region are larger for the first event than for the second one. On the other hand, the amplitudes at station BSI, located to the NW of the source region, are larger for the second event than for the first one. Since the magnitudes, focal mechanisms, and source locations are almost identical for the two events, the larger amplitudes for the second event at BSI may be due to the effect of rupture directivity. Accordingly, we obtain the following image of the source processes of the earthquake doublet: the first event initiated at the segment boundary and its rupture propagated along the Sumani segment in the SW direction. Then the second event, which may have been triggered by the first event, initiated at a location close to the hypocenter of the first event, but its rupture propagated along the Sianok segment in the NE direction, opposite to the first event. The previous significant seismic activity along the Sianok and Sumani segments occurred in 1926, and was also an earthquake doublet with magnitudes similar to those in 2007. If we assume that the time interval between the earthquake doublets of 1926 and 2007 represents the average recurrence interval and that typical slip in the individual earthquakes is 1 m, we obtain approximately 1 cm/year for the slip rate of the fault segments. Geological features indicate that Lake Singkarak is no more than a few million years old (Sieh and Natawidjaja, 2000, JGR). If the pull-apart basin has been created over the past few million years at the estimated slip rate of the segments, we obtain roughly 20 km of total offset on the Sianok and Sumani segments, which is consistent with the observed offset.
Our study supports the model of Sieh and Natawidjaja (2000) that the basin continues to be created by dextral slip on the en echelon Sumani and Sianok segments.
Kårstad, S B; Kvello, O; Wichstrøm, L; Berg-Nielsen, T S
2014-05-01
Parents' ability to correctly perceive their child's skills has implications for how the child develops. In some studies, parents have been shown to overestimate their child's abilities in areas such as IQ, memory and language. Emotion Comprehension (EC) is a skill central to children's emotion regulation, initially learned from their parents. In this cross-sectional study we first tested children's EC and then asked parents to estimate the child's performance. Thus, a measure of accuracy between child performance and parents' estimates was obtained. Subsequently, we obtained information on child and parent factors that might predict parents' accuracy in estimating their child's EC. Child EC and parental accuracy of estimation were tested by studying a community sample of 882 4-year-olds who completed the Test of Emotion Comprehension (TEC). The parents were instructed to guess their children's responses on the TEC. Predictors of parental accuracy of estimation were the child's actual performance on the TEC, child language comprehension, observed parent-child interaction, the education level of the parent, and child mental health. Ninety-one per cent of the parents overestimated their children's EC. On average, parents estimated that their 4-year-old children would display the level of EC corresponding to a 7-year-old. Accuracy of parental estimation was predicted by high child performance on the TEC, advanced child language comprehension, and higher-quality parent-child interaction. Parents' ability to estimate the level of their child's EC was characterized by substantial overestimation. The more competent the child, and the more sensitive and structuring the parent's interaction with the child, the more accurate the parent was in estimating the child's EC. © 2013 John Wiley & Sons Ltd.
Runoff simulation sensitivity to remotely sensed initial soil water content
NASA Astrophysics Data System (ADS)
Goodrich, D. C.; Schmugge, T. J.; Jackson, T. J.; Unkrich, C. L.; Keefer, T. O.; Parry, R.; Bach, L. B.; Amer, S. A.
1994-05-01
A variety of aircraft remotely sensed and conventional ground-based measurements of volumetric soil water content (SW) were made over two subwatersheds (4.4 and 631 ha) of the U.S. Department of Agriculture's Agricultural Research Service Walnut Gulch experimental watershed during the 1990 monsoon season. Spatially distributed soil water contents estimated remotely from the NASA push broom microwave radiometer (PBMR), an Institute of Radioengineering and Electronics (IRE) multifrequency radiometer, and three ground-based point methods were used to define prestorm initial SW for a distributed rainfall-runoff model (KINEROS; Woolhiser et al., 1990) at a small catchment scale (4.4 ha). At a medium catchment scale (631 ha or 6.31 km2) spatially distributed PBMR SW data were aggregated via stream order reduction. The impacts of the various spatial averages of SW on runoff simulations are discussed and are compared to runoff simulations using SW estimates derived from a simple daily water balance model. It was found that at the small catchment scale the SW data obtained from any of the measurement methods could be used to obtain reasonable runoff predictions. At the medium catchment scale, a basin-wide remotely sensed average of initial water content was sufficient for runoff simulations. This has important implications for the possible use of satellite-based microwave soil moisture data to define prestorm SW because the low spatial resolutions of such sensors may not seriously impact runoff simulations under the conditions examined. However, at both the small and medium basin scale, adequate resources must be devoted to proper definition of the input rainfall to achieve reasonable runoff simulations.
Determination of Eros Physical Parameters for Near Earth Asteroid Rendezvous Orbit Phase Navigation
NASA Technical Reports Server (NTRS)
Miller, J. K.; Antreasian, P. J.; Georgini, J.; Owen, W. M.; Williams, B. G.; Yeomans, D. K.
1995-01-01
Navigation of the orbit phase of the Near Earth Asteroid Rendezvous (NEAR) mission will require determination of certain physical parameters describing the size, shape, gravity field, attitude and inertial properties of Eros. Prior to launch, little was known about Eros except for its orbit, which could be determined with high precision from ground based telescope observations. Radar bounce and light curve data provided a rough estimate of Eros' shape and a fairly good estimate of the pole, prime meridian and spin rate. However, the determination of the NEAR spacecraft orbit requires a high precision model of Eros' physical parameters, and the ground based data provide only marginal a priori information. Eros is the principal source of perturbations of the spacecraft's trajectory and the principal source of data for determining the orbit. The initial orbit determination strategy is therefore concerned with developing a precise model of Eros. The original plan for Eros orbital operations was to execute a series of rendezvous burns beginning on December 20, 1998 and insert into a close Eros orbit in January 1999. As a result of an unplanned termination of the rendezvous burn on December 20, 1998, the NEAR spacecraft continued on its high velocity approach trajectory and passed within 3900 km of Eros on December 23, 1998. The planned rendezvous burn was delayed until January 3, 1999, which resulted in the spacecraft being placed on a trajectory that slowly returns to Eros, with a subsequent delay of close Eros orbital operations until February 2001. The flyby of Eros provided a brief glimpse and allowed a crude estimate of the pole, prime meridian and mass of Eros. More importantly for navigation, orbit determination software was executed in the landmark tracking mode to determine the spacecraft orbit, and a preliminary shape and landmark data base has been obtained. The flyby also provided an opportunity to test orbit determination operational procedures that will be used in February of 2001. The initial attitude and spin rate of Eros, as well as estimates of reference landmark locations, are obtained from images of the asteroid. These initial estimates are used as a priori values for a more precise refinement of these parameters by the orbit determination software, which combines optical measurements with Doppler tracking data to obtain solutions for the required parameters. As the spacecraft is maneuvered closer to the asteroid, estimates of spacecraft state, asteroid attitude, solar pressure, landmark locations and Eros physical parameters including mass, moments of inertia and gravity harmonics are determined with increasing precision. The determination of the elements of the inertia tensor of the asteroid is critical to spacecraft orbit determination and prediction of the asteroid attitude. The moments of inertia about the principal axes are also of scientific interest since they provide some insight into the internal mass distribution. Determination of the principal axes moments of inertia will depend on observing free precession in the asteroid's attitude dynamics. Gravity harmonics are in themselves of interest to science. When compared with the asteroid shape, some insight may be obtained into Eros' internal structure. The location of the center of mass derived from the first degree harmonic coefficients gives a direct indication of overall mass distribution. The second degree harmonic coefficients relate to the radial distribution of mass.
Higher degree harmonics may be compared with surface features to gain additional insight into mass distribution. In this paper, estimates of Eros physical parameters obtained from the December 23, 1998 flyby will be presented. This new knowledge will be applied to simplification of Eros orbital operations in February of 2001. The resulting revision to the orbit determination strategy will also be discussed.
Fischer, Marc L.; Parazoo, Nicholas; Brophy, Kieran; ...
2017-03-09
Here, we report simulation experiments estimating the uncertainties in California regional fossil fuel and biosphere CO2 exchanges that might be obtained by using an atmospheric inverse modeling system driven by the combination of ground-based observations of radiocarbon and total CO2, together with column-mean CO2 observations from NASA's Orbiting Carbon Observatory (OCO-2). The work includes an initial examination of statistical uncertainties in prior models for CO2 exchange, in radiocarbon-based fossil fuel CO2 measurements, in OCO-2 measurements, and in a regional atmospheric transport modeling system. Using these nominal assumptions for measurement and model uncertainties, we find that flask measurements of radiocarbon and total CO2 at 10 towers can be used to distinguish between different fossil fuel emission data products for major urban regions of California. We then show that the combination of flask and OCO-2 observations yields posterior uncertainties in monthly-mean fossil fuel emissions of ~5–10%, levels likely useful for policy relevant evaluation of bottom-up fossil fuel emission estimates. Similarly, we find that inversions yield uncertainties in monthly biosphere CO2 exchange of ~6–12%, depending on season, providing useful information on net carbon uptake in California's forests and agricultural lands. Finally, initial sensitivity analysis suggests that obtaining the above results requires control of systematic biases below approximately 0.5 ppm, placing requirements on accuracy of the atmospheric measurements, background subtraction, and atmospheric transport modeling.
NASA Astrophysics Data System (ADS)
Richards, D. A.; Nita, D. C.; Moseley, G. E.; Hoffmann, D. L.; Standish, C. D.; Smart, P. L.; Edwards, R.
2013-12-01
In addition to the many U-Th dated speleothem records (δ18O, δ13C, trace elements) of past environmental change based on continuous phases of calcite growth, discontinuous records also provide important constraints for a wide range of past states of the Earth system, including sea levels, permafrost extent, regional aridity and local cave flooding. Chronological information about human activity or faunal evolution can also be obtained where calcite can be seen to overlie cave art or mammalian bones, for example. Among the important considerations when determining the U-Th age of calcite that nucleates on an exposed surface are (1) initial 230Th/232Th, which can be elevated and variable in some settings, and (2) growth rate and sub-sample density, where extrapolation is required. By way of example, we present sea level data based on U-Th ages of vadose speleothems (i.e. formed above the water table and distinct from 'phreatic' examples) from caves of the circum-Caribbean, where calcite growth was interrupted by rising sea levels and then reinitiated after regression. These estimates demand large corrections, and the derived sea level constraints are compared with alternative data from coral reef terraces, phreatic overgrowths on speleothems or indirect, proxy evidence from oxygen isotopes to constrain rates of ice volume growth. Flowstones from the Bahamas provide useful sea level constraints because they present the longest and most continuous records in such settings (a function of preservation potential in addition to hydrological routing) and also the earliest growth post-emergence after sea level fall. We revisit estimates for sea level regression at the end of MIS 5 at ~80 ka (Richards et al, 1994; Lundberg and Ford, 1994) and make corrections for non-Bulk Earth initial Th contamination (230Th/232Th activity ratio > 10), based on isochron analysis of alternative stalagmites from the same settings and recent high resolution analysis. We also present new U-Th ages for contiguous layers sub-sampled from the first 2-3 mm of flowstone growth after the MIS 5 hiatus, using a sub-sample milling strategy that matches spatial resolution with maximum achievable precision (ThermoFinnigan Neptune MC-ICPMS methodology; 20-30 mg calcite, U ≈ 300 ng/g, 2σ age uncertainty ±600 a at ~80 ka). Isochron methods are used to estimate the range of initial 230Th/232Th ratios and are compared with elevated values obtained from stalagmites from the same cave (Beck et al, 2001; Hoffmann et al, 2010). A similar strategy is presented for a stalagmite with much faster axial growth, and the data are combined with additional sea level information from the same region to estimate the rate and uncertainty of sea level regression at the MIS 5/4 boundary. Elevated initial 230Th/232Th values have also been observed in a stalagmite from 6 m below present sea level in a cenote from the Yucatan, Mexico, where 5 phases of calcite between 10 and 5.5 ka are separated by serpulid worm tubes formed during periods of submergence. The transition between each phase provides constraints on the age and elevation of relative sea level, but the former is hampered by the uncertainty of the large initial 230Th/232Th correction. We consider the possible sources of elevated Th ratios: hydrogenous, colloidal and carbonate or other detrital components.
Abuja, P M; Albertini, R; Esterbauer, H
1997-06-01
Kinetic simulation can help obtain deeper insight into the molecular mechanisms of complex processes, such as lipid peroxidation (LPO) in low-density lipoprotein (LDL). We have previously set up a single-compartment model of this process, initiated with radicals generated externally at a constant rate, to show the interplay of radical scavenging and chain propagation. Here we focus on the initiating events, substituting the constant rate of initiation (Ri) by redox cycling of Cu2+ and Cu+. Our simulation reveals that early events in copper-mediated LDL oxidation include (1) the reduction of Cu2+ by tocopherol (TocOH), which generates the tocopheroxyl radical (TocO.), (2) the fate of TocO., which either is recycled or recombines with the lipid peroxyl radical (LOO.), and (3) the reoxidation of Cu+ by lipid hydroperoxide, which results in alkoxyl radical (LO.) formation. TocO., LOO., and LO. can therefore be regarded as primordial radicals, and the sum of their formation rates is the total rate of initiation, Ri. Since information on these initiating events cannot be obtained experimentally, the whole model was validated by comparing LDL oxidation in the presence and absence of bathocuproine with the behavior predicted by simulation. Simulation predicts that Ri decreases by 2 orders of magnitude during the lag time. This has important consequences for the estimation of oxidation resistance in copper-mediated LDL oxidation: after consumption of tocopherol, even small amounts of antioxidants may prolong the lag phase for a considerable time.
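The interplay of the two redox half-reactions can be sketched as a toy ODE system (invented rate constants and concentrations, not the authors' full LPO model), showing why Ri falls as tocopherol is consumed:

```python
import numpy as np
from scipy.integrate import solve_ivp

k_red, k_ox = 0.05, 0.02               # invented rate constants

def rhs(t, y):
    cu2, cu1, toc, looh = y
    r1 = k_red * cu2 * toc             # Cu2+ + TocOH -> Cu+ + TocO.
    r2 = k_ox * cu1 * looh             # Cu+ + LOOH -> Cu2+ + LO.
    return [-r1 + r2, r1 - r2, -r1, -r2]

sol = solve_ivp(rhs, (0.0, 300.0), [5.0, 0.0, 10.0, 2.0], max_step=1.0)
Ri = k_red * sol.y[0] * sol.y[2] + k_ox * sol.y[1] * sol.y[3]  # total initiation rate
print(Ri[0], Ri[-1])                   # Ri collapses as tocopherol is consumed
```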
Heat balance statistics derived from four-dimensional assimilations with a global circulation model
NASA Technical Reports Server (NTRS)
Schubert, S. D.; Herman, G. F.
1981-01-01
The reported investigation was conducted to develop a reliable procedure for obtaining the diabatic and vertical terms required for atmospheric heat balance studies. The method developed employs a four-dimensional assimilation mode in connection with the general circulation model of NASA's Goddard Laboratory for Atmospheric Sciences. The initial analysis was conducted with data obtained in connection with the 1976 Data Systems Test. On the basis of the results of the investigation, it appears possible to use the model's observationally constrained diagnostics to provide estimates of the global distribution of virtually all of the quantities which are needed to compute the atmosphere's heat and energy balance.
Equation of state for detonation product gases
NASA Astrophysics Data System (ADS)
Nagayama, Kunihito; Kubota, Shiro
2003-03-01
A thermodynamic analysis procedure for the detonation product equation of state (EOS), together with the experimental data set of detonation velocity as a function of initial density, has been formulated. The Chapman-Jouguet (CJ) state [W. Fickett and W. C. Davis, Detonation: Theory and Experiment (University of California Press, Berkeley, 1979)] on the p-ν plane is found to be well approximated by the envelope function formed by the collection of Rayleigh lines with many different initial density states. The Jones-Stanyukovich-Manson relation [W. Fickett and W. C. Davis, Detonation: Theory and Experiment (University of California Press, Berkeley, 1979)] is used to estimate the error included in this approximation. Based on this analysis, a simplified integration method to calculate the Grüneisen parameter along the CJ state curve with different initial densities, utilizing the cylinder expansion data, has been presented. The procedure gives a simple way of obtaining the EOS function compatible with the detonation velocity data. Theoretical analysis has been performed for the precision of the estimated EOS function. The EOS of the pentaerythritol tetranitrate explosive is calculated and compared with some of the experimental data, such as CJ pressure data and cylinder expansion data.
Automatic vasculature identification in coronary angiograms by adaptive geometrical tracking.
Xiao, Ruoxiu; Yang, Jian; Goyal, Mahima; Liu, Yue; Wang, Yongtian
2013-01-01
Owing to the uneven distribution of contrast agent and the perspective projection of X-ray imaging, the vasculature in an angiographic image has low contrast and is generally superposed on other organic tissues; it is therefore very difficult to identify the vasculature and quantitatively estimate the blood flow directly from angiographic images. In this paper, we propose a fully automatic algorithm named adaptive geometrical vessel tracking (AGVT) for coronary artery identification in X-ray angiograms. Initially, a ridge enhancement (RE) image is obtained utilizing multiscale Hessian information. Then, automatic initialization procedures, including seed point detection and initial direction determination, are performed on the RE image. The extracted ridge points can be adjusted to the geometrical centerline points adaptively through diameter estimation. Bifurcations are identified by discriminating the connecting relationships of the tracked ridge points. Finally, all the tracked centerlines are merged and smoothed by classifying the connecting components on the vascular structures. Synthetic angiographic images and clinical angiograms are used to evaluate the performance of the proposed algorithm. The proposed algorithm is compared with two other vascular tracking techniques in terms of efficiency and accuracy, demonstrating successful application of the proposed segmentation and extraction scheme to vasculature identification.
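The ridge enhancement step can be sketched at a single scale from the Hessian eigenvalues; the function below is a simplified stand-in for the paper's multiscale RE image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_enhance(img, sigma=2.0):
    # Second-order Gaussian derivatives give the Hessian at every pixel
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # Closed-form eigenvalues of the symmetric 2x2 Hessian
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
    lam = 0.5 * (Hxx + Hyy + tmp)      # larger eigenvalue
    # Dark tubular structures (contrast-filled vessels) have strongly positive
    # curvature across the vessel axis; keep only that positive response
    return np.maximum(lam, 0.0)

response = ridge_enhance(np.random.rand(64, 64))   # toy image
```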
Predicting future protection of respirator users: Statistical approaches and practical implications.
Hu, Chengcheng; Harber, Philip; Su, Jing
2016-01-01
The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
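The prediction itself is a conditional multivariate normal computation on the log scale; a sketch with invented variance components (the study's fitted values would replace these):

```python
import numpy as np

s_subj, s_day, s_res = 0.40, 0.15, 0.20   # hypothetical SDs on the log scale
v = s_subj**2 + s_day**2 + s_res**2       # marginal variance of a single test
# Tests 1 and 2 (same day) share subject and day effects; test 3, months later,
# shares only the subject effect.
Sigma = np.array([
    [v,                    s_subj**2 + s_day**2, s_subj**2],
    [s_subj**2 + s_day**2, v,                    s_subj**2],
    [s_subj**2,            s_subj**2,            v        ]])
mu = np.full(3, np.log(200.0))            # common mean fit factor (hypothetical)

past = np.log([250.0, 180.0])             # a worker's two initial-session results
S11, S12 = Sigma[:2, :2], Sigma[:2, 2]
cond_mean = mu[2] + S12 @ np.linalg.solve(S11, past - mu[:2])
cond_var = Sigma[2, 2] - S12 @ np.linalg.solve(S11, S12)
print(np.exp(cond_mean), cond_var)        # predictive distribution of a future test
```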
Wang, Xin; Wu, Linhui; Yi, Xi; Zhang, Yanqi; Zhang, Limin; Zhao, Huijuan; Gao, Feng
2015-01-01
Due to both the physiological and morphological differences in the vascularization between healthy and diseased tissues, pharmacokinetic diffuse fluorescence tomography (DFT) can provide contrast-enhanced and comprehensive information for tumor diagnosis and staging. In this regime, the extended Kalman filtering (EKF) based method shows numerous advantages including accurate modeling, online estimation of multiple parameters, and universal applicability to any optical fluorophore. Nevertheless, the performance of the conventional EKF hinges on exact prior knowledge of the initial values, which is inaccessible in practice. To address this issue, an adaptive-EKF scheme is proposed based on a two-compartmental model, which utilizes a variable forgetting-factor to compensate for the inaccuracy of the initial states and to emphasize the effect of the current data. Two-dimensional simulative investigations on a circular domain demonstrate that the proposed adaptive-EKF obtains better estimates of the pharmacokinetic rates than the conventional EKF and the enhanced EKF in terms of quantitativeness, noise robustness, and initialization independence. Further three-dimensional numerical experiments on a digital mouse model validate the efficacy of the method as applied in realistic biological systems.
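A minimal sketch of the forgetting-factor idea in an EKF-style recursion (illustrative, not the paper's two-compartmental DFT model): dividing the predicted covariance by a factor below one inflates it, discounting a poor initialization and weighting current data more:

```python
import numpy as np

def adaptive_ekf_step(x, P, z, f, h, F, H, Q, R, lam=0.9):
    # Predict, dividing the covariance by the forgetting factor: lam < 1 inflates
    # P, discounting stale information from a possibly poor initialization
    x_pred = f(x)
    P_pred = (F @ P @ F.T + Q) / lam
    # Standard EKF-style measurement update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy scalar run: track a rate parameter near 0.8 from noisy direct readings,
# starting from a deliberately overconfident wrong initial state.
rng = np.random.default_rng(3)
x, P, I = np.array([0.0]), 1e-4 * np.eye(1), np.eye(1)
for z in 0.8 + 0.01 * rng.standard_normal(50):
    x, P = adaptive_ekf_step(x, P, np.array([z]), lambda v: v, lambda v: v,
                             I, I, 1e-5 * I, 0.01 * I)
print(x)   # recovers despite the overconfident start, thanks to the inflation
```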
Blood flow estimation in gastroscopic true-color images
NASA Astrophysics Data System (ADS)
Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans
1995-05-01
The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximately calculating the hemoglobin concentration from a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived which enables an estimation of the hemoglobin concentration by means of the color values of the images. Additionally, a transformation of the color values is developed in order to improve luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are largely independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of its reproducibility.
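As a per-pixel illustration of the color-ratio principle, a simple endoscopic hemoglobin index of the form IHb = 32·log2(R/G) can be computed; this common approximation is a stand-in for, not a reproduction of, the paper's Kubelka-Munk-based estimator:

```python
import numpy as np

def hemoglobin_index(rgb):
    # rgb: H x W x 3 array of color values in [1, 255]
    r = np.clip(rgb[..., 0].astype(float), 1e-6, None)
    g = np.clip(rgb[..., 1].astype(float), 1e-6, None)
    return 32.0 * np.log2(r / g)       # higher where the mucosa is redder (more Hb)

img = np.random.default_rng(4).integers(1, 256, (8, 8, 3))  # toy true-color image
ihb_map = hemoglobin_index(img)        # per-pixel hemoglobin distribution
print(ihb_map.mean())
```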
Helin-Salmivaara, Arja; Lavikainen, Piia; Aarnio, Emma; Huupponen, Risto; Korhonen, Maarit Jaana
2014-01-01
Sequential cohort design (SCD) applying matching for propensity scores (PS) in accrual periods has been proposed to mitigate bias caused by channeling when calendar time is a proxy for strong confounders. We studied the channeling of patients according to atorvastatin and simvastatin initiation in Finland, starting from the market introduction of atorvastatin in 1998, and explored the SCD PS approach to analyzing the comparative effectiveness of atorvastatin versus simvastatin in the prevention of cardiovascular events (CVE). Initiators of atorvastatin or simvastatin use in the 45-75-year age range in 1998-2006 were characterized by their propensity of receiving atorvastatin over simvastatin, as estimated for 17 six-month periods. Atorvastatin (10 mg) and simvastatin (20 mg) initiators were matched 1:1 on the PS, as estimated for the whole cohort and within each period. Cox regression models were fitted conventionally, and also for the PS matched cohort and the periodically PS matched cohort, to estimate the hazard ratios (HR) for CVEs. Atorvastatin (10 mg) was associated with an 11%-12% lower incidence of CVE in comparison with simvastatin (20 mg). The HR estimates were the same for a conventional Cox model (0.88, 95% confidence interval 0.85-0.91), for the analysis in which the PS was used to match across all periods and the Cox model was adjusted for strong confounders (0.89, 0.85-0.92), and for the analysis in which PS matching was applied within sequential periods (0.88, 0.84-0.92). The HR from a traditional PS matched analysis was 0.80 (0.77-0.83). The SCD PS approach produced effect estimates similar to those obtained by matching for PS within the whole cohort and adjusting the outcome model for strong confounders, but at the cost of efficiency. A traditional PS matched analysis without further adjustment in the outcome model produced estimates further away from unity.
Millikan, Amy M; Weber, Natalya S; Niebuhr, David W; Torrey, E Fuller; Cowan, David N; Li, Yuanzhang; Kaminski, Brenda
2007-10-01
We are studying associations between selected biomarkers and schizophrenia or bipolar disorder among military personnel. To assess potential diagnostic misclassification and to estimate the date of illness onset, we reviewed medical records for a subset of cases. Two psychiatrists independently reviewed 182 service medical records retrieved from the Department of Veterans Affairs. Data were evaluated for diagnostic concordance between database diagnoses and reviewers. Interreviewer variability was measured by using proportion of agreement and the kappa statistic. Data were abstracted to estimate date of onset. High levels of agreement existed between database diagnoses and reviewers (proportion, 94.7%; kappa = 0.88) and between reviewers (proportion, 92.3%; kappa = 0.87). The median time between illness onset and initiation of medical discharge was 1.6 and 1.1 years for schizophrenia and bipolar disorder, respectively. High levels of agreement between investigators and database diagnoses indicate that diagnostic misclassification is unlikely. Discharge procedure initiation date provides a suitable surrogate for disease onset.
Lunar PMAD technology assessment
NASA Technical Reports Server (NTRS)
Metcalf, Kenneth J.
1992-01-01
This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for space exploration initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.
Schmid, Thomas; Bogdan, Martin; Günzel, Dorothee
2013-01-01
Quantifying changes in partial resistances of epithelial barriers in vitro is a challenging and time-consuming task in physiology and pathophysiology. Here, we demonstrate that electrical properties of epithelial barriers can be estimated reliably by combining impedance spectroscopy measurements, mathematical modeling and machine learning algorithms. Conventional impedance spectroscopy is often used to estimate epithelial capacitance as well as epithelial and subepithelial resistance. Based on this, the more refined two-path impedance spectroscopy makes it possible to further distinguish transcellular and paracellular resistances. In a next step, transcellular properties may be further divided into their apical and basolateral components. The accuracy of these derived values, however, strongly depends on the accuracy of the initial estimates. To obtain adequate accuracy in estimating subepithelial and epithelial resistance, artificial neural networks were trained to estimate these parameters from model impedance spectra. Spectra that reflect behavior of either HT-29/B6 or IPEC-J2 cells as well as the data scatter intrinsic to the used experimental setup were created computationally. To prove the proposed approach, reliability of the estimations was assessed with both modeled and measured impedance spectra. Transcellular and paracellular resistances obtained by such neural network-enhanced two-path impedance spectroscopy are shown to be sufficiently reliable to derive the underlying apical and basolateral resistances and capacitances. As an exemplary perturbation of pathophysiological importance, the effect of forskolin on the apical resistance of HT-29/B6 cells was quantified.
Isotherm, kinetic, and thermodynamic study of ciprofloxacin sorption on sediments.
Mutavdžić Pavlović, Dragana; Ćurković, Lidija; Grčić, Ivana; Šimić, Iva; Župan, Josip
2017-04-01
In this study, equilibrium isotherms, kinetics and thermodynamics of ciprofloxacin sorption on seven sediments in a batch sorption process were examined. The effects of contact time, initial ciprofloxacin concentration, temperature and ionic strength on the sorption process were studied. The Kd parameter from the linear sorption model was determined by linear regression analysis, while the Freundlich and Dubinin-Radushkevich (D-R) sorption models were applied to describe the equilibrium isotherms by linear and nonlinear methods. The estimated Kd values varied from 171 to 37,347 mL/g. The obtained values of E (free energy estimated from the D-R isotherm model) were between 3.51 and 8.64 kJ/mol, which indicates a physical nature of ciprofloxacin sorption on the studied sediments. According to the n values obtained from the Freundlich isotherm model (0.69 to 1.442), a measure of sorption intensity, ciprofloxacin sorption on the sediments ranges from poor to moderately difficult. Kinetics data were best fitted by the pseudo-second-order model (R² > 0.999). Thermodynamic parameters including the Gibbs free energy (ΔG°), enthalpy (ΔH°) and entropy (ΔS°) were calculated to estimate the nature of ciprofloxacin sorption. The results suggest that sorption on the sediments was a spontaneous exothermic process.
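The linear-regression route to the Freundlich parameters fits log qe = log KF + n·log Ce; a sketch with invented data points:

```python
import numpy as np

Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0])       # equilibrium concentration, mg/L
qe = np.array([12.0, 20.0, 33.0, 66.0, 110.0])  # sorbed amount, mg/kg (invented)

n, logKf = np.polyfit(np.log10(Ce), np.log10(qe), 1)  # slope = n, intercept = log KF
print(f"n = {n:.2f}, KF = {10**logKf:.1f}")
```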
New estimation method of neutron skyshine for a high-energy particle accelerator
NASA Astrophysics Data System (ADS)
Oh, Joo-Hee; Jung, Nam-Suk; Lee, Hee-Seock; Ko, Seung-Kook
2016-09-01
Skyshine is the dominant component of the prompt radiation at off-site locations. Several experimental studies have been performed to estimate the neutron skyshine at accelerator facilities. In this work, neutron transport from the source to off-site locations was simulated using the Monte Carlo codes FLUKA and PHITS. The transport paths were classified as skyshine, direct (transport), groundshine and multiple-shine to understand the contribution of each path and to develop a general evaluation method. The effect of each path was estimated in terms of the dose at distant locations. The neutron dose was calculated using the neutron energy spectra obtained from detectors placed up to a maximum of 1 km from the accelerator. The highest altitude of the sky region in this simulation was set at 2 km from the floor of the accelerator facility. The initial model of this study was the 10 GeV electron accelerator PAL-XFEL. Different compositions and densities of air, soil and ordinary concrete were applied in the calculation, and their dependences were reviewed. The estimation method used in this study was compared with the well-known methods suggested by Rindi, Stevenson and Stapleton, and also with the simple code SHINE3. The results obtained using this method agreed well with those using Rindi's formula.
NASA Astrophysics Data System (ADS)
Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.
2009-08-01
Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.
Rohani, S Alireza; Ghomashchi, Soroush; Agrawal, Sumit K; Ladak, Hanif M
2017-03-01
Finite-element models of the tympanic membrane are sensitive to the Young's modulus of the pars tensa. The aim of this work is to estimate the Young's modulus under a different experimental paradigm than currently used on the human tympanic membrane. These additional values could potentially be used by the auditory biomechanics community for building consensus. The Young's modulus of the human pars tensa was estimated through inverse finite-element modelling of an in-situ pressurization experiment. The experiments were performed on three specimens with a custom-built pressurization unit at a quasi-static pressure of 500 Pa. The shape of each tympanic membrane before and after pressurization was recorded using a Fourier transform profilometer. The samples were also imaged using micro-computed tomography to create sample-specific finite-element models. For each sample, the Young's modulus was then estimated by numerically optimizing its value in the finite-element model so simulated pressurized shapes matched experimental data. The estimated Young's modulus values were 2.2 MPa, 2.4 MPa and 2.0 MPa, and are similar to estimates obtained using in-situ single-point indentation testing. The estimates were obtained under the assumptions that the pars tensa is linearly elastic, uniform, isotropic with a thickness of 110 μm, and the estimates are limited to quasi-static loading. Estimates of pars tensa Young's modulus are sensitive to its thickness and inclusion of the manubrial fold. However, they do not appear to be sensitive to optimization initialization, height measurement error, pars flaccida Young's modulus, and tympanic membrane element type (shell versus solid). Copyright © 2017 Elsevier B.V. All rights reserved.
A sampling plan for riparian birds of the Lower Colorado River-Final Report
Bart, Jonathan; Dunn, Leah; Leist, Amy
2010-01-01
A sampling plan was designed for the Bureau of Reclamation for selected riparian birds occurring along the Colorado River from Lake Mead to the southerly International Boundary with Mexico. The goals of the sampling plan were to estimate long-term trends in abundance and investigate habitat relationships especially in new habitat being created by the Bureau of Reclamation. The initial objective was to design a plan for the Gila Woodpecker (Melanerpes uropygialis), Arizona Bell's Vireo (Vireo bellii arizonae), Sonoran Yellow Warbler (Dendroica petechia sonorana), Summer Tanager (Piranga rubra), Gilded Flicker (Colaptes chrysoides), and Vermilion Flycatcher (Pyrocephalus rubinus); however, too little data were obtained for the last two species. Recommendations were therefore based on results for the first four species. The study area was partitioned into plots of 7 to 23 hectares. Plot borders were drawn to place the best habitat for the focal species in the smallest number of plots so that survey efforts could be concentrated on these habitats. Double sampling was used in the survey. In this design, a large sample of plots is surveyed a single time, yielding estimates of unknown accuracy, and a subsample is surveyed intensively to obtain accurate estimates. The subsample is used to estimate detection ratios, which are then applied to the results from the extensive survey to obtain unbiased estimates of density and population size. These estimates are then used to estimate long-term trends in abundance. Four sampling plans for selecting plots were evaluated based on a simulation using data from the Breeding Bird Survey. The design with the highest power involved selecting new plots every year. Power with 80 plots surveyed per year was more than 80 percent for three of the four species. Results from the surveys were used to provide recommendations to the Bureau of Reclamation for their surveys of new habitat being created in the study area.
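The double-sampling correction amounts to a ratio estimator: the intensively surveyed subsample yields a detection ratio that scales the single-visit counts into density estimates. A sketch with invented counts:

```python
import numpy as np

rapid_all = np.array([3, 0, 2, 5, 1, 4, 2, 3])  # single-visit counts on all plots
rapid_sub = np.array([3, 2, 4])                 # rapid counts on the subsample
intensive = np.array([4, 3, 5])                 # intensive ("true") counts, same plots

detection_ratio = rapid_sub.sum() / intensive.sum()  # birds counted per bird present
plot_area_ha = 15.0                                  # nominal plot size (invented)
density = rapid_all.mean() / detection_ratio / plot_area_ha
print(f"detection ratio {detection_ratio:.2f}, density {density:.3f} birds/ha")
```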
QCD matter thermalization at the RHIC and the LHC
NASA Astrophysics Data System (ADS)
Xu, Zhe; Cheng, Luan; El, Andrej; Gallmeister, Kai; Greiner, Carsten
2009-06-01
Employing the perturbative QCD inspired parton cascade, we investigate kinetic and chemical equilibration of the partonic matter created in central heavy ion collisions at RHIC and LHC energies. Two types of initial conditions are chosen. One is generated by the model of wounded nucleons using the PYTHIA event generator and Glauber geometry. Another is considered as a color glass condensate. We show that kinetic equilibration is almost independent of the chosen initial conditions, whereas there is a sensitive dependence for chemical equilibration. The time scale of thermalization lies between 1 and 1.5 fm/c. The final parton transverse energy obtained from BAMPS calculations is compared with the RHIC data and is estimated for the LHC energy.
Mechanism of vacuum breakdown in radio-frequency accelerating structures
NASA Astrophysics Data System (ADS)
Barengolts, S. A.; Mesyats, V. G.; Oreshkin, V. I.; Oreshkin, E. V.; Khishchenko, K. V.; Uimanov, I. V.; Tsventoukh, M. M.
2018-06-01
It has been investigated whether explosive electron emission may be the initiating mechanism of vacuum breakdown in the accelerating structures of TeV linear electron-positron colliders (Compact Linear Collider). The physical processes involved in a dc vacuum breakdown have been considered, and the relationship between the voltage applied to the diode and the time delay to breakdown has been found. Based on the results obtained, the development of a vacuum breakdown in an rf electric field has been analyzed and the main parameters responsible for the initiation of explosive electron emission have been estimated. The formation of craters on the cathode surface during explosive electron emission has been numerically simulated, and the simulation results are discussed.
Robot Acting on Moving Bodies (RAMBO): Interaction with tumbling objects
NASA Technical Reports Server (NTRS)
Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madhu; Harwood, David
1989-01-01
Interaction with tumbling objects will become more common as human activities in space expand. Attempting to interact with a large complex object translating and rotating in space, a human operator using only his visual and mental capacities may not be able to estimate the object motion, plan actions or control those actions. A robot system (RAMBO) equipped with a camera, which, given a sequence of simple tasks, can perform these tasks on a tumbling object, is being developed. RAMBO is given a complete geometric model of the object. A low level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to relative locations near the object sufficient for achieving the tasks. More specifically, low level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Then trajectories are created using dynamic interpolations between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J
2016-02-01
A novel reparametrization-based INLA approach, as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in the multivariate animal model, is presented. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlation between different traits, which significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of the genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of multivariate animal models using modified Cholesky decompositions. This reparametrization-based approach is used in the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of the multivariate animal model. Immediate benefits are: (1) avoiding the difficulty of finding good starting values for the analysis, which can be a problem, for example, in Restricted Maximum Likelihood (REML); (2) Bayesian estimation of (co)variance components using INLA is faster to execute than Markov Chain Monte Carlo (MCMC), especially when realized relationship matrices are dense. The slight drawback is that priors for covariance matrices are assigned to elements of the Cholesky factor and not directly to the covariance matrix elements as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional methods such as MCMC and REML. We also present results obtained from simulated data sets with replicates and from field data in rice.
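The core of the reparametrization can be sketched as a map from an unconstrained vector to a positive-definite covariance matrix via a (log-)Cholesky factor; details of the authors' modified decomposition may differ:

```python
import numpy as np

def theta_to_cov(theta, k=3):
    # First k entries: log-diagonal of the Cholesky factor (ensures positivity);
    # remaining entries fill the strict lower triangle, unconstrained.
    L = np.zeros((k, k))
    L[np.diag_indices(k)] = np.exp(theta[:k])
    L[np.tril_indices(k, -1)] = theta[k:]
    return L @ L.T                     # symmetric positive definite by construction

theta = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 0.2])  # arbitrary unconstrained values
G = theta_to_cov(theta)                # e.g. a genetic covariance among 3 traits
print(np.linalg.eigvalsh(G))           # all eigenvalues strictly positive
```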
A model for the prediction of latent errors using data obtained during the development process
NASA Technical Reports Server (NTRS)
Gaffney, J. E., Jr.; Martello, S. J.
1984-01-01
A model, implemented in a program that runs on the IBM PC, for estimating the latent (or post-ship) error content of a body of software upon its initial release to the user is presented. The model employs the count of errors discovered at one or more of the error discovery activities during development, such as a design inspection, as the input data for a process which provides estimates of the total lifetime (injected) error content and of the latent (or post-ship) error content--the errors remaining at delivery. The model presumes that these activities cover all of the opportunities during the software development process for error discovery (and removal).
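The basic accounting in such a model can be sketched as follows; the per-phase detection efficiencies here are invented, whereas the actual model estimates them from the discovery profile:

```python
import numpy as np

found = np.array([120, 80, 45])        # errors found at design inspection, code
                                       # inspection, and test (invented counts)
eff = np.array([0.55, 0.50, 0.45])     # assumed per-phase detection efficiencies

frac_found = 1.0 - np.prod(1.0 - eff)  # overall fraction of injected errors caught
N_total = found.sum() / frac_found     # estimated lifetime (injected) error content
latent = N_total - found.sum()         # errors remaining at delivery
print(f"injected ~ {N_total:.0f}, latent ~ {latent:.0f}")
```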
Brain-computer interface for alertness estimation and improving
NASA Astrophysics Data System (ADS)
Hramov, Alexander; Maksimenko, Vladimir; Hramova, Marina
2018-02-01
Using wavelet analysis of electrical brain activity (EEG) signals, we study the processes of neural activity associated with the perception of visual stimuli. We demonstrate that the brain can process visual stimuli in two scenarios: (i) perception is characterized by suppression of the alpha waves and an increase in high-frequency (beta) activity, or (ii) the beta rhythm is not well pronounced, while the alpha-wave energy remains unchanged. Dedicated experiments show that motivation initiates the first scenario, explained by increased alertness. Based on the obtained results we build a brain-computer interface and demonstrate how the degree of alertness can be estimated and controlled in a real experiment.
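The alpha/beta band-energy comparison underlying the two scenarios can be sketched with a simple Welch power spectrum as a stand-in for the paper's wavelet analysis (synthetic signal, illustrative band limits):

```python
import numpy as np
from scipy import signal

fs = 250.0                                        # sampling rate, Hz
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # synthetic EEG

f, psd = signal.welch(eeg, fs, nperseg=512)
alpha = psd[(f >= 8) & (f <= 12)].sum()           # alpha-band energy
beta = psd[(f >= 15) & (f <= 30)].sum()           # beta-band energy
print(alpha / beta)   # this ratio drops in scenario (i), when beta activity rises
```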
Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
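DQAEM itself is formulated through a quantum (path-integral) framework; as a point of comparison, the classical deterministic-annealing variant of EM that it generalizes can be sketched in a few lines. The E-step responsibilities are raised to an inverse-temperature power beta that is gradually increased to 1, which smooths the likelihood surface early on and reduces sensitivity to initialization (a toy 1D Gaussian mixture, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

def annealed_em(x, K=2, betas=(0.2, 0.5, 0.8, 1.0), iters=50):
    mu = rng.choice(x, K)
    pi, var = np.full(K, 1.0 / K), np.ones(K)
    for beta in betas:                      # slowly remove the smoothing
        for _ in range(iters):
            # E-step: annealed responsibilities, flattened by beta < 1
            logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
                    - 0.5 * (x[:, None] - mu)**2 / var)
            r = np.exp(beta * logp)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: standard weighted updates
            Nk = r.sum(axis=0)
            mu = (r * x[:, None]).sum(axis=0) / Nk
            var = (r * (x[:, None] - mu)**2).sum(axis=0) / Nk
            pi = Nk / len(x)
    return pi, mu, var

print(annealed_em(x))
```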
Sari, Nazmi; Rotter, Thomas; Goodridge, Donna; Harrison, Liz; Kinsman, Leigh
2017-08-03
The costs of investing in health care reform initiatives to improve quality and safety have been underreported and are often underestimated. This paper reports direct and indirect cost estimates for the initial phase of the province-wide implementation of Lean activities in Saskatchewan, Canada. In order to obtain detailed information about each type of Lean event, as well as the total number of corresponding Lean events, we used the Provincial Kaizen Promotion Office (PKPO) Kaizen database. While the indirect cost of Lean implementation has been estimated using the corresponding wage rate for the event participants, the direct cost has been estimated using the fees paid to the consultant and other relevant expenses. The total cost for implementation of Lean over two years (2012-2014), including consultants and new hires, ranged from $44 million CAD to $49.6 million CAD, depending upon the assumptions used. Consultant costs accounted for close to 50% of the total. The estimated cost of Lean events alone ranged from $16 million CAD to $19.5 million CAD, with Rapid Process Improvement Workshops requiring the highest input of resources. Recognizing the substantial financial and human investments required to undertake reforms designed to improve quality and contain cost, policy makers must carefully consider whether and how these efforts result in the desired transformations. Evaluation of the outcomes of these investments must be part of the accountability framework, even prior to implementation.
Cole, Stephen R; Lau, Bryan; Eron, Joseph J; Brookhart, M Alan; Kitahata, Mari M; Martin, Jeffrey N; Mathews, William C; Mugavero, Michael J
2015-02-15
There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Fisher information in a quantum-critical environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun Zhe; Ma Jian; Lu Xiaoming
2010-08-15
We consider a process of parameter estimation in a spin-j system surrounded by a quantum-critical spin chain. Quantum Fisher information lies at the heart of the estimation task. We employ an Ising spin chain in a transverse field, which exhibits a quantum phase transition, as the environment. The Fisher information decays with time almost monotonically when the environment reaches the critical point. By choosing a fixed time or taking the time average, one can see that the quantum Fisher information presents a sudden drop at the critical point. Different initial states of the environment are considered. The phenomenon that the quantum Fisher information, namely, the precision of estimation, changes dramatically can be used to detect the quantum criticality of the environment. We also introduce a general method to obtain the maximal Fisher information for a given state.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prince, K.R.; Schneider, B.J.
This study obtained estimates of the hydraulic properties of the upper glacial and Magothy aquifers in the East Meadow area for use in analyzing the movement of reclaimed wastewater through the aquifer system. This report presents drawdown and recovery data from the two aquifer tests of 1978 and 1985, describes the six methods of analysis used, and summarizes the results of the analyses in tables and graphs. The drawdown and recovery data were analyzed through three simple analytical equations, two curve-matching techniques, and a finite-element radial-flow model. The resulting estimates of hydraulic conductivity, anisotropy, and storage characteristics were used as initial input values to the finite-element radial-flow model (Reilly, 1984). The flow model was then used to refine the estimates of the aquifer properties by more accurately representing the aquifer geometry and field conditions of the pumping tests.
Characterization of classical static noise via qubit as probe
NASA Astrophysics Data System (ADS)
Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif
2018-03-01
The dynamics of the quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for the QFI fixes the optimal initial state of the qubit that maximizes it. An approximate limit for the coupling time that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for the QFI, the qubit is used as a probe to precisely estimate the disorder parameter of the environment. A relation for the optimal interaction time with the environment is obtained, and a condition for the optimal measurement of the noise parameter of the environment is given. It is shown that all values of the noise parameter in the mentioned range are estimable with equal precision. A comparison of our results with previous studies in different classical environments is made.
Heli/SITAN: A Terrain Referenced Navigation algorithm for helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollowell, J.
1990-01-01
Heli/SITAN is a Terrain Referenced Navigation (TRN) algorithm that utilizes radar altimeter ground clearance measurements in combination with a conventional navigation system and a stored digital terrain elevation map to accurately estimate a helicopter's position. Multiple Model Adaptive Estimation (MMAE) techniques are employed, using a bank of single-state Kalman filters, to ensure that reliable position estimates are obtained even in the face of large initial position errors. A real-time implementation of the algorithm was tested aboard a US Army UH-1 helicopter equipped with a Singer-Kearfott Doppler Velocity Sensor (DVS) and a Litton LR-80 strapdown Attitude and Heading Reference System (AHRS). The median radial error of the position fixes provided in real time by this implementation was less than 50 m for a variety of mission profiles. 6 refs., 7 figs.
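The MMAE idea can be sketched compactly (an illustrative toy, not the Heli/SITAN code): each filter in a bank of single-state Kalman filters starts from a different position hypothesis, and the filters' measurement likelihoods are used to weight their estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
true_pos, R, Q = 120.0, 4.0, 0.01
hypotheses = np.array([0.0, 50.0, 100.0, 150.0])  # candidate initial positions
x, P = hypotheses.copy(), np.full(4, 100.0)       # one 1-state KF per hypothesis
w = np.full(4, 0.25)                              # model probabilities

for _ in range(30):
    z = true_pos + rng.normal(0, np.sqrt(R))      # ground-clearance-like measurement
    P = P + Q                                     # predict (static state)
    S = P + R                                     # innovation variance
    nu = z - x                                    # innovation for each filter
    K = P / S
    x, P = x + K * nu, (1 - K) * P                # update each filter
    lik = np.exp(-0.5 * nu**2 / S) / np.sqrt(2 * np.pi * S)
    w *= lik
    w /= w.sum()                                  # Bayes-update model weights

print("blended estimate:", float(w @ x))
```

The blended estimate converges toward the true position even though none of the initial hypotheses was correct, which is the property that makes MMAE robust to large initial position errors.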
An Improved Aerial Target Localization Method with a Single Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2017-01-01
This paper focuses on the problems encountered in actual data processing with the existing aerial target localization methods, analyzes the causes of these problems, and proposes an improved algorithm. Processing of sea-experiment data shows that the existing algorithms place high demands on the accuracy of the angle estimation. The improved algorithm relaxes the angle-estimation accuracy requirements and obtains robust estimation results. A closest-distance matching estimation algorithm and a horizontal-distance estimation compensation algorithm are proposed. Post-processing the data with a forward-backward two-direction double-filtering method improves the smoothing effect and allows the initial-stage data to be filtered, so that the filtering results retain more useful information. Aerial target height measurement methods are studied and estimation results for the aerial target are given, realizing three-dimensional localization of the aerial target and increasing the underwater platform's awareness of aerial targets, so that the underwater platform has better mobility and concealment. PMID:29135956
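Forward-backward double filtering in this spirit can be sketched as follows (an illustrative recursion, not the authors' exact filter): the same low-pass recursion is run once forward and once backward over the record, and the two passes are averaged so that the early samples, which a purely causal filter would distort, are smoothed as well.

```python
import numpy as np

def double_filter(z, alpha=0.2):
    """First-order low-pass run in both directions, then averaged."""
    fwd, bwd = np.empty_like(z), np.empty_like(z)
    fwd[0], bwd[-1] = z[0], z[-1]
    for k in range(1, len(z)):
        fwd[k] = alpha * z[k] + (1 - alpha) * fwd[k - 1]
        bwd[-1 - k] = alpha * z[-1 - k] + (1 - alpha) * bwd[-k]
    return 0.5 * (fwd + bwd)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
track = 1000 - 40 * t                      # illustrative closing horizontal distance (m)
smoothed = double_filter(track + rng.normal(0, 15, t.size))
print(np.abs(smoothed - track).mean())     # smoothing error stays small even at the ends
```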
Single-shot quantum state estimation via a continuous measurement in the strong backaction regime
NASA Astrophysics Data System (ADS)
Cook, Robert L.; Riofrío, Carlos A.; Deutsch, Ivan H.
2014-09-01
We study quantum tomography based on a stochastic continuous-time measurement record obtained from a probe field collectively interacting with an ensemble of identically prepared systems. In comparison to previous studies, we consider here the case in which the measurement-induced backaction has a non-negligible effect on the dynamical evolution of the ensemble. We formulate a maximum likelihood estimate for the initial quantum state given only a single instance of the continuous diffusive measurement record. We apply our estimator to the simplest problem: state tomography of a single pure qubit, which, during the course of the measurement, is also subjected to dynamical control. We identify a regime where the many-body system is well approximated at all times by a separable pure spin coherent state, whose Bloch vector undergoes a conditional stochastic evolution. We simulate the results of our estimator and show that we can achieve close to the upper bound of fidelity set by the optimal generalized measurement. This estimate is compared to, and significantly outperforms, an equivalent estimator that ignores measurement backaction.
Astrom, Raven L; Wadsworth, Sally J; DeFries, John C
2007-06-01
Results obtained from previous longitudinal studies of reading difficulties indicate that reading deficits are generally stable. However, little is known about the etiology of this stability. Thus, the primary objective of this first longitudinal twin study of reading difficulties is to provide an initial assessment of genetic and environmental influences on the stability of reading deficits. Data were analyzed from a sample of 56 twin pairs, 18 identical (monozygotic, MZ) and 38 fraternal (dizygotic, DZ), in which at least one member of each pair was classified as reading-disabled in the Colorado Learning Disabilities Research Center, and on whom follow-up data were available. The twins were tested at two time points (average age of 10.3 years at initial assessment and 16.1 years at follow-up). A composite measure of reading performance (PIAT Reading Recognition, Reading Comprehension and Spelling) was highly stable, with a stability correlation of .84. Data from the initial time point were first subjected to univariate DeFries-Fulker multiple regression analysis, and the resulting estimate of the heritability of the group deficit (h²g) was .84 (±.26). When the initial and follow-up data were then fitted to a bivariate extension of the basic DF model, bivariate heritability was estimated at .65, indicating that common genetic influences account for approximately 75% of the stability between reading measures at the two time points.
A Study of Flexible Composites for Expandable Space Structures
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
2016-01-01
Payload volume for launch vehicles is a critical constraint that impacts spacecraft design. Deployment mechanisms, such as those used for solar arrays and antennas, are approaches that have successfully accommodated this constraint; however, providing pressurized volumes that can be packaged compactly at launch and expanded in space is still a challenge. One approach that has been under development for many years is to utilize softgoods - woven fabric for straps, cloth, and, with appropriate coatings, bladders - to provide this expandable pressure vessel capability. The mechanics of woven structures is complicated by a response that is nonlinear and often nonrepeatable due to the discrete nature of the woven fiber architecture. This complexity reduces engineering confidence in reliably designing and certifying these structures, which increases costs due to increased requirements for system testing. The present study explores flexible composite material systems as an alternative to the heritage softgoods approach. Materials were obtained from vendors who utilize flexible composites for non-aerospace products, and some initial physical and mechanical properties of the materials were determined. Uniaxial mechanical testing was performed to obtain the stress-strain response of the flexible composites and their failure behavior. A failure criterion was developed from the data, and a space habitat application was used to estimate the relative performance of flexible composites compared to the heritage softgoods approach. Initial results are promising, with a 25% mass savings estimated for the flexible composite solution.
Simultaneous head tissue conductivity and EEG source location estimation.
Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott
2016-01-01
Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.
Wind-influenced projectile motion
NASA Astrophysics Data System (ADS)
Bernardo, Reginald Christian; Perico Esguerra, Jose; Day Vallejos, Jazmine; Jerard Canda, Jeff
2015-03-01
We solved the wind-influenced projectile motion problem with the same initial and final heights and obtained exact analytical expressions for the shape of the trajectory, range, maximum height, time of flight, time of ascent, and time of descent with the help of the Lambert W function. It turns out that the range and maximum horizontal displacement are not always equal. When launched at a critical angle, the projectile will return to its starting position. A launch angle of 90° maximizes the time of flight, time of ascent, time of descent, and maximum height, while the launch angle corresponding to maximum range is obtained by solving a transcendental equation. Finally, we expressed in a parametric equation the locus of points corresponding to maximum heights for projectiles launched from the ground with the same initial speed in all directions. We used the results to estimate how much a moderate wind can modify a golf ball's range and suggested other possible applications.
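The size of the wind effect can also be checked numerically, without the closed-form results (a sketch under assumed linear drag, with illustrative golf-ball parameters):

```python
import numpy as np
from scipy.integrate import solve_ivp

g, k = 9.81, 0.25          # gravity (m/s^2) and an assumed linear drag rate (1/s)

def range_with_wind(v0=70.0, angle_deg=12.0, w=0.0):
    """Horizontal range for launch speed v0, with constant horizontal wind w."""
    th = np.radians(angle_deg)
    def rhs(t, s):          # s = [x, y, vx, vy]; drag acts relative to the air
        x, y, vx, vy = s
        return [vx, vy, -k * (vx - w), -g - k * vy]
    hit = lambda t, s: s[1]             # event: projectile returns to y = 0
    hit.terminal, hit.direction = True, -1
    sol = solve_ivp(rhs, [0, 60], [0, 0, v0 * np.cos(th), v0 * np.sin(th)],
                    events=hit, max_step=0.01)
    return sol.y_events[0][0][0]        # x at landing

print(range_with_wind(w=0.0), range_with_wind(w=5.0))  # calm vs. 5 m/s tailwind
```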
More realistic power estimation for new user, active comparator studies: an empirical example.
Gokhale, Mugdha; Buse, John B; Pate, Virginia; Marquis, M Alison; Stürmer, Til
2016-04-01
Pharmacoepidemiologic studies are often expected to be sufficiently powered to study rare outcomes, but there is a sequential loss of power as study design options that minimize bias are implemented. We illustrate this using a study comparing pancreatic cancer incidence after initiating dipeptidyl-peptidase-4 inhibitors (DPP-4i) versus thiazolidinediones or sulfonylureas. We identified Medicare beneficiaries with at least one claim of DPP-4i or comparators during 2007-2009 and then applied the following steps: (i) exclude prevalent users, (ii) require a second prescription of the same drug, (iii) exclude prevalent cancers, (iv) exclude patients aged <66 years and (v) censor for treatment changes during follow-up. The power to detect hazard ratios (an effect measure strongly driven by the number of events) ≥ 2.0 estimated after step 5 was compared with the naïve power estimated prior to step 1. There were 19,388 and 28,846 DPP-4i and thiazolidinedione initiators during 2007-2009. The number of drug initiators dropped most after requiring a second prescription, outcomes dropped most after excluding patients with prevalent cancer, and person-time dropped most after requiring a second prescription and as-treated censoring. The naïve power (>99%) was considerably higher than the power obtained after the final step (~75%). In designing new-user active-comparator studies, one should be mindful of how steps minimizing bias affect sample size, number of outcomes and person-time. While actual numbers will depend on specific settings, applying generic percentage losses will improve estimates of power compared with the naïve approach, which largely ignores the steps taken to increase validity. Copyright © 2015 John Wiley & Sons, Ltd.
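The dependence of power on the number of events can be made concrete with Schoenfeld's approximation for a two-group time-to-event comparison (a standard textbook formula, not the paper's calculation; the event counts below are illustrative):

```python
from math import log, sqrt
from scipy.stats import norm

def cox_power(n_events, hr, p_exposed=0.4, alpha=0.05):
    """Approximate power to detect hazard ratio hr given n_events
    (Schoenfeld's formula for a two-group Cox comparison)."""
    z_a = norm.ppf(1 - alpha / 2)
    z = sqrt(n_events * p_exposed * (1 - p_exposed)) * abs(log(hr)) - z_a
    return norm.cdf(z)

# Power erodes as design steps remove events (illustrative counts):
for events in (120, 60, 35):
    print(events, round(cox_power(events, hr=2.0), 2))
```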
Melting in Superheated Silicon Films Under Pulsed-Laser Irradiation
NASA Astrophysics Data System (ADS)
Wang, Jin Jimmy
This thesis examines melting in superheated silicon films in contact with SiO2 under pulsed laser irradiation. An excimer-laser pulse was employed to induce heating of the film by irradiating the film through the transparent fused-quartz substrate such that most of the beam energy was deposited near the bottom Si-SiO2 interface. Melting dynamics were probed via in situ transient reflectance measurements. The temperature profile was estimated computationally by incorporating temperature- and phase-dependent physical parameters and the time-dependent intensity profile of the incident excimer-laser beam obtained from the experiments. The results indicate that a significant degree of superheating occurred in the subsurface region of the film. Surface-initiated melting was observed in spite of the internal heating scheme, which resulted in the film being substantially hotter at and near the bottom Si-SiO2 interface. By considering that the surface melts at the equilibrium melting point, the solid-phase-only heat-flow analysis estimates that the bottom Si-SiO2 interface can be superheated by at least 220 K during excimer-laser irradiation. It was found that at higher laser fluences (i.e., at higher temperatures), melting can be triggered internally. At heating rates of 10¹⁰ K/s, melting was observed to initiate at or near the (100)-oriented Si-SiO2 interface at temperatures estimated to be over 300 K above the equilibrium melting point. Based on theoretical considerations, it was deduced that melting in the superheated solid initiated via a nucleation and growth process. Nucleation rates were estimated from the experimental data using Johnson-Mehl-Avrami-Kolmogorov (JMAK) analysis. Interpretation of the results using classical nucleation theory suggests that nucleation of the liquid phase occurred via the heterogeneous mechanism along the Si-SiO2 interface.
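JMAK analysis of the kind mentioned can be sketched as follows (illustrative numbers, not the thesis data): the transformed fraction X(t) = 1 - exp(-(kt)^n) is linearized and fit to extract the Avrami exponent and rate constant.

```python
import numpy as np

t = np.array([5e-9, 1e-8, 2e-8, 4e-8])      # time (s), illustrative
X = np.array([0.05, 0.20, 0.60, 0.95])      # melted fraction, illustrative

# Avrami linearization: ln(-ln(1 - X)) = n*ln(t) + n*ln(k)
y = np.log(-np.log(1 - X))
n, c = np.polyfit(np.log(t), y, 1)
k = np.exp(c / n)
print(f"Avrami exponent n = {n:.2f}, rate constant k = {k:.3g} 1/s")
```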
Treatment strategies for pelvic organ prolapse: a cost-effectiveness analysis.
Hullfish, Kathie L; Trowbridge, Elisa R; Stukenborg, George J
2011-05-01
To compare the relative cost-effectiveness of treatment decision alternatives for post-hysterectomy pelvic organ prolapse (POP), a Markov decision analysis model was used to assess expectant management, use of a pessary, and surgery in terms of quality-adjusted life months gained over 1 year. Sensitivity analysis was conducted to determine whether the results depended on specific estimates of patient utilities for pessary use, probabilities for complications and other events, and estimated costs. Only two treatment alternatives were found to be efficient choices: initial pessary use and vaginal reconstructive surgery (VRS). Pessary use (including patients who eventually transitioned to surgery) achieved 10.4 quality-adjusted months at a cost of $10,000 per patient, while VRS obtained 11.4 quality-adjusted months at $15,000 per patient. Sensitivity analysis demonstrated that these baseline results depended on several key estimates in the model. This analysis indicates that pessary use and VRS are the most cost-effective treatment alternatives for treating post-hysterectomy vaginal prolapse. Additional research is needed to standardize POP outcomes and complications so that healthcare providers can best utilize cost information in balancing the risks and benefits of their treatment decisions.
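A Markov cohort model of this general shape can be sketched briefly (hypothetical states, transition probabilities, utilities, and costs, for illustration only):

```python
import numpy as np

states = ["pessary", "surgery", "resolved"]
# Monthly transition probabilities between states (illustrative only)
P = np.array([[0.90, 0.05, 0.05],
              [0.00, 0.20, 0.80],
              [0.00, 0.00, 1.00]])
utility = np.array([0.80, 0.60, 0.95])    # quality weight per state-month
cost = np.array([100.0, 1500.0, 20.0])    # cost per state-month (illustrative)

occupancy = np.array([1.0, 0.0, 0.0])     # cohort starts with pessary use
qalms, total_cost = 0.0, 0.0
for _ in range(12):                       # 12 monthly cycles = 1-year horizon
    qalms += occupancy @ utility
    total_cost += occupancy @ cost
    occupancy = occupancy @ P
print(f"{qalms:.1f} quality-adjusted months at ${total_cost:,.0f}")
```

Sensitivity analysis then amounts to re-running the loop over ranges of the utility, probability, and cost inputs.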
On A Problem Of Propagation Of Shock Waves Generated By Explosive Volcanic Eruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gusev, V. A.; Sobissevitch, A. L.
2008-06-24
Interdisciplinary study of flows of matter and energy in geospheres has become one of the most significant advances in Earth sciences. It is carried out by means of direct quantitative estimations based on detailed analysis of geological and geophysical observations and experimental data. The present contribution is an interdisciplinary study in nonlinear acoustics and physical volcanology dedicated to shock wave propagation in a viscous and inhomogeneous medium. The equations governing the evolution of shock waves with an arbitrary initial profile and an arbitrary beam cross-section are obtained. For the case of a weakly viscous medium, an asymptotic solution that allows the profile of a shock wave to be calculated at an arbitrary point has been derived. The analytical solution of the problem of propagation of shock pulses from the atmosphere into a two-phase fluid-saturated geophysical medium is analysed. Quantitative estimations were carried out with respect to experimental results obtained in the course of real explosive volcanic eruptions.
Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System
Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei
2018-01-01
The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to be distinguished under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
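The gravity refinement step can be illustrated with a minimal sketch (our own notation, not the paper's code): since the gravity magnitude is known, only a 2D perturbation in the tangent space of the current estimate needs to be optimized, and each update is retracted back onto the sphere of radius ||g||.

```python
import numpy as np

G = 9.81  # known gravity magnitude

def tangent_basis(g_hat):
    """Two unit vectors spanning the plane orthogonal to g_hat."""
    tmp = np.array([0.0, 0.0, 1.0])
    if abs(g_hat @ tmp) > 0.99:           # avoid a near-parallel helper axis
        tmp = np.array([1.0, 0.0, 0.0])
    b1 = np.cross(g_hat, tmp); b1 /= np.linalg.norm(b1)
    b2 = np.cross(g_hat, b1)
    return b1, b2

def retract(g_est, delta):
    """Apply a 2D tangent-space update and renormalize to magnitude G."""
    g_hat = g_est / np.linalg.norm(g_est)
    b1, b2 = tangent_basis(g_hat)
    g_new = g_hat * G + delta[0] * b1 + delta[1] * b2
    return G * g_new / np.linalg.norm(g_new)

g0 = np.array([0.3, -0.2, -9.7])          # rough initial gravity estimate
g1 = retract(g0, np.array([0.05, -0.02])) # one optimizer step (delta from the solver)
print(g1, np.linalg.norm(g1))             # magnitude stays exactly G
```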
NASA Astrophysics Data System (ADS)
Briseño, Jessica; Herrera, Graciela S.
2010-05-01
Herrera (1998) proposed a method for the optimal design of groundwater quality monitoring networks that involves space and time in a combined form. The method was applied later by Herrera et al. (2001) and by Herrera and Pinder (2005). To get the estimates of the contaminant concentration being analyzed, this method uses a space-time ensemble Kalman filter based on a stochastic flow and transport model. When the method is applied, it is important that the characteristics of the stochastic model be congruent with field data, but, in general, it is laborious to manually achieve a good match between them. For this reason, the main objective of this work is to extend the space-time ensemble Kalman filter proposed by Herrera to estimate hydraulic conductivity together with hydraulic head and contaminant concentration, and to apply it to a synthetic example. The method has three steps: 1) Given the mean and the semivariogram of the natural logarithm of hydraulic conductivity (ln K), random realizations of this parameter are obtained through two alternatives: sequential Gaussian simulation (SGSim) and the Latin hypercube sampling method (LHC). 2) The stochastic model is used to produce hydraulic head (h) and contaminant concentration (C) realizations for each one of the conductivity realizations. From these realizations the means of ln K, h and C are obtained; for h and C the means are calculated in space and time, together with the space-time cross-covariance matrix of h-ln K-C. The covariance matrix is obtained by averaging products of the ln K, h and C realizations at the estimation points and times, and at the positions and times with data of the analyzed variables. The estimation points are the positions at which estimates of ln K, h or C are gathered; analogously, the estimation times are those at which estimates of any of the three variables are gathered. 3) Finally, the ln K, h and C estimates are obtained using the space-time ensemble Kalman filter. The realization mean of each variable is used as the prior space-time estimate for the Kalman filter, and the space-time cross-covariance matrix of h-ln K-C as the prior estimate-error covariance matrix. The synthetic example has a modeling area of 700 x 700 square meters; a triangular mesh model with 702 nodes and 1306 elements is used. A pumping well located in the central part of the study area is considered. For the contaminant transport model, a contaminant source area is present in the western part of the study area. The estimation points for hydraulic conductivity, hydraulic head and contaminant concentration are located on a submesh of the model mesh (same locations for h, ln K and C), composed of 48 nodes spread throughout the study area with a separation of approximately 90 meters between nodes. The results were analyzed through the mean error, the root mean square error, the initial and final estimation maps of h, ln K and C at each time, and the initial and final variance maps of h, ln K and C. To obtain model convergence, 3000 realizations of ln K were required using SGSim, and only 1000 with LHC. The results show that for both alternatives the Kalman filter estimates of h, ln K and C using h and C data have errors whose magnitudes decrease as data are added.
Herrera, G. S. (1998), Cost Effective Groundwater Quality Sampling Network Design, Ph.D. thesis, University of Vermont, Burlington, Vermont, 172 pp. Herrera, G., Guarnaccia, J., Pinder, G., and Simuta, R. (2001), "Diseño de redes de monitoreo de la calidad del agua subterránea eficientes", Proceedings of the 2001 International Symposium on Environmental Hydraulics, Arizona, U.S.A. Herrera, G. S. and Pinder, G. F. (2005), Space-time optimization of groundwater quality sampling networks, Water Resour. Res., Vol. 41, No. 12, W12407, 10.1029/2004WR003626.
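The ensemble Kalman filter update at the heart of step 3 can be sketched generically (a standard stochastic EnKF update on synthetic vectors, not the authors' code; the state stacks ln K, h and C entries and the observation operator is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 60, 8, 100   # stacked [ln K, h, C] state; a few h/C data

X = rng.normal(size=(n_state, n_ens))        # prior ensemble, one realization per column
H = np.zeros((n_obs, n_state))
for i in range(n_obs):
    H[i, i * (n_state // n_obs)] = 1.0       # observe every few state entries
R = 0.1 * np.eye(n_obs)
y = rng.normal(size=n_obs)                   # observed h/C values

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                   # ensemble anomalies
Pxy = A @ (H @ A).T / (n_ens - 1)            # state-observation cross-covariance
Pyy = (H @ A) @ (H @ A).T / (n_ens - 1) + R
K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain from the ensemble

# Stochastic EnKF: perturb observations for each member, then update
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
X_post = X + K @ (Y - H @ X)
print(X_post.mean(axis=1)[:5])               # posterior mean (first entries)
```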
NASA Astrophysics Data System (ADS)
Yaparova, N.
2017-10-01
We consider the problem of heating a cylindrical body with an internal thermal source when the main characteristics of the material, such as specific heat, thermal conductivity and material density, depend on the temperature at each point of the body. We can control the surface temperature and the heat flow from the surface into the cylinder, but it is impossible to measure the temperature on the axis and the initial temperature in the entire body. This problem is associated with the temperature measurement challenge and appears in non-destructive testing, in thermal monitoring of heat treatment and in technical diagnostics of operating equipment. The mathematical model of heating is represented as a nonlinear parabolic PDE with an unknown initial condition. In this problem, both the Dirichlet and Neumann boundary conditions are given and it is required to calculate the temperature values at the internal points of the body. To solve this problem, we propose a numerical method based on finite-difference equations and a regularization technique. The computational scheme involves solving the problem at each spatial step. As a result, we obtain the temperature function at each internal point of the cylinder, beginning from the surface down to the axis. The application of the regularization technique ensures the stability of the scheme and allows us to significantly simplify the computational procedure. We investigate the stability of the computational scheme and prove the dependence of the stability on the discretization steps and the error level of the measurement results. To obtain experimental estimates of the temperature error, computational experiments were carried out. The computational results are consistent with the theoretical error estimates and confirm the efficiency and reliability of the proposed computational scheme.
A model for estimating pathogen variability in shellfish and predicting minimum depuration times.
McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick
2018-01-01
Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist norovirus risk managers with future control strategies.
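Under first-order depuration kinetics, the minimum-time prediction reduces to a one-liner (a generic exponential-decay sketch with illustrative numbers, not the paper's fitted model):

```python
import numpy as np

def min_depuration_time(initial_load, target_load, decay_rate_per_hour):
    """Hours needed for an exponentially decaying pathogen load to
    fall from initial_load to target_load."""
    return np.log(initial_load / target_load) / decay_rate_per_hour

# Illustrative: fast E. coli clearance vs. slow norovirus clearance
print(min_depuration_time(4600, 230, 0.12))   # ~25 h, within the 42 h class B minimum
print(min_depuration_time(1000, 200, 0.01))   # ~161 h, substantially longer
```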
NASA Astrophysics Data System (ADS)
Åberg Lindell, M.; Andersson, P.; Grape, S.; Håkansson, A.; Thulin, M.
2018-07-01
In addition to verifying operator-declared parameters of spent nuclear fuel, the ability to experimentally infer such parameters with a minimum of intrusiveness is of great interest and has long been sought after in the nuclear safeguards community. It can also be anticipated that such ability would be of interest for quality assurance in, e.g., recycling facilities in future Generation IV nuclear fuel cycles. One way to obtain information regarding spent nuclear fuel is to measure various gamma-ray intensities using high-resolution gamma-ray spectroscopy. While intensities from a few isotopes obtained from such measurements have traditionally been used pairwise, the approach in this work is to simultaneously analyze correlations between all available isotopes, using multivariate analysis techniques. Based on this approach, a methodology for inferring burnup, cooling time, and initial fissile content of PWR fuels using passive gamma-ray spectroscopy data has been investigated. PWR nuclear fuels, of UOX and MOX type, and their gamma-ray emissions were simulated using the Monte Carlo code Serpent. Data comprising relative isotope activities were analyzed with decision trees and support vector machines, for predicting fuel parameters and their associated uncertainties. From this work it may be concluded that up to a cooling time of twenty years, the 95% prediction intervals of burnup, cooling time and initial fissile content could be inferred to within approximately 7 MWd/kgHM, 8 months, and 1.4 percentage points, respectively. An attempt to estimate the plutonium content in spent UOX fuel, using the developed multivariate analysis model, is also presented. The results for Pu mass estimation are promising and call for further studies.
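The multivariate idea can be sketched with a generic regressor on synthetic isotope-ratio features (illustrative feature relations and data, not the simulated Serpent library):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
burnup = rng.uniform(10, 60, n)            # MWd/kgHM
cooling = rng.uniform(0.5, 20, n)          # years
# Hypothetical relative-activity features loosely tied to the fuel parameters
cs134_cs137 = 0.05 * burnup * np.exp(-cooling / 2.9) + rng.normal(0, 0.02, n)
eu154_cs137 = 0.01 * burnup * np.exp(-cooling / 12.0) + rng.normal(0, 0.005, n)
X = np.column_stack([cs134_cs137, eu154_cs137])

Xtr, Xte, ytr, yte = train_test_split(X, burnup, random_state=0)
model = DecisionTreeRegressor(max_depth=8).fit(Xtr, ytr)
err = np.abs(model.predict(Xte) - yte)
print(f"median burnup error: {np.median(err):.1f} MWd/kgHM")
```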
From baking a cake to solving the diffusion equation
NASA Astrophysics Data System (ADS)
Olszewski, Edward A.
2006-06-01
We explain how modifying a cake recipe by changing either the dimensions of the cake or the amount of cake batter alters the baking time. We restrict our consideration to the génoise and obtain a semiempirical relation for the baking time as a function of oven temperature, initial temperature of the cake batter, and dimensions of the unbaked cake. The relation, which is based on the diffusion equation, has three parameters whose values are estimated from data obtained by baking cakes in cylindrical pans of various diameters. The relation takes into account the evaporation of moisture at the top surface of the cake, which is the dominant factor affecting the baking time of a cake.
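Because the underlying relation comes from the diffusion equation, the leading-order scaling of baking time with size can be sketched directly (a pure scaling estimate, not the paper's fitted three-parameter relation):

```python
def scaled_baking_time(t_ref_min, d_ref_cm, d_new_cm):
    """Diffusive scaling: baking time grows like the square of the
    characteristic dimension (e.g., batter depth), t ~ L^2 / alpha."""
    return t_ref_min * (d_new_cm / d_ref_cm) ** 2

# Illustrative: a génoise baked 30 min at 4 cm depth, scaled to 5 cm depth
print(scaled_baking_time(30, 4, 5))   # ~47 min before evaporation corrections
```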
Kinetic characterisation of primer mismatches in allele-specific PCR: a quantitative assessment.
Waterfall, Christy M; Eisenthal, Robert; Cobb, Benjamin D
2002-12-20
A novel method of estimating the kinetic parameters of Taq DNA polymerase during rapid-cycle PCR is presented. A model was constructed using a simplified sigmoid function to represent substrate accumulation during PCR, in combination with the general equation describing high-substrate inhibition for Michaelis-Menten enzymes. The PCR progress curve was viewed as a series of independent reactions in which initial rates were accurately measured for each cycle. Kinetic parameters were obtained for allele-specific PCR (AS-PCR) to examine the effect of primer mismatches on amplification. A high degree of correlation was obtained, providing evidence of substrate inhibition as a major cause of the plateau phase that occurs in the later cycles of PCR.
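Treating the progress curve this way can be sketched as follows (synthetic fluorescence data and a generic logistic, not the paper's exact model): fit the sigmoid, then read per-cycle rates off the fitted curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(c, top, c50, slope):
    """Simplified sigmoid for product accumulation vs. cycle number."""
    return top / (1.0 + np.exp(-(c - c50) / slope))

cycles = np.arange(1, 41)
rng = np.random.default_rng(0)
signal = logistic(cycles, 100.0, 24.0, 2.5) + rng.normal(0, 1.0, cycles.size)

(top, c50, slope), _ = curve_fit(logistic, cycles, signal, p0=[90, 20, 3])
# Per-cycle "initial rate": increment of fitted product per cycle
rates = np.diff(logistic(cycles, top, c50, slope))
print(f"midpoint cycle {c50:.1f}; peak per-cycle rate {rates.max():.1f}")
```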
Solar radiation pressure resonances in Low Earth Orbits
NASA Astrophysics Data System (ADS)
Alessi, Elisa Maria; Schettino, Giulia; Rossi, Alessandro; Valsecchi, Giovanni B.
2018-01-01
The aim of this work is to highlight the crucial role that orbital resonances associated with solar radiation pressure can have in Low Earth Orbit. We review the corresponding literature, and provide an analytical tool to estimate the maximum eccentricity which can be achieved for well-defined initial conditions. We then compare the results obtained with the simplified model with the results obtained with a more comprehensive dynamical model. The analysis has important implications both from a theoretical point of view, because it shows that the role of some resonances was underestimated in the past, and also from a practical point of view in the perspective of passive deorbiting solutions for satellites at the end-of-life.
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study contains multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and the parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially, whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and contains four parameters. The three OED/PE strategies are considered and the impact of the design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but the global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
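For reference, the CTMI growth-rate model being identified can be written down directly (the standard Rosso et al. formulation; the parameter values below are illustrative):

```python
def ctmi(T, Tmin, Topt, Tmax, mu_opt):
    """Cardinal Temperature Model with Inflection (Rosso et al., 1993):
    microbial growth rate as a function of temperature."""
    if T <= Tmin or T >= Tmax:
        return 0.0
    num = (T - Tmax) * (T - Tmin) ** 2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2 * T))
    return mu_opt * num / den

# Illustrative cardinal temperatures for a mesophile (degrees C)
for T in (15, 25, 37, 45):
    print(T, round(ctmi(T, Tmin=5.0, Topt=37.0, Tmax=47.0, mu_opt=2.0), 3))
```

The four parameters (Tmin, Topt, Tmax, mu_opt) are exactly the quantities the single, global, and sequential OED/PE strategies aim to recover.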
Saturated-unsaturated flow to a well with storage in a compressible unconfined aquifer
NASA Astrophysics Data System (ADS)
Mishra, Phoolendra Kumar; Neuman, Shlomo P.
2011-05-01
Mishra and Neuman (2010) developed an analytical solution for flow to a partially penetrating well of zero radius in a compressible unconfined aquifer that allows inferring its saturated and unsaturated hydraulic properties from responses recorded in the saturated and/or unsaturated zones. Their solution accounts for horizontal as well as vertical flows in each zone. It represents unsaturated-zone constitutive properties in a manner that is at once mathematically tractable and sufficiently flexible to provide much improved fits to standard constitutive models. In this paper we extend the solution of Mishra and Neuman (2010) to the case of a finite-diameter pumping well with storage; investigate the effects of storage in the pumping well and delayed piezometer response on drawdowns in the saturated and unsaturated zones as functions of position and time; validate our solution against numerical simulations of drawdown in a synthetic aquifer having unsaturated properties described by the van Genuchten (1980)-Mualem (1976) model; use our solution to analyze 11 transducer-measured drawdown records from a seven-day pumping test conducted by University of Waterloo researchers at the Canadian Forces Base Borden in Ontario, Canada; and validate our parameter estimates against manually measured drawdown records in 14 other piezometers at Borden. We compare (a) our estimates of aquifer parameters with those obtained on the basis of all these records by Moench (2008) and (b) with those obtained on the basis of the 11 transducer-measured drawdown records by Endres et al. (2007); (c) we compare our estimates of van Genuchten-Mualem parameters with those obtained on the basis of laboratory drainage data from the site by Akindunni and Gillham (1992); and (d) we compare our corresponding prediction of how effective saturation varies with elevation above the initial water table under static conditions with a profile based on water contents measured in a neutron access tube at a radial distance of about 5 m from the center of the pumping well.
Interannual variability of mass transport in the Canary region from LADCP data
NASA Astrophysics Data System (ADS)
Comas-Rodríguez, Isis; Hernández-Guerra, Alonso; Vélez-Belchí, Pedro; Fraile-Nuez, Eugenio
2010-05-01
The variability of the Canary Current is a widely studied topic owing to its role as the eastern boundary of the North Atlantic Subtropical Gyre. The Canary region indeed provides an interesting study area for estimating the variability scales of the Subtropical Gyre as well as the water mass dynamics. RAPROCAN (RAdial PROfunda de CANarias - Canary deep hydrographic section) is a project that pursues these goals by obtaining hydrographic measurements during cruises taking place approximately along 29°N, to the north of the Canary Archipelago, twice a year since 2006. The full-depth sampling carried out allows the study of the temperature and salinity distributions and the calculation of mass transports across the section. The transport estimates are compared to those obtained from previous measurements and estimates in the region; transports and their variability through the last decade are thereby quantified. The most significant advance over previous works is the use of LADCP (Lowered Acoustic Doppler Current Profiler) data to inform the initial geostrophic calculations. Corrections are applied to each geostrophic profile considering the reference velocity obtained from the LADCP data, and ADCP-referenced transport estimates are obtained, providing a successful comparison between the velocity fields obtained from the hydrographic measurements. While this work shows the interannual variability observed in winter since 1997, preliminary results confirm previous hypotheses about the magnitude of the Canary Current. The results including LADCP data also provide new aspects of the circulation distribution across the Canary Archipelago. Moored current meter data were also taken into account in the close study of the Current through the Lanzarote Passage. Interesting conclusions were drawn that certify the usefulness of LADCP data in referencing geostrophic calculations, while corroborating the results obtained through this methodology. Hence, this work permits the quantification of mass fluxes across the section as well as the study of the water masses located in the Canary Basin and further analysis of the Subtropical Gyre variability with regard to its significance in the circulation and dynamics of the North Atlantic Ocean.
Houston, Natalie A.; Braun, Christopher L.
2004-01-01
This report describes the collection, analyses, and distribution of hydraulic-conductivity data obtained from slug tests completed in the alluvial aquifer underlying Air Force Plant 4 and Naval Air Station-Joint Reserve Base Carswell Field, Fort Worth, Texas, during October 2002 and August 2003 and summarizes previously available hydraulic-conductivity data. The U.S. Geological Survey, in cooperation with the U.S. Air Force, completed 30 slug tests in October 2002 and August 2003 to obtain estimates of horizontal hydraulic conductivity to use as initial values in a ground-water-flow model for the site. The tests were done by placing a polyvinyl-chloride slug of known volume beneath the water level in selected wells, removing the slug, and measuring the resulting water-level recovery over time. The water levels were measured with a pressure transducer and recorded with a data logger. Hydraulic-conductivity values were estimated from an analytical relation between the instantaneous displacement of water in a well bore and the resulting rate of head change. Although nearly two-thirds of the tested wells recovered 90 percent of their slug-induced head change in less than 2 minutes, 90-percent recovery times ranged from 3 seconds to 35 minutes. The estimates of hydraulic conductivity range from 0.2 to 200 feet per day. Eighty-three percent of the estimates are between 1 and 100 feet per day.
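One standard way to turn such recovery data into a conductivity estimate is the Hvorslev method (a textbook sketch with illustrative well geometry, not the Survey's analysis):

```python
import numpy as np

def hvorslev_K(r_casing, L_screen, R_well, t37):
    """Hvorslev slug-test estimate of horizontal hydraulic conductivity.
    t37 is the time for the head displacement to fall to 37% of its
    initial value; valid for L_screen / R_well > 8."""
    return r_casing**2 * np.log(L_screen / R_well) / (2.0 * L_screen * t37)

# Illustrative: 5 cm casing radius, 1.5 m screen, 10 cm borehole, t37 = 40 s
K = hvorslev_K(0.05, 1.5, 0.10, 40.0)
print(f"K ~ {K:.1e} m/s  (~{K * 86400 / 0.3048:.0f} ft/day)")
```

With these illustrative inputs the estimate lands around 16 ft/day, comfortably inside the 0.2 to 200 ft/day range reported above.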
Schmid, Thomas; Bogdan, Martin; Günzel, Dorothee
2013-01-01
Quantifying changes in partial resistances of epithelial barriers in vitro is a challenging and time-consuming task in physiology and pathophysiology. Here, we demonstrate that electrical properties of epithelial barriers can be estimated reliably by combining impedance spectroscopy measurements, mathematical modeling and machine learning algorithms. Conventional impedance spectroscopy is often used to estimate epithelial capacitance as well as epithelial and subepithelial resistance. Based on this, the more refined two-path impedance spectroscopy makes it possible to further distinguish transcellular and paracellular resistances. In a next step, transcellular properties may be further divided into their apical and basolateral components. The accuracy of these derived values, however, strongly depends on the accuracy of the initial estimates. To obtain adequate accuracy in estimating subepithelial and epithelial resistance, artificial neural networks were trained to estimate these parameters from model impedance spectra. Spectra that reflect behavior of either HT-29/B6 or IPEC-J2 cells as well as the data scatter intrinsic to the used experimental setup were created computationally. To prove the proposed approach, reliability of the estimations was assessed with both modeled and measured impedance spectra. Transcellular and paracellular resistances obtained by such neural network-enhanced two-path impedance spectroscopy are shown to be sufficiently reliable to derive the underlying apical and basolateral resistances and capacitances. As an exemplary perturbation of pathophysiological importance, the effect of forskolin on the apical resistance of HT-29/B6 cells was quantified. PMID:23840862
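The training setup can be sketched generically (synthetic one-time-constant spectra and a small scikit-learn network standing in for the authors' models; the circuit and parameter ranges are illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
f = np.logspace(0, 5, 30)                       # frequencies, 1 Hz - 100 kHz
omega = 2 * np.pi * f

def spectrum(R_sub, R_epi, C_epi):
    """Impedance magnitude of a subepithelial resistance in series with
    an epithelial RC element (a simple one-path model)."""
    Z = R_sub + R_epi / (1 + 1j * omega * R_epi * C_epi)
    return np.abs(Z)

n = 5000
R_sub = rng.uniform(5, 50, n)
R_epi = rng.uniform(50, 2000, n)
C_epi = rng.uniform(1e-6, 1e-5, n)
X = np.array([spectrum(*p) for p in zip(R_sub, R_epi, C_epi)])
X += rng.normal(0, 0.5, X.shape)                # measurement scatter

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(np.log(X[:4000]), np.log(R_epi[:4000]))
ratio = np.exp(net.predict(np.log(X[4000:]))) / R_epi[4000:]
print(f"median relative error: {np.median(np.abs(ratio - 1)) * 100:.1f}%")
```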
Thermal fatigue behaviour for a 316 L type steel
NASA Astrophysics Data System (ADS)
Fissolo, A.; Marini, B.; Nais, G.; Wident, P.
1996-10-01
This paper deals with the initiation and growth of cracks produced by thermal fatigue loadings on 316 L steel, which is a reference material for the first wall of the next fusion reactor, ITER. Two types of facilities have been built; as with real components, thermal cycles are repeatedly applied to the surface of the specimen. The first facility is mainly concerned with initiation, which is detected with a light microscope. The second allows the propagation of a single crack to be determined. Crack initiation is analyzed using the French RCC-MR code procedure and the strain-controlled isothermal fatigue curves. To predict crack growth, a model previously proposed by Haigh and Skelton is applied. This is based on the determination of effective stress intensity factors, which takes into account both plastic strain and crack closure phenomena. It is shown that the estimates obtained with these methodologies are in good agreement with experimental data.
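Growth predictions of this type reduce to integrating a Paris-type law in the effective stress intensity range (a generic sketch with illustrative constants and a simple closure factor, not the Haigh-Skelton calibration):

```python
import numpy as np

C, m = 1e-11, 3.0        # illustrative Paris constants (m/cycle, MPa*sqrt(m) units)
U = 0.7                  # assumed crack-closure factor: dK_eff = U * dK
dsigma = 400.0           # effective stress range at the surface (MPa)

a = 0.0001               # start from a 0.1 mm surface crack
for cycle in range(200000):
    dK = dsigma * np.sqrt(np.pi * a)        # simple edge-crack estimate
    a += C * (U * dK) ** m                  # Paris-type growth per cycle
    if a > 0.002:                           # stop at 2 mm depth
        print(f"~{cycle} cycles to grow from 0.1 mm to 2 mm")
        break
```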
Vision-guided gripping of a cylinder
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1991-01-01
The motivation for vision-guided servoing is taken from tasks in automated or telerobotic space assembly and construction. Vision-guided servoing requires the ability to perform rapid pose estimates and provide predictive feature tracking. Monocular information from a gripper-mounted camera is used to servo the gripper to grasp a cylinder. The procedure is divided into recognition and servo phases. The recognition stage verifies the presence of a cylinder in the camera field of view. Then an initial pose estimate is computed and uncluttered scan regions are selected. The servo phase processes only the selected scan regions of the image. Given the knowledge, from the recognition phase, that there is a cylinder in the image and knowing the radius of the cylinder, 4 of the 6 pose parameters can be estimated with minimal computation. The relative motion of the cylinder is obtained by using the current pose and prior pose estimates. The motion information is then used to generate a predictive feature-based trajectory for the path of the gripper.
Gong, Inna Y.; Schwarz, Ute I.; Crown, Natalie; Dresser, George K.; Lazo-Langner, Alejandro; Zou, GuangYong; Roden, Dan M.; Stein, C. Michael; Rodger, Marc; Wells, Philip S.; Kim, Richard B.; Tirona, Rommel G.
2011-01-01
Variable warfarin response during treatment initiation poses a significant challenge to providing optimal anticoagulation therapy. We investigated the determinants of initial warfarin response in a cohort of 167 patients. During the first nine days of treatment with pharmacogenetics-guided dosing, S-warfarin plasma levels and the international normalized ratio were obtained to serve as inputs to a pharmacokinetic-pharmacodynamic (PK-PD) model. Individual PK (S-warfarin clearance) and PD (Imax) parameter values were estimated. Regression analysis demonstrated that CYP2C9 genotype, kidney function, and gender were independent determinants of S-warfarin clearance. The values for Imax were dependent on VKORC1 and CYP4F2 genotypes, vitamin K status (as measured by plasma concentrations of proteins induced by vitamin K absence, PIVKA-II) and weight. Importantly, the indication for warfarin was a major independent determinant of Imax during initiation, with greater PD sensitivity in atrial fibrillation than in venous thromboembolism. To demonstrate the utility of the global PK-PD model, we compared the predicted initial anticoagulation responses with previously established warfarin dosing algorithms. These insights and modeling approaches have application to personalized warfarin therapy. PMID:22114699
Reliability Estimating Procedures for Electric and Thermochemical Propulsion Systems. Volume 2
1977-02-01
final form. For some components, the parameters are calculated from design factors (e.g., design life) that must be input when requested. Each component... Components are regarded as statistically identical if they are drawn from the same production lot because the initial and subsequent... table yields b = 0.0023. The factors are obtained from Tables 2.2.4-1 through 2.2.4-5: Factor / Value: environment (space, flight) 1; JANTXV quality 0.5; small signal ...
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Anastasio, Mark A.
2017-12-01
The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.
Aorta modeling with the element-based zero-stress state and isogeometric discretization
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Sasaki, Takafumi
2017-02-01
Patient-specific arterial fluid-structure interaction computations, including aorta computations, require an estimation of the zero-stress state (ZSS), because the image-based arterial geometries do not come from a ZSS. We have earlier introduced a method for estimation of the element-based ZSS (EBZSS) in the context of finite element discretization of the arterial wall. The method has three main components. 1. An iterative method, which starts with a calculated initial guess, is used for computing the EBZSS such that when a given pressure load is applied, the image-based target shape is matched. 2. A method for straight-tube segments is used for computing the EBZSS so that we match the given diameter and longitudinal stretch in the target configuration and the "opening angle." 3. An element-based mapping between the artery and straight-tube is extracted from the mapping between the artery and straight-tube segments. This provides the mapping from the arterial configuration to the straight-tube configuration, and from the estimated EBZSS of the straight-tube configuration back to the arterial configuration, to be used as the initial guess for the iterative method that matches the image-based target shape. Here we present the version of the EBZSS estimation method with isogeometric wall discretization. With isogeometric discretization, we can obtain the element-based mapping directly, instead of extracting it from the mapping between the artery and straight-tube segments, because everything needed for the element-based mapping, including the curvatures, can be obtained within an element. With NURBS basis functions, we may be able to achieve a level of accuracy similar to that of linear basis functions, but with larger and far fewer elements. Higher-order NURBS basis functions allow representation of more complex shapes within an element. To show how the new EBZSS estimation method performs, we first present 2D test computations with straight-tube configurations. Then we show how the method can be used in a 3D computation where the target geometry comes from a medical image of a human aorta.
Sayed, Mohammed E; Porwal, Amit; Al-Faraj, Nida A; Bajonaid, Amal M; Sumayli, Hassan A
2017-07-01
Several techniques and methods have been proposed to estimate anterior tooth dimensions in edentulous patients. However, this procedure remains challenging, especially when preextraction records are not available. Therefore, the purpose of this study is to evaluate some of the existing extraoral and intraoral methods for estimation of anterior tooth dimensions and to propose a novel method for estimation of central incisor width (CIW) and length (CIL) for a Saudi population. Extraoral and intraoral measurements were recorded for a total of 236 subjects. Descriptive statistical analysis and Pearson's correlation tests were performed. Associations were evaluated between combined anterior teeth width (CATW) and interalar width (IAW), intercommissural width (ICoW), and interhamular notch distance (IHND) plus 10 mm. The linear relationships of central incisor length (CIL) with facial height (FH) and of CIW with bizygomatic width (BZW) were also evaluated. Significant correlations were found between CATW and both ICoW and IAW (p-values < 0.0001); however, no correlation was found relative to IHND plus 10 mm (p-value = 0.456). Further, no correlation was found between FH and right CIL or between BZW and right CIW (p-values = 0.255 and 0.822). The means of CIL, CIW, incisive papillae-fovea palatinae distance (IP-FP), and IHND were used to estimate the central incisor dimensions: CIL = FP-IP distance/4.45, CIW = IHND/4.49. It was concluded that the ICoW and IAW measurements are the only predictable methods to estimate the initial reference value for CATW, and a novel intraoral approach was proposed for estimation of CIW and CIL in the given population. Based on the results of the study, ICoW and IAW measurements can be useful in estimating the initial reference value for CATW, while the proposed approach using specific palatal dimensions can be used for estimating the width and length of central incisors. These methods are crucial to obtaining esthetic treatment results within the parameters of the given population.
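The two palatal ratios reported above translate directly into a small helper function; a minimal sketch, with the function name and example measurements being illustrative only.

```python
def estimate_central_incisor(ip_fp_mm: float, ihnd_mm: float) -> tuple[float, float]:
    """Estimate central incisor length and width (mm) from palatal landmarks,
    using the study's reported ratios CIL = (IP-FP)/4.45 and CIW = IHND/4.49."""
    cil = ip_fp_mm / 4.45
    ciw = ihnd_mm / 4.49
    return cil, ciw

# Hypothetical measurements: IP-FP = 46 mm, IHND = 38 mm.
cil, ciw = estimate_central_incisor(46.0, 38.0)
print(f"CIL ~ {cil:.1f} mm, CIW ~ {ciw:.1f} mm")
```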
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment, because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method, in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations, often introduces large errors in shelf-life prediction. In this study, shelf-life prediction of a model pharmaceutical preparation is proposed utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition. This method was compared to traditional shelf-life prediction approaches in terms of the time required to predict shelf life and the associated error in shelf-life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimate compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should be started from several initial guesses of the parameter set.
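As a rough illustration of the EM scheme described here (simplified to uncensored, two-subpopulation data, unlike the paper's censored-data derivation), the following sketch alternates responsibility updates with weighted Weibull likelihood maximization; all data and starting values are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# Synthetic uncensored ("postmortem"-style) failure times, two subpopulations.
t = np.concatenate([weibull_min.rvs(0.8, scale=50, size=200, random_state=rng),
                    weibull_min.rvs(3.0, scale=200, size=300, random_state=rng)])

def em_weibull_mixture(t, k=2, iters=50):
    n = len(t)
    w = np.full(k, 1.0 / k)                           # mixing weights
    shape = np.linspace(0.5, 3.0, k)                  # initial guesses
    scale = np.quantile(t, np.linspace(0.3, 0.9, k))
    for _ in range(iters):
        # E-step: responsibility of each subpopulation for each failure time.
        dens = np.stack([w[j] * weibull_min.pdf(t, shape[j], scale=scale[j])
                         for j in range(k)])
        r = dens / dens.sum(axis=0, keepdims=True)
        # M-step: weighted Weibull MLE per subpopulation (log-params stay > 0).
        for j in range(k):
            nll = lambda p, rj=r[j]: -np.sum(
                rj * weibull_min.logpdf(t, np.exp(p[0]), scale=np.exp(p[1])))
            res = minimize(nll, np.log([shape[j], scale[j]]), method="Nelder-Mead")
            shape[j], scale[j] = np.exp(res.x)
        w = r.sum(axis=1) / n                         # update mixing weights
    return w, shape, scale

print(em_weibull_mixture(t))
```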
Continuous-variable quantum probes for structured environments
NASA Astrophysics Data System (ADS)
Bina, Matteo; Grasselli, Federico; Paris, Matteo G. A.
2018-01-01
We address parameter estimation for structured environments and suggest an effective estimation scheme based on continuous-variable quantum probes. In particular, we investigate the use of a single bosonic mode as a probe for Ohmic reservoirs, and obtain the ultimate quantum limits to the precise estimation of their cutoff frequency. We assume the probe prepared in a Gaussian state and determine the optimal working regime, i.e., the conditions for the maximization of the quantum Fisher information in terms of the initial preparation, the reservoir temperature, and the interaction time. Upon investigating the Fisher information of feasible measurements, we arrive at a remarkably simple result: homodyne detection of canonical variables allows one to achieve the ultimate quantum limit to precision under suitably mild conditions. Finally, upon exploiting a perturbative approach, we find the invariant sweet spots of the (tunable) characteristic frequency of the probe that drive the probe towards the optimal working regime.
Adaptive AOA-aided TOA self-positioning for mobile wireless sensor networks.
Wen, Chih-Yu; Chan, Fu-Kai
2010-01-01
Location awareness is crucial and becoming increasingly important to many applications in wireless sensor networks. This paper presents a network-based positioning system and outlines recent work in which we have developed an efficient, principled approach to localize a mobile sensor using time of arrival (TOA) and angle of arrival (AOA) information from multiple seeds in the line-of-sight scenario. By receiving the periodic broadcasts from the seeds, the mobile target sensors can obtain adequate observations and localize themselves automatically. The proposed positioning scheme performs location estimation in three phases: (I) AOA-aided TOA measurement, (II) geometrical positioning with a particle filter, and (III) adaptive fuzzy control. Based on the distance measurements and the initial position estimate, an adaptive fuzzy control scheme is applied to solve the localization adjustment problem. The simulations show that the proposed approach provides adaptive flexibility and robust improvement in position estimation.
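The particle-filter and fuzzy-control phases are beyond a short sketch, but the underlying TOA position fix is a nonlinear least-squares problem. A minimal sketch follows, with hypothetical seed layout, noise level, and starting point (the centroid of the seeds as the initial estimate).

```python
import numpy as np
from scipy.optimize import least_squares

# Known seed (anchor) positions and measured TOA ranges (hypothetical values).
seeds = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([37.0, 62.0])
ranges = np.linalg.norm(seeds - true_pos, axis=1) \
         + np.random.default_rng(1).normal(0.0, 0.5, len(seeds))

def residuals(p):
    # Difference between measured ranges and distances to candidate position p.
    return np.linalg.norm(seeds - p, axis=1) - ranges

# Initial estimate (seed centroid) refined by nonlinear least squares.
fit = least_squares(residuals, x0=seeds.mean(axis=0))
print(fit.x)  # close to true_pos
```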
Stereovision-based pose and inertia estimation of unknown and uncooperative space objects
NASA Astrophysics Data System (ADS)
Pesce, Vincenzo; Lavagna, Michèle; Bevilacqua, Riccardo
2017-01-01
Autonomous close proximity operations are an arduous yet attractive problem in space mission design. In particular, the estimation of pose, motion, and inertia properties of an uncooperative object is a challenging task because of the lack of available a priori information. This paper develops a novel method to estimate the relative position, velocity, angular velocity, attitude, and the ratios of the components of the inertia matrix of an uncooperative space object using only stereo-vision measurements. The classical Extended Kalman Filter (EKF) and an Iterated Extended Kalman Filter (IEKF) are used and compared for the estimation procedure. In addition, in order to compute the inertia properties, the ratios of the inertia components are added to the state and a pseudo-measurement equation is considered in the observation model. The relative simplicity of the proposed algorithm could make it suitable for online implementation in real applications. The developed algorithm is validated by numerical simulations in MATLAB using different initial conditions and uncertainty levels. The goal of the simulations is to verify the accuracy and robustness of the proposed estimation algorithm. The obtained results show satisfactory convergence of the estimation errors for all the considered quantities, and in several simulations they show improvements over similar works in the literature that address the same problem. In addition, a video processing procedure is presented to reconstruct the geometrical properties of a body using cameras. This inertia reconstruction algorithm has been experimentally validated at the ADAMUS (ADvanced Autonomous MUltiple Spacecraft) Lab at the University of Florida. In the future, this method could be integrated with the inertia-ratio estimator to form a complete tool for mass properties recognition.
Clement, Matthew; O'Keefe, Joy M; Walters, Brianne
2015-01-01
While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.
Perquin, Magali; Diederich, Nico; Pastore, Jessica; Lair, Marie-Lise; Stranges, Saverio; Vaillant, Michel
2015-01-01
Objectives This study aimed to assess the prevalence of dementia and cognitive complaints in a cross-sectional sample of Luxembourg seniors, and to discuss the results in the societal context of high cognitive reserve resulting from multilingualism. Methods A population sample of 1,377 people representative of Luxembourg residents aged over 64 years was initially identified via the national social insurance register. There were three different levels of contribution: full participation in the study, partial participation, and non-participation. We examined the profiles of these three different samples so that we could infer the prevalence estimates in the Luxembourgish senior population as a whole using the prevalence estimates obtained in this study. Results After careful attention to potential bias and the possibility of underestimation, we considered the obtained prevalence estimates of 3.8% for dementia (with corresponding 95% confidence limits (CL) of 2.8% and 4.8%) and 26.1% for cognitive complaints (CL = [17.8-34.3]) as trustworthy. Conclusion Based on these findings, we postulate that high cognitive reserve may result in surprisingly low prevalence estimates of cognitive complaints and dementia in adults over the age of 64 years, which thereby corroborates the longer disability-free life expectancy observed in the Luxembourg population. To the best of our knowledge, this study is the first to report such Luxembourgish public health data. PMID:26390288
NASA Astrophysics Data System (ADS)
Cho, Hyunjung; Jin, Kyeong Sik; Lee, Jaegeun; Lee, Kun-Hong
2018-07-01
Small angle x-ray scattering (SAXS) was used to estimate the degree of polymerization of polymer-grafted carbon nanotubes (CNTs) synthesized using a 'grafting from' method. This analysis characterizes the grafted polymer chains without cleaving them from the CNTs, and provides reliable data that can complement conventional methods such as thermogravimetric analysis or transmission electron microscopy. Acrylonitrile was polymerized from the surface of the CNTs by redox initiation to produce polyacrylonitrile-grafted CNTs (PAN-CNTs). Polymerization time and the initiation rate were varied to control the degree of polymerization. The radius of gyration (Rg) of the PAN-CNTs was determined using the Guinier plot obtained from SAXS solution analysis. The results showed consistent values according to the polymerization condition, up to a maximum Rg = 125.70 Å, whereas that of pristine CNTs was 99.23 Å. The dispersibility of PAN-CNTs in N,N-dimethylformamide was tested using ultraviolet-visible-near-infrared spectroscopy and was confirmed to increase as the degree of polymerization increased. This analysis will be helpful for estimating the degree of polymerization of any polymer-grafted CNTs synthesized using the 'grafting from' method and for fabricating polymer/CNT composite materials.
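A minimal sketch of the Guinier analysis mentioned above, on a synthetic intensity profile: the slope of ln I(q) versus q² gives Rg via Rg = sqrt(-3 × slope). All numbers are illustrative, not the paper's data.

```python
import numpy as np

# Hypothetical dilute-solution SAXS profile: scattering vector q (1/Å), I(q).
q = np.linspace(0.002, 0.010, 40)
Rg_true = 120.0
I = 1.0e3 * np.exp(-(q * Rg_true) ** 2 / 3.0)

# Guinier law: ln I(q) = ln I0 - (Rg^2 / 3) * q^2, valid roughly for q*Rg < 1.3.
# The mask uses the known Rg here; with real data one iterates on the estimate.
mask = q * Rg_true < 1.3
slope, intercept = np.polyfit(q[mask] ** 2, np.log(I[mask]), 1)
print(f"estimated Rg = {np.sqrt(-3.0 * slope):.1f} Å")
```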
NASA Astrophysics Data System (ADS)
Brenning, A.; Schwinn, M.; Ruiz-Páez, A. P.; Muenchow, J.
2014-03-01
Mountain roads in developing countries are known to increase landslide occurrence due to often inadequate drainage systems and mechanical destabilization of hillslopes by undercutting and overloading. This study empirically investigates landslide initiation frequency along two paved interurban highways in the tropical Andes of southern Ecuador across different climatic regimes. Generalized additive models (GAM) and generalized linear models (GLM) were used to analyze the relationship between mapped landslide initiation points and distance to highway while accounting for topographic, climatic and geological predictors as possible confounders. A spatial block bootstrap was used to obtain non-parametric confidence intervals for the odds ratio of landslide occurrence near the highways (25 m distance) compared to a 200 m distance. The estimated odds ratio was 18-21 with lower 95% confidence bounds > 13 in all analyses. Spatial bootstrap estimation using the GAM supports the higher odds ratio estimate of 21.2 (95% confidence interval: 15.5-25.3). The highway-related effects were observed to fade at about 150 m distance. Road effects appear to be enhanced in geological units characterized by Holocene gravels and Laramide andesite/basalt. Overall, landslide susceptibility was found to be more than one order of magnitude higher in close proximity to paved interurban highways in the Andes of southern Ecuador.
Active media for up-conversion diode-pumped lasers
NASA Astrophysics Data System (ADS)
Tkachuk, Alexandra M.
1996-03-01
In this work, we consider the different methods of populating the initial and final working levels of laser transitions in TR-doped crystals under selective 'up-conversion' and 'avalanche' diode-laser pumping. On the basis of estimates of the rates of competing non-radiative energy-transfer processes, obtained from experimental data and theoretical calculations, we estimated the efficiency of up-conversion pumping and the self-quenching of the upper TR3+ states excited by laser-diode emission. The effects of host composition, dopant concentration, and temperature on the output characteristics and up-conversion processes in YLF:Er; BaY2F8:Er; BaY2F8:Er,Yb and BaY2F8:Yb,Ho are determined.
Estimating Missing Features to Improve Multimedia Information Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagherjeiran, A; Love, N S; Kamath, C
Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
Osmium Isotopic Evolution of the Mantle Sources of Precambrian Ultramafic Rocks
NASA Astrophysics Data System (ADS)
Gangopadhyay, A.; Walker, R. J.
2006-12-01
The Os isotopic composition of the modern mantle, as recorded collectively by ocean island basalts, mid-oceanic ridge basalts (MORB) and abyssal peridotites, is evidently highly heterogeneous (γOs(I) ranging from <−10 to >+25). One important question, therefore, is how and when the Earth's mantle developed such large-scale Os isotopic heterogeneities. Previous Os isotopic studies of ancient ultramafic systems, including komatiites and picrites, have shown that the Os isotopic heterogeneity of the terrestrial mantle can be traced as far back as the late Archean (~2.7-2.8 Ga). This observation is based on the initial Os isotopic ratios obtained for the mantle sources of some of the ancient ultramafic rocks determined through analyses of numerous Os-rich whole-rock and/or mineral samples. In some cases, the closed-system behavior of these ancient ultramafic rocks was demonstrated via the generation of isochrons of precise ages, consistent with those obtained from other radiogenic isotopic systems. Thus, a compilation of the published initial ¹⁸⁷Os/¹⁸⁸Os ratios reported for the mantle sources of komatiitic and picritic rocks is now possible that covers a large range of geologic time, spanning from the Mesozoic (ca. 89 Ma Gorgona komatiites) to the Mid-Archean (e.g., ca. 3.3 Ga Commondale komatiites), which provides a comprehensive picture of the Os isotopic evolution of their mantle sources through geologic time. Several Precambrian komatiite/picrite systems are characterized by suprachondritic initial ¹⁸⁷Os/¹⁸⁸Os ratios (e.g., Belingwe, Kostomuksha, Pechenga). Such long-term enrichments in ¹⁸⁷Os of the mantle sources for these rocks may be explained via recycling of old mafic oceanic crust or incorporation of putative suprachondritic outer core materials entrained into their mantle sources. The relative importance of the two processes for some modern mantle-derived systems (e.g., Hawaiian picrites) is an issue of substantial debate. Importantly, however, the high-precision initial Os isotopic compositions of the majority of ultramafic systems show strikingly uniform initial ¹⁸⁷Os/¹⁸⁸Os ratios, consistent with their derivation from sources that had an Os isotopic evolution trajectory very similar to that of carbonaceous chondrites. In addition, the Os isotopic evolution trajectories of the mantle sources for most komatiites show resolvably lower average Re/Os than that estimated for the Primitive Upper Mantle (PUM), yet significantly higher than that obtained in some estimates for the modern convecting upper mantle, as determined via analyses of abyssal peridotites. One possibility is that most of the komatiites sample mantle sources that are unique relative to the sources of abyssal peridotites and MORB. Previous arguments that komatiites originate via large extents of partial melting of relatively deep upper mantle, or even lower mantle materials, could therefore implicate a source that is different from the convecting upper mantle. If so, this source is remarkably uniform in its long-term Re/Os, and it shows moderate depletion in Re relative to the PUM. Alternatively, if the komatiites are generated within the convective upper mantle through relatively large extents of partial melting, they may provide a better estimate of the Os isotopic composition of the convective upper mantle than that obtained via analyses of MORB, abyssal peridotites and ophiolites.
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm, the CEL-EM, for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM. The novelty of our hybrid algorithm is that it is applicable to estimation of constraint-based models with many constraints and large numbers of parameters, such as the HMM, which are usually estimated with EM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed, using a constraint-based EA and EM, for better estimation of the HMM in ASR. The first uses a traditional constraint-handling mechanism of EA. The other transforms the constrained optimization problem into an unconstrained one using Lagrange multipliers. The fusion strategies of the CEL-EM follow a staged-fusion approach, in which EM is invoked periodically after the EA has run for a specified period, so that the hybrid algorithm retains the global sampling capability of the EA. A variable initialization approach (VIA) using variable segmentation is proposed to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).
Conditional flood frequency and catchment state: a simulation approach
NASA Astrophysics Data System (ADS)
Brettschneider, Marco; Bourgin, François; Merz, Bruno; Andreassian, Vazken; Blaquiere, Simon
2017-04-01
Catchments have memory, and the conditional flood frequency distribution for a time period ahead can be seen as non-stationary: it varies with the catchment state and climatic factors. From a risk management perspective, understanding the link of conditional flood frequency to catchment state is key to anticipating potential periods of higher flood risk. Here, we adopt a simulation approach to explore the link between flood frequency obtained by continuous rainfall-runoff simulation and the initial state of the catchment. The simulation chain is based on i) a three-state rainfall generator applied at the catchment scale, whose parameters are estimated for each month, and ii) the GR4J lumped rainfall-runoff model, whose parameters are calibrated with all available data. For each month, a large number of stochastic realizations of the continuous rainfall generator for the next 12 months are used as inputs for the GR4J model in order to obtain a large number of stochastic discharge realizations for the next 12 months. This process is then repeated for 50 different initial states of the soil moisture reservoir of the GR4J model and for all the catchments. Thus, 50 different conditional flood frequency curves are obtained for the 50 different initial catchment states. We will present an analysis of the link between the catchment states, the period of the year, and the strength of the conditioning of the flood frequency compared to the unconditional flood frequency. A large sample of diverse catchments in France will be used.
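A toy version of the conditioning experiment, assuming a trivially simple wet/dry-day rainfall model and a single linear reservoir in place of the three-state generator and GR4J (so purely illustrative), shows how peak-flow quantiles shift with the initial storage state:

```python
import numpy as np

rng = np.random.default_rng(42)

def peak_flow_maxima(initial_store, n_real=1000, days=30):
    """Toy stand-in for the paper's chain: stochastic daily rainfall feeding
    a single linear reservoir; returns the peak outflow of each realization."""
    maxima = np.empty(n_real)
    for i in range(n_real):
        store, peak = initial_store, 0.0
        for _ in range(days):
            rain = rng.exponential(8.0) if rng.random() < 0.3 else 0.0
            store += rain
            outflow = 0.1 * store          # linear-reservoir drainage
            store -= outflow
            peak = max(peak, outflow)
        maxima[i] = peak
    return maxima

# Conditional "flood frequency": peak-flow quantiles for dry vs. wet initial states.
for s0 in (0.0, 80.0):
    q90 = np.quantile(peak_flow_maxima(s0), 0.9)
    print(f"initial store {s0:5.1f} mm -> 90th-percentile peak flow {q90:.1f} mm/day")
```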
NASA Astrophysics Data System (ADS)
Palanisamy, H.; Cazenave, A. A.
2017-12-01
The global mean sea level budget is revisited over two time periods, the entire altimetry era (1993-2015) and the Argo/GRACE era (2003-2015), using version '0' of the sea level components estimated by the SLBC-CCI teams. SLBC-CCI is a European Space Agency project on sea level budget closure using CCI products. Over the entire altimetry era, the sea level budget was computed as the sum of the steric and mass components, the latter including contributions from total land water storage, glaciers, ice sheets (Greenland and Antarctica), and total water vapor content. Over the Argo/GRACE era, it was computed as the sum of the steric component and GRACE-based ocean mass. Preliminary budget analysis over the altimetry era (1993-2015) yields a trend of 2.83 mm/yr. Compared with the observed altimetry-based global mean sea level trend over the same period (3.03 ± 0.5 mm/yr), this leaves a residual of 0.2 mm/yr. In spite of this residual, the sea level budget result obtained over the altimetry era is very promising, as it has been obtained using version '0' of the sea level components. Furthermore, uncertainties are not yet included in this study, as uncertainty estimation for each sea level component is currently underway. Over the Argo/GRACE era (2003-2015), the trend estimated from the sum of steric and GRACE ocean mass amounts to 2.63 mm/yr, while that observed by satellite altimetry is 3.37 mm/yr, leaving a residual of 0.7 mm/yr. Here an ensemble GRACE ocean mass product (the mean of the available GRACE ocean mass datasets) was used; using individual GRACE datasets results in residuals ranging from 0.5 to 1.1 mm/yr. Investigations are under way to determine the cause of the difference between the observed sea level and the sum of steric and GRACE ocean mass; one main suspect is the impact of GRACE data gaps, since GRACE data are missing for several months since 2011. The current plan of the project is to work towards an accurate closure of the sea level budget using both methodologies above, to provide standardized uncertainty estimates, and to identify the causes of any non-closure of the sea level budget.
NASA Astrophysics Data System (ADS)
Taroni, Paola; Paganoni, Anna Maria; Ieva, Francesca; Pifferi, Antonio; Quarto, Giovanna; Abbate, Francesca; Cassano, Enrico; Cubeddu, Rinaldo
2017-01-01
Several techniques are being investigated as a complement to screening mammography, to reduce its false-positive rate, but results are still insufficient to draw conclusions. This initial study explores time domain diffuse optical imaging as an adjunct method to non-invasively classify malignant vs. benign breast lesions. We estimated differences in tissue composition (oxy- and deoxyhemoglobin, lipid, water, collagen) and absorption properties between the lesion and the average healthy tissue in the same breast, applying a perturbative approach to optical images collected at 7 red/near-infrared wavelengths (635-1060 nm) from subjects bearing breast lesions. The Discrete AdaBoost procedure, a machine-learning algorithm, was then exploited to classify lesions based on optically derived information (either tissue composition or absorption) and risk factors obtained from the patient's anamnesis (age, body mass index, family history, parity, use of oral contraceptives, and use of Tamoxifen). Collagen content, in particular, turned out to be the most important parameter for discrimination. Based on the initial results of this study, the proposed method deserves further investigation.
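For readers who want to reproduce the classification idea, a hedged sketch follows using scikit-learn's AdaBoostClassifier on synthetic stand-ins for the composition and risk-factor features; the label construction deliberately makes the "collagen" column dominant, mimicking the reported finding, and none of the data reflect the study's measurements.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 200
# Columns mimic lesion-vs-healthy differences in oxy-/deoxyhemoglobin, lipid,
# water, collagen, plus two clinical risk factors (all synthetic).
X = rng.normal(size=(n, 7))
y = (X[:, 4] + 0.5 * X[:, 0] + rng.normal(0, 1.0, n) > 0).astype(int)

clf = AdaBoostClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
print("feature importances:", clf.feature_importances_)  # column 4 dominates
```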
NASA Astrophysics Data System (ADS)
Coelho, Flavio Codeço; Carvalho, Luiz Max De
2015-12-01
Quantifying the attack ratio of a disease is key to epidemiological inference and public health planning. For multi-serotype pathogens, however, different levels of serotype-specific immunity make it difficult to assess the population at risk. In this paper we propose a Bayesian method for estimating the attack ratio of an epidemic and the initial fraction of susceptibles using aggregated incidence data. We derive the probability distribution of the effective reproductive number, Rt, and use MCMC to obtain posterior distributions of the parameters of a single-strain SIR transmission model with time-varying force of infection. Our method is showcased on a data set consisting of 18 years of dengue incidence in the city of Rio de Janeiro, Brazil. We demonstrate that it is possible to learn about the initial fraction of susceptibles and the attack ratio even in the absence of serotype-specific data. On the other hand, the information provided by this approach is limited, stressing the need for detailed serological surveys to characterise the distribution of serotype-specific immunity in the population.
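The full time-varying-force-of-infection model is beyond a short example, but the basic dependence of the attack ratio on the initial fraction of susceptibles can be illustrated with the deterministic SIR final-size relation; a minimal sketch, with R0 chosen arbitrarily:

```python
import math
from scipy.optimize import brentq

def attack_ratio(R0: float, s0: float = 1.0) -> float:
    """Solve the deterministic SIR final-size relation
    A = s0 * (1 - exp(-R0 * A)) for the attack ratio A."""
    if R0 * s0 <= 1.0:
        return 0.0  # below the epidemic threshold: no major outbreak
    f = lambda a: a - s0 * (1.0 - math.exp(-R0 * a))
    return brentq(f, 1e-9, 1.0)

# Lower initial susceptible fractions (e.g., from serotype-specific immunity)
# sharply reduce the expected attack ratio for the same R0.
for s0 in (1.0, 0.8, 0.6):
    print(s0, round(attack_ratio(2.0, s0), 3))
```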
On Consistency Test Method of Expert Opinion in Ecological Security Assessment.
Gong, Zaiwu; Wang, Lihong
2017-09-04
Ecological security assessment is of great value in supporting the proactive design and initiative of human security management and safety warning. In the comprehensive evaluation of regional ecological security with the participation of experts, each expert's individual judgment level and ability, together with the consistency of the experts' overall opinion, have a very important influence on the evaluation result. This paper studies consistency measures and consensus measures based on the multiplicative and additive consistency properties of fuzzy preference relations (FPRs). We first propose optimization methods to obtain the optimal multiplicatively consistent and additively consistent FPRs of individual and group judgments, respectively. Then, we put forward a consistency measure computed as the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure computed as the distance between the original collective judgment and the optimal collective estimation. In the end, we present a case study on ecological security for five cities. Results show that the optimal FPRs are helpful in measuring the consistency degree of individual judgments and the consensus degree of collective judgments.
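A hedged sketch of the distance-based consistency measure: it assumes the common additive-consistency construction p_ij = 0.5(w_i − w_j) + 0.5, which may differ in detail from the paper's optimization models, and the preference matrix is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expert's fuzzy preference relation over four alternatives:
# P[i, j] in [0, 1] is the degree to which alternative i is preferred to j.
P = np.array([[0.50, 0.70, 0.60, 0.80],
              [0.30, 0.50, 0.45, 0.65],
              [0.40, 0.55, 0.50, 0.70],
              [0.20, 0.35, 0.30, 0.50]])
n = len(P)

def consistent_fpr(w):
    # Additively consistent FPR induced by priorities w (one common construction):
    # p_ij = 0.5 * (w_i - w_j) + 0.5
    return 0.5 * (w[:, None] - w[None, :]) + 0.5

def distance(w):
    return np.abs(P - consistent_fpr(np.asarray(w))).mean()

# "Optimal consistent estimation": priorities minimizing the distance to the
# original judgment; the residual distance serves as the consistency measure.
res = minimize(distance, x0=np.zeros(n), method="Nelder-Mead")
print("consistency measure (0 = perfectly consistent):", round(res.fun, 4))
```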
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2009-01-01
A method of recovering unknown aberrations in an optical system includes: collecting intensity data produced by the optical system; generating an initial estimate of the phase of the optical system; iteratively performing a phase retrieval on the intensity data to generate a phase estimate, using an initial diversity function corresponding to the intensity data; generating a phase map from the phase-retrieval phase estimate; decomposing the phase map to generate a decomposition vector; generating an updated diversity function by combining the initial diversity function with the decomposition vector; and generating an updated estimate of the phase of the optical system by removing the initial diversity function from the phase map. The method may further include repeating the process, beginning with iteratively performing a phase retrieval on the intensity data, using the updated estimate of the phase in place of the initial estimate and the updated diversity function in place of the initial diversity function, until a predetermined convergence is achieved.
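The patented method builds on iterative, diversity-based phase retrieval; as background, a minimal sketch of the classic Gerchberg-Saxton iteration (the textbook ancestor of such schemes, not the patented algorithm itself) on synthetic data:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=100):
    """Classic Gerchberg-Saxton iteration: recover a pupil phase consistent
    with measured amplitudes in the pupil (source) and focal (target) planes."""
    phase = np.zeros_like(source_amp)            # initial phase estimate
    for _ in range(iters):
        field = source_amp * np.exp(1j * phase)
        far = np.fft.fft2(field)
        # Impose the measured focal-plane amplitude, keep the propagated phase.
        far = target_amp * np.exp(1j * np.angle(far))
        back = np.fft.ifft2(far)
        phase = np.angle(back)                   # updated pupil-phase estimate
    return phase

# Hypothetical 64x64 test: flat pupil amplitude, simulated focal-plane amplitude.
n = 64
pupil = np.ones((n, n))
true_phase = np.fromfunction(
    lambda y, x: 1e-3 * ((x - n / 2) ** 2 - (y - n / 2) ** 2), (n, n))
meas = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))
est = gerchberg_saxton(pupil, meas)
```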
Quantification of soil water retention parameters using multi-section TDR-waveform analysis
NASA Astrophysics Data System (ADS)
Baviskar, S. M.; Heimovaara, T. J.
2017-06-01
Soil water retention parameters are important for describing flow in variably saturated soils. TDR is one of the standard methods for determining the water content of soil samples. In this study, we present an approach to estimate the water retention parameters of a sample that is initially saturated and then subjected to incremental decreases in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample at daily intervals, under hydrostatic conditions assumed for each boundary head. The cumulative discharge outflow drained from the sample is also recorded. The saturated water content is obtained by volumetric analysis after the final drainage step. The equation obtained by coupling the unsaturated parametric function and the apparent dielectric permittivity is fitted to a TDR wave-propagation forward model, and the unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge outflow data are fitted with the cumulative discharge estimated using the unsaturated parametric function, and the weights of water inside the sample at the first and final boundary heads of the multi-step drainage are fitted with the corresponding weights calculated from the unsaturated parametric function. A Bayesian optimization scheme is used to obtain optimized water retention parameters from these different objective functions. This approach can be used for tall samples and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.
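A hedged sketch of the retention-curve fitting step: the paper does not name its unsaturated parametric function, so a van Genuchten form is assumed here, fitted by least squares rather than the paper's Bayesian scheme; the head/water-content pairs are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water retention theta(h) for capillary head h > 0 (van Genuchten form)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Hypothetical (head, water content) pairs, e.g., from TDR sections at
# hydrostatic equilibrium, where head varies with elevation above the table.
h_obs = np.array([5.0, 15.0, 30.0, 50.0, 80.0, 120.0])        # cm
theta_obs = np.array([0.42, 0.40, 0.33, 0.22, 0.12, 0.07])    # cm3/cm3

p0 = [0.05, 0.43, 0.03, 2.0]                                  # initial guess
popt, _ = curve_fit(van_genuchten, h_obs, theta_obs, p0=p0,
                    bounds=([0, 0.2, 1e-4, 1.1], [0.2, 0.6, 1.0, 8.0]))
print(dict(zip(["theta_r", "theta_s", "alpha", "n"], popt)))
```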
Receiver function stacks: initial steps for seismic imaging of Cotopaxi volcano, Ecuador
NASA Astrophysics Data System (ADS)
Bishop, J. W.; Lees, J. M.; Ruiz, M. C.
2017-12-01
Cotopaxi volcano is a large andesitic stratovolcano located within 50 km of the Ecuadorean capital, Quito. In August 2015, Cotopaxi erupted for the first time in 73 years. This eruptive cycle (VEI = 1) featured phreatic explosions and the ejection of an ash column 9 km above the volcano's edifice. Following this event, ash covered approximately 500 km2 of the surrounding area. Analysis of Multi-GAS data suggests that this eruption was fed from a shallow source. However, stratigraphic evidence spanning the last 800 years of Cotopaxi's activity suggests that there may be a deep magmatic source. To establish a geophysical framework for Cotopaxi's activity, receiver functions were calculated from well-recorded earthquakes detected from April 2015 to December 2015 at 9 permanent broadband seismic stations around the volcano. These events were located, and phase arrivals were manually picked. Radial teleseismic receiver functions were then calculated using an iterative deconvolution technique with a Gaussian width of 2.5. A maximum of 200 iterations was allowed in each deconvolution; iterations were stopped when either the maximum iteration number was reached or the percent change fell beneath a predetermined tolerance. Receiver functions were then visually inspected, and traces with anomalous pulses before the initial P arrival, or with later peaks larger than the initial P-correlated pulse, were discarded. Using these data, initial estimates of crustal thickness and slab depth beneath the volcano were obtained, and estimates of the crustal Vp/Vs ratio for the region were also calculated.
NASA Astrophysics Data System (ADS)
Lin, Hou-Yuan; Zhao, Chang-Yin
2018-01-01
The rotational state of Envisat is re-estimated using the specular glint times in optical observation data obtained from 2013 to 2015. The model is simplified to a uniaxial symmetric model with the first-order variation of its angular momentum subject to a gravity-gradient torque causing precession around the normal of the orbital plane. The sense of Envisat's rotation can be derived from observational data, and is found to be opposite to the sense of its orbital motion. The rotational period is estimated to be (120.674 ± 0.068) · exp((4.5095 ± 0.0096) × 10⁻⁴ · t) s, where t is measured in days from the beginning of 2013. The standard deviation is 0.760 s, making this the best fit obtained for Envisat in the literature to date. The results demonstrate that the angle between the angular momentum vector and the negative normal of the orbital plane librates around a mean value of 8.53° ± 0.42° with an amplitude from about 0.7° (in 2013) to 0.5° (in 2015), with the libration period equal to the precession period of the angular momentum, from about 4.8 days (in 2013) to 3.4 days (in 2015). The ratio of the minimum to maximum principal moments of inertia is estimated to be 0.0818 ± 0.0011, and the initial longitude of the angular momentum in the orbital coordinate system is 40.5° ± 9.3°. The direction of the rotation axis derived from our results at September 23, 2013, UTC 20:57 is similar to the results obtained from satellite laser ranging data but about 20° closer to the negative normal of the orbital plane.
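The fitted period model can be evaluated directly; a minimal sketch using the coefficients quoted above (central values only, uncertainties ignored):

```python
import math

def envisat_period_s(days_since_2013: float) -> float:
    """Fitted rotational-period model from the study:
    P(t) = 120.674 * exp(4.5095e-4 * t), t in days from the start of 2013."""
    return 120.674 * math.exp(4.5095e-4 * days_since_2013)

# Spin-down illustration: period at the start of 2013 vs. two years later.
print(envisat_period_s(0.0))     # ~120.7 s
print(envisat_period_s(730.0))   # ~167.7 s
```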
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
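A toy version of the estimation pipeline, using a simple exponential decay of log viral load instead of the paper's full ODE model: nonlinear least squares supplies the starting point, and a random-walk Metropolis sampler produces the posterior; all data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Hypothetical frequently sampled log10 viral load during one decay phase.
t = np.linspace(0.0, 28.0, 60)                      # days after therapy start
true_v0, true_delta = 5.0, 0.35                     # log10 copies/mL, 1/day
model = lambda t, v0, delta: v0 - delta * t / np.log(10)
y = model(t, true_v0, true_delta) + rng.normal(0, 0.15, t.size)

# Initial estimates via nonlinear least squares, as in the study design.
start, _ = curve_fit(model, t, y, p0=[4.0, 0.2])

def log_post(p):
    v0, delta = p
    if delta <= 0:
        return -np.inf                              # flat prior with delta > 0
    resid = y - model(t, v0, delta)
    return -0.5 * np.sum(resid**2) / 0.15**2        # noise s.d. assumed known

# Random-walk Metropolis started from the least-squares point estimate.
chain, cur, lp = [], start.copy(), log_post(start)
for _ in range(10000):
    prop = cur + rng.normal(0, [0.05, 0.01])
    lpp = log_post(prop)
    if np.log(rng.random()) < lpp - lp:
        cur, lp = prop, lpp
    chain.append(cur.copy())
post = np.array(chain[2000:])
print("posterior mean:", post.mean(axis=0), "sd:", post.std(axis=0))
```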
Natarajan, A T; Santos, S J; Darroudi, F; Hadjidikova, V; Vermeulen, S; Chatterjee, S; Berg, M; Grigorova, M; Sakamoto-Hojo, E T; Granath, F; Ramalho, A T; Curado, M P
1998-05-25
The radiation accident in focus here occurred in a section of Goiânia (Brazil), where more than a hundred individuals were contaminated with 137Cs in September 1987. In order to estimate the absorbed radiation doses, initial frequencies of dicentrics and rings were determined in 129 victims [A.T. Ramalho, PhD Thesis, Subsidios a tecnica de dosimetria citogenetica gerados a partir da analise de resultados obtidos com o acidente radiologico de Goiânia, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil, 1992]. We have followed some of these victims cytogenetically over the years, seeking parameters that could be used as a basis for retrospective radiation dosimetry. Our data on translocation frequencies obtained by fluorescence in situ hybridization (FISH) could be directly compared to the baseline frequencies of dicentrics available for those same victims. Our results provided valuable information on how precise these estimates are. The frequencies of translocations observed years after the radiation exposure were two to three times lower than the initial dicentric frequencies, the differences being larger at higher doses (>1 Gy). The accuracy of such dose estimates might be increased by scoring a sufficient number of cells. However, factors such as the persistence of translocation-carrying lymphocytes, translocation levels not proportional to chromosome size, and inter-individual variation reduce the precision of these estimates. Copyright 1998 Elsevier Science B.V. All rights reserved.
Uncertainties and Systematic Effects on the estimate of stellar masses in high z galaxies
NASA Astrophysics Data System (ADS)
Salimbeni, S.; Fontana, A.; Giallongo, E.; Grazian, A.; Menci, N.; Pentericci, L.; Santini, P.
2009-05-01
We discuss the uncertainties and systematic effects that exist in estimates of the stellar masses of high redshift galaxies obtained from broad band photometry, and how they affect the deduced galaxy stellar mass function. For this purpose we use the latest version of the GOODS-MUSIC catalog. In particular, we discuss the impact of different synthetic models, of the assumed initial mass function, and of the selection band. Using Charlot & Bruzual 2007 and Maraston 2005 models we find masses lower than those obtained from Bruzual & Charlot 2003 models. In addition, we find a slight trend as a function of the mass itself when comparing these two mass determinations with that from the Bruzual & Charlot 2003 models. As a consequence, the derived galaxy stellar mass functions show diverse shapes, and their slope depends on the assumed models. Despite these differences, the overall scenario is recovered in all these cases. The masses obtained with the assumption of the Chabrier initial mass function are on average 0.24 dex lower than those from the Salpeter assumption, at all redshifts, causing a shift of the galaxy stellar mass function by the same amount. Finally, using a 4.5 μm-selected sample instead of a Ks-selected one, we add a new population of highly absorbed, dusty galaxies at z ≈ 2-3 of relatively low masses, yielding stronger constraints on the slope of the galaxy stellar mass function at lower masses.
Comparative assessment of techniques for initial pose estimation using monocular vision
NASA Astrophysics Data System (ADS)
Sharma, Sumant; D'Amico, Simone
2016-06-01
This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space-resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has compared different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on the performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
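Initial pose from a minimal feature set is commonly computed with a PnP solver; a hedged sketch with OpenCV's solvePnP (EPnP flag) on synthetic, self-consistent correspondences and assumed intrinsics, illustrative of the problem setting rather than of any specific algorithm assessed in the paper:

```python
import numpy as np
import cv2

# Five hypothetical 3D feature points on the target model (meters).
object_pts = np.array([[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0],
                       [0, 0, 0.5], [0.5, 0.5, 0]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Ground-truth pose used only to synthesize consistent image observations.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.1, -0.05, 2.0])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

# Coarse pose from a minimum number of features, no markers or range data.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # attitude (rotation) of target w.r.t. camera
    print(R, tvec.ravel())
```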
Damughatla, Anirudh R; Raterman, Brian; Sharkey-Toppen, Travis; Jin, Ning; Simonetti, Orlando P; White, Richard D; Kolipaka, Arunark
2015-01-01
To determine how abdominal aortic stiffness estimates obtained using magnetic resonance elastography (MRE) (μMRE) and MRI-based pulse wave velocity (PWV) shear stiffness estimates (μPWV) vary with age in normal volunteers, and to determine the correlation between μMRE and μPWV. In vivo aortic MRE and MRI were performed on 21 healthy volunteers with ages ranging from 18 to 65 years to obtain wave and velocity data along the long axis of the abdominal aorta. The MRE wave images were analyzed to obtain mean stiffness, and the phase-contrast images were analyzed to obtain PWV measurements and indirectly estimate stiffness values from the Moens-Korteweg equation. Both μMRE and μPWV measurements increased with age, demonstrating linear correlations with R² values of 0.81 and 0.67, respectively. A significant difference (P ≤ 0.001) in mean μMRE and μPWV between young and old healthy volunteers was also observed. Furthermore, a weak linear correlation (R² = 0.43) was found between μMRE and μPWV in the initial pool of volunteers. The results of this study indicate linear correlations of both μMRE and μPWV with normal aging of the abdominal aorta, along with significant differences in mean μMRE and μPWV between young and old healthy volunteers. © 2013 Wiley Periodicals, Inc.
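The indirect μPWV estimate rests on the Moens-Korteweg equation, PWV = sqrt(E·h/(2ρr)); inverted for the modulus it gives the sketch below, where the wall thickness, radius, and blood density are illustrative assumptions, not the study's values.

```python
def moens_korteweg_modulus(pwv_m_s, wall_thickness_m, radius_m, rho=1060.0):
    """Invert Moens-Korteweg, PWV = sqrt(E*h / (2*rho*r)), for the elastic
    modulus E (Pa); rho is blood density in kg/m^3."""
    return 2.0 * rho * radius_m * pwv_m_s**2 / wall_thickness_m

# Illustrative aortic values: PWV 5 m/s, wall 2 mm, radius 10 mm.
E = moens_korteweg_modulus(5.0, 2e-3, 10e-3)
print(f"E ~ {E / 1000:.0f} kPa")  # ~265 kPa
```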
Lü, Chun-guang; Wang, Wei-he; Yang, Wen-bo; Tian, Qing-iju; Lu, Shan; Chen, Yun
2015-11-01
A new hyperspectral sensor for total ozone detection is expected to be carried on a geostationary platform in the future, because local tropospheric ozone pollution and the diurnal variation of ozone are receiving increasing attention. Sensors on geostationary satellites frequently acquire images at large observation angles, which places higher demands on total ozone retrieval for these observation geometries. The TOMS V8 algorithm is well developed and widely used by low-orbit ozone sensors, but it still lacks accuracy at large observation geometries, so improving the accuracy of total ozone retrieval remains an urgent problem. Using the moderate-resolution atmospheric transmission code MODTRAN, synthetic UV backscatter radiances in the spectral region from 305 to 360 nm were simulated for clear sky, multiple angles (12 solar zenith angles and view zenith angles), and 26 standard profiles, and the correlation and trends between atmospheric total ozone and backscattered UV radiance were analyzed from the resulting data. Based on these data, a modified initial total ozone estimation model for the TOMS V8 algorithm was constructed to improve the accuracy of the initial total ozone estimate at large observation geometries. The analysis of total ozone and simulated UV backscatter radiance shows that the radiance at 317.5 nm (R₃₁₇.₅) decreases as total ozone rises; at small solar zenith angles (SZA) and fixed total ozone, R₃₁₇.₅ decreases with increasing view zenith angle (VZA), but increases at large SZA. Comparison of the two fitting models shows that, except when both SZA and VZA are large (> 80°), the exponential and logarithmic fitting models both achieve high precision (R² > 0.90), and the precision of both decreases as SZA and VZA rise; in most cases, the precision of the logarithmic fitting model is about 0.9% higher than that of the exponential model. As VZA or SZA increases, the fitting precision gradually falls, with a larger drop at larger VZA or SZA; in addition, the precision of the fitting model exhibits a plateau at small SZA. The modified initial total ozone estimation model (ln(I) vs. Ω) was built on the logarithmic fitting model and compared with the traditional estimation model (I vs. ln(Ω)). The RMSE of both models trends downward as total ozone rises: in the low total ozone region (175-275 DU) the RMSE is clearly higher than in the high region (425-525 DU), with an RMSE peak near 225 DU and a trough near 475 DU. As VZA and SZA increase, the RMSE of both initial estimation models rises overall, the increase being more pronounced for ln(I) vs. Ω. The modified model outperforms the traditional model over the whole total ozone range (RMSE 0.087%-0.537% lower), especially in the low total ozone region and at large observation geometries. The traditional estimation model depends on the precision of the exponential fitting model, while the modified model depends on that of the logarithmic fitting model. The improved estimation accuracy of the modified initial total ozone model extends the application range of the TOMS V8 algorithm. For a sensor carried on a geostationary platform, the modified estimation model can help improve inversion accuracy over a wide spatial and temporal range, and it can support and inform future updates of the TOMS algorithm.
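A minimal sketch of the modified ln(I)-vs-Ω estimation form on synthetic radiances (the coefficients and values are illustrative only): fit a line to ln(I) against total ozone, then invert it to obtain the initial ozone estimate from a measured radiance.

```python
import numpy as np

# Hypothetical (total ozone, radiance) training pairs for one observation geometry.
omega = np.linspace(175, 525, 30)            # total ozone, DU
I = 2.0e13 * np.exp(-3.5e-3 * omega)         # synthetic R317.5-like radiance

# Modified model, ln(I) vs. Omega: fit ln(I) = a*Omega + b, then invert to
# obtain the initial total ozone estimate from a measured radiance.
a, b = np.polyfit(omega, np.log(I), 1)
omega_initial = lambda I_meas: (np.log(I_meas) - b) / a

print(omega_initial(2.0e13 * np.exp(-3.5e-3 * 300.0)))   # ~300 DU
```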
NASA Astrophysics Data System (ADS)
Sumin, V. I.; Smolentseva, T. E.; Belokurov, S. V.; Lankin, O. V.
2018-03-01
In this work, the process of formation of trainee characteristics and their subsequent change is analyzed. The characteristics of trainees were obtained from testing on each section of material in the chosen discipline, and the results obtained during testing were input to a dynamic system. An area of control actions consisting of the elements of the dynamic system is formed. The limit of deterministic predictability of element trajectories in dynamical systems based on local or global attractors is identified. The dimension of the phase space of the dynamic system is determined, which allows the parameters of the initial system to be estimated. On the basis of time series of observations, it is possible to determine the predictability interval of all parameters, which makes it possible to describe the behavior of the system discretely in time. The measure of predictability is then the sum of the positive Lyapunov exponents, which provide a quantitative measure for all elements of the system. The components of an algorithm are identified that allow the correlation dimension of the attractor to be determined from known initial experimental values of the variables. The resulting algorithm makes it possible to study experimentally the dynamics of changes in the trainee's parameters under initial uncertainty.
Derived Born cross sections of e+e‑ annihilation into open charm mesons from CLEO-c measurements
NASA Astrophysics Data System (ADS)
Dong, Xiang-Kun; Wang, Liang-Liang; Yuan, Chang-Zheng
2018-04-01
The exclusive Born cross sections of the production of D0, D+ and Ds+ mesons in e+e‑ annihilation at 13 energy points between 3.970 and 4.260 GeV are obtained by applying corrections for initial state radiation and vacuum polarization to the observed cross sections measured by the CLEO-c experiment. Both the statistical and the systematic uncertainties of the obtained Born cross sections are estimated. Supported in part by the National Natural Science Foundation of China (NSFC) (11235011, 11475187, 11521505, U1632106), the Ministry of Science and Technology of China (2015CB856701), the Key Research Program of Frontier Sciences, CAS (QYZDJ-SSW-SLH011), and the CAS Center for Excellence in Particle Physics (CCEPP).
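A commonly used form of this correction relates the Born and observed cross sections through the ISR correction factor (1 + δ_ISR) and the vacuum-polarization factor |1 − Π(s)|². Conventions for where the vacuum-polarization factor sits differ between analyses, so the relation below is a hedged sketch of one standard convention, not necessarily the exact one used in this paper:

```latex
% One common convention for undressing an observed e+e- cross section;
% the paper's exact convention may differ.
\sigma^{\mathrm{Born}}(s) \;=\;
  \frac{\sigma^{\mathrm{obs}}(s)\,\lvert 1-\Pi(s)\rvert^{2}}{1+\delta_{\mathrm{ISR}}(s)}
```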
Resource Constrained Planning of Multiple Projects with Separable Activities
NASA Astrophysics Data System (ADS)
Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya
In this study we consider a resource-constrained planning problem for multiple projects with separable activities. The problem is to produce a plan for processing the activities given resource availability with time windows. We propose a solution algorithm based on the branch-and-bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with a minimum-slack-time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. In particular, as the number of planned projects increases, the average computation time and the number of searched nodes are reduced.
Quantifying Overdiagnosis in Cancer Screening: A Systematic Review to Evaluate the Methodology.
Ripping, Theodora M; Ten Haaf, Kevin; Verbeek, André L M; van Ravesteyn, Nicolien T; Broeders, Mireille J M
2017-10-01
Overdiagnosis is the main harm of cancer screening programs but is difficult to quantify. This review aims to evaluate existing approaches to estimate the magnitude of overdiagnosis in cancer screening in order to gain insight into the strengths and limitations of these approaches and to provide researchers with guidance to obtain reliable estimates of overdiagnosis in cancer screening. A systematic review was done of primary research studies in PubMed that were published before January 1, 2016, and quantified overdiagnosis in breast cancer screening. The studies meeting inclusion criteria were then categorized by their methods to adjust for lead time and to obtain an unscreened reference population. For each approach, we provide an overview of the data required, assumptions made, limitations, and strengths. A total of 442 studies were identified in the initial search. Forty studies met the inclusion criteria for the qualitative review. We grouped the approaches to adjust for lead time in two main categories: the lead time approach and the excess incidence approach. The lead time approach was further subdivided into the mean lead time approach, lead time distribution approach, and natural history modeling. The excess incidence approach was subdivided into the cumulative incidence approach and early vs late-stage cancer approach. The approaches used to obtain an unscreened reference population were grouped into the following categories: control group of a randomized controlled trial, nonattenders, control region, extrapolation of a prescreening trend, uninvited groups, adjustment for the effect of screening, and natural history modeling. Each approach to adjust for lead time and obtain an unscreened reference population has its own strengths and limitations, which should be taken into consideration when estimating overdiagnosis. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Walker, Rachel A; Andreansky, Christopher; Ray, Madelyn H; McDannald, Michael A
2018-06-01
Childhood adversity is associated with exaggerated threat processing and earlier alcohol use initiation. Conclusive links remain elusive, as childhood adversity typically co-occurs with detrimental socioeconomic factors, and its impact is likely moderated by biological sex. To unravel the complex relationships among childhood adversity, sex, threat estimation, and alcohol use initiation, we exposed female and male Long-Evans rats to early adolescent adversity (EAA). In adulthood, >50 days following the last adverse experience, threat estimation was assessed using a novel fear discrimination procedure in which cues predict a unique probability of footshock: danger (p = 1.00), uncertainty (p = .25), and safety (p = .00). Alcohol use initiation was assessed using voluntary access to 20% ethanol, >90 days following the last adverse experience. During development, EAA slowed body weight gain in both females and males. In adulthood, EAA selectively inflated female threat estimation, exaggerating fear to uncertainty and safety, but promoted alcohol use initiation across sexes. Meaningful relationships between threat estimation and alcohol use initiation were not observed, underscoring the independent effects of EAA. Results isolate the contribution of EAA to adult threat estimation, alcohol use initiation, and reveal moderation by biological sex. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Passive characterization of hydrofracture properties using signals from the hydraulic pumps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rector, J.W. III; Dong, Qichen
1995-12-31
In this study we utilize conical shear wave arrivals recorded in geophone observation wells to characterize a hydrofracture performed in the South Belridge Diatomite oil field. The conical wave arrivals are initially created by the hydraulic pumps on the surface, which send tube waves down the treatment borehole. Since the tube wave velocity in the Diatomite is greater than the formation shear velocity (the shear velocity in the diatomite is about 2,200 ft/s), conical shear waves are radiated into the formation by the tube waves traveling down the treatment borehole. We use the decrease in amplitude of the tube wave as it passes through the fracture zone to image changes in hydraulic conductivity of the fracture. By combining this information with estimates of the fracture height we obtain estimates of fracture width changes over time using the model of Tang and Cheng (1993). We find an excellent qualitative agreement between tube wave attenuation and pump pressure over time. Fracture widths estimated from the Tang and Cheng model appear to be consistent with the volume of injected fluid and the known length of the hydrofracture. Provided a monitor well can be instrumented, this technique holds potential for obtaining a relatively inexpensive real-time characterization of hydrofracs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinkamp, J. A.; Hansen, K. M.; Wilson, J. S.
1976-08-01
This report summarizes results of preliminary experiments to develop cytological and biochemical indicators for estimating damage to respiratory epithelium exposed to toxic agents associated with the by-products of nonnuclear energy production using advanced flow-systems cell-analysis technologies. Since initiation of the program one year ago, progress has been made in obtaining adequate numbers of exfoliated lung cells from the Syrian hamster for flow analysis; cytological techniques developed on human exfoliated gynecological samples have been adapted to hamster lung epithelium for obtaining single-cell suspensions; and lung-cell samples have been initially characterized based on DNA content, total protein, nuclear and cytoplasmic size, and multiangle light-scatter measurements. Preliminary results from measurements of the above parameters which recently became available are described in this report. As the flow-systems technology is adapted further to analysis of exfoliated lung cells, measurements of changes in physical and biochemical cellular properties as a function of exposure to toxic agents will be performed.
Concentration history during pumping from a leaky aquifer with stratified initial concentration
Goode, Daniel J.; Hsieh, Paul A.; Shapiro, Allen M.; Wood, Warren W.; Kraemer, Thomas F.
1993-01-01
Analytical and numerical solutions are employed to examine the concentration history of a dissolved substance in water pumped from a leaky aquifer. Many aquifer systems are characterized by stratification, for example, a sandy layer overlain by a clay layer. To obtain information about separate hydrogeologic units, aquifer pumping tests are often conducted with a well penetrating only one of the layers. When the initial concentration distribution is also stratified (the concentration varies with elevation only), the concentration breakthrough in the pumped well may be interpreted to provide information on aquifer hydraulic and transport properties. To facilitate this interpretation, we present some simple analytical and numerical solutions for limiting cases and illustrate their application to a fractured bedrock/glacial drift aquifer system where the solute of interest is dissolved radon gas. In addition to qualitative information on water source, this method may yield estimates of effective porosity and saturated thickness (or fracture transport aperture) from a single-hole test. Little information about dispersivity is obtained because the measured concentration is not significantly affected by dispersion in the aquifer.
Orr, Raymond; Calhoun, Darren; Noonan, Carolyn; Whitener, Ron; Henderson, Jeff; Goldberg, Jack; Henderson, Patrica Nez
2013-01-01
The consequences of starting smoking by age 18 are significant. Early smoking initiation is associated with higher tobacco dependence, increased difficulty in smoking cessation and more negative health outcomes. The purpose of this study is to examine how closely smoking initiation in a well-defined population of American Indians (AI) resembles that in a Non-Hispanic white (NHW) population born over an 80-year period. We obtained data on age of smoking initiation for 7,073 AIs who were members of 13 tribes in Arizona, Oklahoma and North and South Dakota from the 1988 Strong Heart Study (SHS) and the 2001 Strong Heart Family Study (SHFS), and for 19,747 NHW participants in the 2003 National Health Interview Survey. The participants were born as early as 1904 and as late as 1985. We classified participants according to birth cohort by decade, sex, and, for AIs, location. We estimated the cumulative incidence of smoking initiation by age 18 in each sex and birth cohort group in both AIs and NHWs and used Cox regression to estimate hazard ratios for the association of birth cohort, sex and region with the age at smoking initiation. We found that the cumulative incidence of smoking initiation by age 18 was higher in males than females in all SHS regions and in NHWs (p < 0.001). Our results show significant regional variation in age of initiation in the SHS (p < 0.001). Our data showed that not all AIs in this sample showed similar trends toward earlier smoking initiation. For instance, Oklahoma SHS male participants born in the 1980s initiated smoking before age 18 less often than those born before 1920, by a ratio of 0.7. The results showed significant variation in age of initiation across sex, birth cohort, and location. Our preliminary analyses suggest that AI smoking trends are not uniform across region or gender but are likely shaped by local context. If tobacco prevention and control programs depend in part on addressing the origin of AI smoking, it may be helpful to increase awareness of regional differences. PMID:23644825
Obtaining Crack-free WC-Co Alloys by Selective Laser Melting
NASA Astrophysics Data System (ADS)
Khmyrov, R. S.; Safronov, V. A.; Gusarov, A. V.
Standard hardmetals of the WC-Co system are brittle and often crack during selective laser melting (SLM). The objective of this study is to estimate the range of WC/Co ratios in which cracking can be avoided. Micron-sized Co powder was mixed with WC nanopowder in a ball mill to obtain a uniform distribution of WC over the surface of the Co particles. Continuous layers of remelted material on the surface of a hardmetal plate were obtained from this composite powder by SLM at 1.07 μm wavelength. The layers have satisfactory porosity and are well bound to the substrate. The chemical composition of the layers matches the composition of the initial powder mixtures. The powder mixture with 25 wt.% WC can be used for SLM to obtain materials without cracks. The powder mixture with 50 wt.% WC cracks because of the formation of the brittle W3Co3C phase. Cracking can considerably reduce the mechanical strength, so the use of this composition is not advised.
Taran, Iu A; Cihpev, K K; Stroganov, L B
1977-01-01
The kinetics of a model reaction between oligomeric planar lattice-model chains has been studied by the Monte Carlo method. The chain motion was simulated using the Verdier-Stockmayer rules. The chain length was varied from 8 to 24 beads. The probability of breaking a contact between two chains was given by w = exp(−U); the formation of an adjacent contact was controlled by chain mobility. The probability of forming an isolated contact was given by w₀ = exp(−U₀). Kinetic curves were obtained for the mean number of contacts Z(t) under different initial conditions and values of U and U₀. Estimates of the mean rates of contact formation and breaking (V+ and V−) and their dependences on time, U and U₀ were obtained. Rate constants for the formation and breaking of a contact (k+ and k−) were estimated, as well as the distribution of k± over the states of the binary complex. The calculations were made for the case of homopolymers; intrachain interactions were omitted.
Continued observations of the H Ly alpha emission from Uranus
NASA Technical Reports Server (NTRS)
Clarke, J.; Durrance, S.; Moos, W.; Murthy, J.; Atreya, S.; Barnes, A.; Mihalov, J.; Belcher, J.; Festou, M.; Imhoff, C.
1986-01-01
Observations of Uranus obtained over four years with the IUE Observatory support the initial identification of a bright H Ly alpha flux which varies independently of the solar H Ly alpha flux, implying a largely self-excited emission. An average brightness of 1400 Rayleighs is derived, and limits for the possible contribution by reflected solar H Ly alpha emission, estimated to be about 200 Rayleighs, suggest that the remaining self-excited emission is produced by an aurora. Based on comparison with solar wind measurements obtained in the vicinity of Uranus by Voyager 2 and Pioneer 11, no evidence for correlation between the solar wind density and the H Ly alpha brightness is found. The upper limit to H2 emission gives a lower limit to the ratio of H Ly alpha/H2 emissions of about 2.4, suggesting that the precipitating particles may be significantly less energetic on Uranus than those responsible for the aurora on Jupiter. The average power in precipitating particles is estimated to be of the order of 10 to the 12th W.
Almalik, Osama; Nijhuis, Michiel B; van den Heuvel, Edwin R
2014-01-01
Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear regression analysis combining all data from multiple storage conditions was recently proposed in the literature when variances are homogeneous across storage conditions. The combined analysis is expected to perform better than the separate analysis per storage condition, since pooling data would lead to an improved estimate of the variation and higher numbers of degrees of freedom, but this is not evident for shelf-life estimation. Indeed, the two approaches treat the observed initial batch results, the intercepts in the model, and poolability of batches differently, which may eliminate or reduce the expected advantage of the combined approach with respect to the separate approach. Therefore, a simulation study was performed to compare the distribution of simulated shelf-life estimates on several characteristics between the two approaches and to quantify the difference in shelf-life estimates. In general, the combined statistical analysis does estimate the true shelf life more consistently and precisely than the analysis per storage condition, but it did not outperform the separate analysis in all circumstances.
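To make the shared core of both approaches concrete, here is a minimal sketch of an ICH Q1E-style shelf-life computation for one storage condition: fit a linear regression to pooled batch data and find where the one-sided 95% confidence bound for the mean response crosses the specification limit. The simulated data, the spec limit, and the omission of batch-poolability (ANCOVA) testing are simplifying assumptions.

```python
import numpy as np
from scipy import stats

def shelf_life(t, y, spec, conf=0.95):
    """Shelf life as the time at which the one-sided confidence bound for
    the mean regression line crosses `spec`. Assumes a decreasing
    attribute (e.g. assay %) against a lower specification limit."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t)
    b, a = np.polyfit(t, y, 1)                  # slope, intercept
    resid = y - (a + b * t)
    s = np.sqrt(resid @ resid / (n - 2))        # residual standard error
    tcrit = stats.t.ppf(conf, n - 2)
    grid = np.linspace(0, 5 * t.max(), 2001)    # search horizon (months)
    half = tcrit * s * np.sqrt(1 / n + (grid - t.mean()) ** 2
                               / np.sum((t - t.mean()) ** 2))
    lower = (a + b * grid) - half               # lower bound of the mean
    below = np.nonzero(lower < spec)[0]
    return grid[below[0]] if below.size else np.inf

# Three batches pooled at one storage condition (months, assay %)
t = np.tile([0, 3, 6, 9, 12, 18], 3)
rng = np.random.default_rng(1)
y = 100.5 - 0.35 * t + rng.normal(0, 0.4, t.size)
print(shelf_life(t, y, spec=95.0))
```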
NASA Astrophysics Data System (ADS)
Le, Nam Q.
2018-05-01
We obtain the Hölder regularity of time derivative of solutions to the dual semigeostrophic equations in two dimensions when the initial potential density is bounded away from zero and infinity. Our main tool is an interior Hölder estimate in two dimensions for an inhomogeneous linearized Monge-Ampère equation with right hand side being the divergence of a bounded vector field. As a further application of our Hölder estimate, we prove the Hölder regularity of the polar factorization for time-dependent maps in two dimensions with densities bounded away from zero and infinity. Our applications improve previous work by G. Loeper who considered the cases of densities sufficiently close to a positive constant.
And the first one now will later be last: Time-reversal in cormack-jolly-seber models
Nichols, James D.
2016-01-01
The models of Cormack, Jolly and Seber (CJS) are remarkable in providing a rich set of inferences about population survival, recruitment, abundance and even sampling probabilities from a seemingly limited data source: a matrix of 1's and 0's reflecting animal captures and recaptures at multiple sampling occasions. Survival and sampling probabilities are estimated directly in CJS models, whereas estimators for recruitment and abundance were initially obtained as derived quantities. Various investigators have noted that just as standard modeling provides direct inferences about survival, reversing the time order of capture history data permits direct modeling and inference about recruitment. Here we review the development of reverse-time modeling efforts, emphasizing the kinds of inferences and questions to which they seem well suited.
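The mechanics of the reverse-time trick are simple enough to show directly: reversing the column order of the capture-history matrix turns a standard-time CJS analysis of survival into a reverse-time analysis of seniority (recruitment). A minimal illustration with a made-up matrix:

```python
import numpy as np

# Capture-history matrix: rows = animals, columns = sampling occasions.
ch = np.array([[0, 1, 1, 0, 1],
               [1, 0, 1, 1, 0],
               [0, 0, 1, 0, 1]])

# Reverse the time order of the occasions: a standard-time CJS model fit
# to `ch_rev` yields seniority (recruitment) parameters instead of
# survival parameters.
ch_rev = ch[:, ::-1]
```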
NASA Astrophysics Data System (ADS)
Rahman, P. A.; D'K Novikova Freyre Shavier, G.
2018-03-01
This paper is devoted to the analysis of the mean time to data loss of redundant RAID-6 disk arrays with alternation of data, considering different disk failure rates in the normal, degraded and rebuild states of the array, as well as a nonzero disk replacement time. The reliability model developed by the authors on the basis of a Markov chain is presented, together with the calculation formula obtained for estimating the mean time to data loss (MTTDL) of RAID-6 disk arrays. Finally, a technique for estimating the initial reliability parameters is given, with examples of MTTDL calculations for RAID-6 arrays with different numbers of disks.
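As a hedged sketch of this kind of reliability computation (the state structure and all rates below are illustrative assumptions, not the authors' model), one can build the generator of a continuous-time Markov chain over the array states and obtain the MTTDL as the mean time to absorption in the data-loss state:

```python
import numpy as np

# Transient states: 0, 1 or 2 failed disks; a third failure (data loss)
# is the absorbing state.
n = 10                                 # disks in the array (illustrative)
lam0, lam1, lam2 = 1e-5, 2e-5, 4e-5    # per-disk failure rates (1/h) in the
                                       # normal, degraded and rebuild states
mu1, mu2 = 1 / 30.0, 1 / 40.0          # rebuild rates, including the disk
                                       # replacement time (1/h)

# Generator restricted to the transient states; the unbalanced rate in the
# last row is the flow into the absorbing data-loss state.
Q = np.array([
    [-n * lam0,  n * lam0,                     0.0],
    [mu1,       -(mu1 + (n - 1) * lam1),       (n - 1) * lam1],
    [0.0,        mu2,                          -(mu2 + (n - 2) * lam2)],
])

# Mean hitting times of data loss satisfy Q t = -1 for a CTMC.
t = np.linalg.solve(Q, -np.ones(3))
print(f"MTTDL from a fresh array: {t[0]:.3e} hours")
```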
An interactive program for pharmacokinetic modeling.
Lu, D R; Mao, F
1993-05-01
A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C language for the high-level user interface of the Macintosh operating system, with the intention of providing a user-friendly tool for Macintosh users. An interactive algorithm based on the exponential stripping method is used for initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method using a χ² criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs currently available for IBM PC-compatible and other types of computers.
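A minimal sketch of the same two-stage idea, assuming a biexponential disposition model: curve stripping supplies initial estimates, which a Levenberg-Marquardt least-squares fit then refines (scipy's curve_fit uses LM for unbounded problems). PharmK's actual interface and likelihood weighting are not reproduced here; the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, alpha, B, beta):
    """Two-exponential concentration-time model."""
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

def strip_initial(t, c, n_tail=4):
    """Exponential stripping: log-linear fit of the terminal phase, then of
    the early-phase residual, to seed the nonlinear fit."""
    k2, lnB = np.polyfit(t[-n_tail:], np.log(c[-n_tail:]), 1)
    B, beta = np.exp(lnB), -k2
    resid = c - B * np.exp(-beta * t)
    mask = (resid > 0) & (t < t[-n_tail])      # early points only
    k1, lnA = np.polyfit(t[mask], np.log(resid[mask]), 1)
    return np.exp(lnA), -k1, B, beta

t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24.0])
rng = np.random.default_rng(2)
c = biexp(t, 8.0, 1.2, 2.0, 0.1) * (1 + 0.03 * rng.normal(size=t.size))

p0 = strip_initial(t, c)                       # stripping estimates
popt, pcov = curve_fit(biexp, t, c, p0=p0)     # Levenberg-Marquardt refine
print(popt)
```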
NASA Astrophysics Data System (ADS)
Holmgren, J.; Tulldahl, H. M.; Nordlöf, J.; Nyström, M.; Olofsson, K.; Rydell, J.; Willén, E.
2017-10-01
A system was developed for automatic estimation of tree positions and stem diameters. The sensor trajectory was first estimated using a positioning system that consists of a low-precision inertial measurement unit supported by image matching with data from a stereo camera. The initial estimate of the sensor trajectory was then calibrated by adjusting the sensor pose using the laser scanner data. Special features suitable for forest environments were used to solve the correspondence and matching problems. Tree stem diameters were estimated for stem sections using laser data from individual scanner rotations and were then used for calibration of the sensor pose. A segmentation algorithm was used to associate stem sections with individual tree stems. The stem diameter estimates of all stem sections associated with the same tree stem were then combined to estimate the stem diameter at breast height (DBH). The system was validated on four 20 m radius circular plots, and manually measured trees were automatically linked to trees detected in the laser data. DBH could be estimated with an RMSE of 19 mm (6 %) and a bias of 8 mm (3 %). The calibrated sensor trajectory and the combined use of circle fits from individual scanner rotations made it possible to obtain reliable DBH estimates even with a low-precision positioning system.
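The per-rotation stem-diameter step can be illustrated with a standard algebraic (Kasa) least-squares circle fit to the 2-D points of a stem section; whether the authors used this particular fit is an assumption, and the synthetic arc below is purely illustrative. Per-rotation diameters for one stem would then be combined (e.g. averaged) into the DBH estimate.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to 2-D points.
    Returns the centre (cx, cy) and the diameter."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c = np.linalg.lstsq(A, b, rcond=None)[0]   # x^2+y^2 = c0*x + c1*y + c2
    cx, cy = c[0] / 2, c[1] / 2
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return (cx, cy), 2 * r

# Noisy arc of a 0.30 m diameter stem, seen from one side only
rng = np.random.default_rng(3)
ang = rng.uniform(-0.6 * np.pi, 0.6 * np.pi, 120)
pts = np.column_stack([
    0.15 * np.cos(ang) + 2.0 + rng.normal(0, 0.004, 120),
    0.15 * np.sin(ang) + 5.0 + rng.normal(0, 0.004, 120),
])
print(fit_circle(pts))
```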
Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates
NASA Astrophysics Data System (ADS)
Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry
2018-01-01
Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ
Rigatos, Gerasimos G; Rigatou, Efthymia G; Djida, Jean Daniel
2015-10-01
A method for early diagnosis of parametric changes in intracellular protein synthesis models (e.g. the p53 protein - mdm2 inhibitor model) is developed with the use of a nonlinear Kalman filtering approach (derivative-free nonlinear Kalman filter) and of statistical change detection methods. The intracellular protein synthesis dynamic model is described by a set of coupled nonlinear differential equations. It is shown that such a dynamical system satisfies differential flatness properties, which allows it to be transformed, through a change of variables (diffeomorphism), to the so-called linear canonical form. For the linearized equivalent of the dynamical system, state estimation can be performed using the Kalman filter recursion. Moreover, by applying an inverse transformation based on the previous diffeomorphism, it also becomes possible to obtain estimates of the state variables of the initial nonlinear model. By comparing the output of the Kalman filter (which is assumed to correspond to the undistorted dynamical model) with measurements obtained from the monitored protein synthesis system, a sequence of differences (residuals) is obtained. Statistical processing of the residuals with χ² change detection tests can provide an indication, within specific confidence intervals, of parametric changes in the considered biological system and consequently of the appearance of specific diseases (e.g. malignancies).
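The residual-processing step can be sketched as a windowed χ² test: under the fault-free hypothesis the sum of squared normalized residuals follows a χ² distribution with one degree of freedom per sample, so a statistic leaving a chosen confidence interval signals a parametric change. The window length, noise level and confidence level below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def chi2_change_test(residuals, sigma, alpha=0.98):
    """Chi-square test on a window of filter residuals: flags a change
    when the normalized sum of squares leaves the confidence interval."""
    r = np.asarray(residuals, float)
    stat = np.sum((r / sigma) ** 2)        # ~ chi2(len(r)) if fault-free
    lo, hi = chi2.interval(alpha, df=len(r))
    return stat, not (lo <= stat <= hi)

# Residuals between the filter output and the measured concentrations
rng = np.random.default_rng(4)
clean = rng.normal(0, 0.05, 50)
drift = clean + 0.12                        # a parametric change adds bias
print(chi2_change_test(clean, 0.05))        # statistic near 50, no alarm
print(chi2_change_test(drift, 0.05))        # inflated statistic, alarm
```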
Iterative initial condition reconstruction
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias
2017-07-01
Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z =0 , we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤0.35 h Mpc-1 . The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤0.2 h Mpc-1 , and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z =0 and by a factor of 2.5 at z =0.6 , improving standard BAO reconstruction by 70% at z =0 and 30% at z =0.6 , and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
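A single step of such an algorithm can be sketched as follows, under the convention δ = −∇·ψ so that ψ_k = i k δ_k / k², with Gaussian smoothing; moving the objects by −ψ, re-estimating δ and shrinking the smoothing scale would then be iterated. This is a schematic of the general technique, not the authors' exact pipeline (the second-order correction and the particle deposition step are omitted).

```python
import numpy as np

def estimate_displacement(delta, boxsize, smooth):
    """Displacement field psi from a density-contrast grid, under the
    convention delta = -div(psi): psi_k = i k delta_k / k^2, with a
    Gaussian smoothing window of scale `smooth`."""
    n = delta.shape[0]
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                       # avoid division by zero at k=0
    dk = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smooth**2)
    return np.stack([np.fft.ifftn(1j * kvec / k2 * dk).real
                     for kvec in (kx, ky, kz)])   # shape (3, n, n, n)

# One iteration on a toy grid: objects would be shifted by -psi evaluated
# at their positions, the density re-deposited, and `smooth` reduced.
rng = np.random.default_rng(0)
delta = rng.normal(0, 0.1, (32, 32, 32))
psi = estimate_displacement(delta, boxsize=500.0, smooth=20.0)
print(psi.shape)
```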
Parametric Modeling as a Technology of Rapid Prototyping in Light Industry
NASA Astrophysics Data System (ADS)
Tomilov, I. N.; Grudinin, S. N.; Frolovsky, V. D.; Alexandrov, A. A.
2016-04-01
The paper deals with a parametric modeling method for virtual mannequins for the purposes of design automation in the clothing industry. The described approach includes the steps of generating the basic model on the basis of the initial one (obtained by 3D scanning), its parameterization and deformation. Complex surfaces are represented by a wireframe model. The modeling results are evaluated with a set of similarity factors, and deformed models are compared with their virtual prototypes. The results of modeling are assessed by the standard deviation factor.
Improved Estimates of Temporally Coherent Internal Tides and Energy Fluxes from Satellite Altimetry
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Chao, Benjamin F. (Technical Monitor)
2002-01-01
Satellite altimetry has opened a surprising new avenue to observing internal tides in the open ocean. The tidal surface signatures are very small, a few cm at most, but in many areas they are robust, owing to averaging over many years. By employing a simplified two dimensional wave fitting to the surface elevations in combination with climatological hydrography to define the relation between the surface height and the current and pressure at depth, we may obtain rough estimates of internal tide energy fluxes. Initial results near Hawaii with Topex/Poseidon (T/P) data show good agreement with detailed 3D (three dimensional) numerical models, but the altimeter picture is somewhat blurred owing to the widely spaced T/P tracks. The resolution may be enhanced somewhat by using data from the ERS-1 (ESA (European Space Agency) Remote Sensing) and ERS-2 satellite altimeters. The ERS satellite tracks are much more closely spaced (0.72 deg longitude vs. 2.83 deg for T/P), but the tidal estimates are less accurate than those for T/P. All altimeter estimates are also severely affected by noise in regions of high mesoscale variability, and we have obtained some success in reducing this contamination by employing a prior correction for mesoscale variability based on ten day detailed sea surface height maps developed by Le Traon and colleagues. These improvements allow us to more clearly define the internal tide surface field and the corresponding energy fluxes. Results from throughout the global ocean will be presented.
Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.
Frick, Eric; Rahmatalla, Salam
2018-04-04
The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.
SN 2012ec: mass of the progenitor from PESSTO follow-up of the photospheric phase
NASA Astrophysics Data System (ADS)
Barbarino, C.; Dall'Ora, M.; Botticella, M. T.; Della Valle, M.; Zampieri, L.; Maund, J. R.; Pumo, M. L.; Jerkstrand, A.; Benetti, S.; Elias-Rosa, N.; Fraser, M.; Gal-Yam, A.; Hamuy, M.; Inserra, C.; Knapic, C.; LaCluyze, A. P.; Molinaro, M.; Ochner, P.; Pastorello, A.; Pignata, G.; Reichart, D. E.; Ries, C.; Riffeser, A.; Schmidt, B.; Schmidt, M.; Smareglia, R.; Smartt, S. J.; Smith, K.; Sollerman, J.; Sullivan, M.; Tomasella, L.; Turatto, M.; Valenti, S.; Yaron, O.; Young, D.
2015-04-01
We present the results of a photometric and spectroscopic monitoring campaign of SN 2012ec, which exploded in the spiral galaxy NGC 1084, during the photospheric phase. The photometric light curve exhibits a plateau with luminosity L = 0.9 × 1042 erg s-1 and duration ˜90 d, which is somewhat shorter than standard Type II-P supernovae (SNe). We estimate the nickel mass M(56Ni) = 0.040 ± 0.015 M⊙ from the luminosity at the beginning of the radioactive tail of the light curve. The explosion parameters of SN 2012ec were estimated from the comparison of the bolometric light curve and the observed temperature and velocity evolution of the ejecta with predictions from hydrodynamical models. We derived an envelope mass of 12.6 M⊙, an initial progenitor radius of 1.6 × 1013 cm and an explosion energy of 1.2 foe. These estimates agree with an independent study of the progenitor star identified in pre-explosion images, for which an initial mass of M = 14-22 M⊙ was determined. We have applied the same analysis to two other Type II-P SNe (SNe 2012aw and 2012A), and carried out a comparison with the properties of SN 2012ec derived in this paper. We find a reasonable agreement between the masses of the progenitors obtained from pre-explosion images and masses derived from hydrodynamical models. We estimate the distance to SN 2012ec with the standardized candle method (SCM) and compare it with other estimates based on other primary and secondary indicators. SNe 2012A, 2012aw and 2012ec all follow the standard relations for the SCM for the use of Type II-P SNe as distance indicators.
NASA Astrophysics Data System (ADS)
Durán-Barroso, Pablo; González, Javier; Valdés, Juan B.
2016-04-01
Rainfall-runoff quantification is one of the most important tasks in both engineering and watershed management, as it allows watershed response to be identified, forecast and explained. For that purpose, the Natural Resources Conservation Service Curve Number method (NRCS CN) is the most widely recognized conceptual lumped model in the field of rainfall-runoff estimation. There is still an ongoing discussion about the procedure for determining the portion of rainfall retained in the watershed before runoff is generated, called the initial abstraction. This quantity is computed as a ratio (λ) of the watershed's potential maximum soil retention S. Initially this ratio was assumed to be 0.2, but it has later been proposed to change it to 0.05. However, existing procedures for converting NRCS CN model parameters obtained under a different hypothesis about λ do not incorporate any adaptation to the climatic conditions of each watershed. For this reason, we propose a new simple method for computing model parameters that is adapted to local conditions, taking into account regional patterns of climate. After checking the goodness of this procedure against existing ones in 34 different watersheds located in Ohio and Texas (United States), we conclude that the new methodology is a more accurate and efficient alternative for refitting the initial abstraction ratio.
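For reference, the NRCS-CN runoff equation that both parameter conventions feed into can be written in a few lines (SI units, depths in mm). Note that simply changing λ while keeping CN fixed, as in the second call below, is precisely the naive conversion the paper argues against: S (or CN) must be refit when λ changes, which is what the proposed climate-aware procedure provides.

```python
import numpy as np

def runoff(P, CN, lam=0.20):
    """NRCS-CN direct runoff depth Q (mm) for rainfall depth P (mm),
    with initial abstraction Ia = lam * S."""
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = lam * S
    P = np.asarray(P, float)
    return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

print(runoff([20, 50, 100], CN=75, lam=0.20))
print(runoff([20, 50, 100], CN=75, lam=0.05))   # naive, un-refit conversion
```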
NASA Technical Reports Server (NTRS)
Hamer, H. A.; Johnson, K. G.
1986-01-01
An analysis was performed to determine the effects of model error on the control of a large flexible space antenna. Control was achieved by employing two three-axis control-moment gyros (CMG's) located on the antenna column. State variables were estimated by including an observer in the control loop that used attitude and attitude-rate sensors on the column. Errors were assumed to exist in the individual model parameters: modal frequency, modal damping, mode slope (control-influence coefficients), and moment of inertia. Their effects on control-system performance were analyzed either for (1) nulling initial disturbances in the rigid-body modes, or (2) nulling initial disturbances in the first three flexible modes. The study includes the effects on stability, time to null, and control requirements (defined as maximum torque and total momentum), as well as on the accuracy of obtaining initial estimates of the disturbances. The effects on the transients of the undisturbed modes are also included. The results, which are compared for decoupled and linear quadratic regulator (LQR) control procedures, are shown in tabular form, parametric plots, and as sample time histories of modal-amplitude and control responses. Results of the analysis showed that the effects of model errors on the control-system performance were generally comparable for both control procedures. The effect of mode-slope error was the most serious of all model errors.
Bor, Jacob; Tanser, Frank; Newell, Marie-Louise; Bärnighausen, Till
2012-07-01
Antiretroviral therapy for HIV may have important economic benefits for patients and their households. We quantified the impact of HIV treatment on employment status among HIV patients in rural South Africa who were enrolled in a public-sector HIV treatment program supported by the President's Emergency Plan for AIDS Relief. We linked clinical data from more than 2,000 patients in the treatment program with ten years of longitudinal socioeconomic data from a complete community-based population cohort of more than 30,000 adults residing in the clinical catchment area. We estimated the employment effects of HIV treatment in fixed-effects regressions. Four years after the initiation of antiretroviral therapy, employment among HIV patients had recovered to about 90 percent of baseline rates observed in the same patients three to five years before they started treatment. Many patients initiated treatment early enough that they were able to avoid any loss of employment due to HIV. These results represent the first estimates of employment recovery among HIV patients in a general population, relative to the employment levels that these patients had prior to job-threatening HIV illness and the decision to seek care. There are large economic benefits to HIV treatment. For some patients, further gains could be obtained from initiating antiretroviral therapy earlier, prior to HIV-related job loss.
NASA Astrophysics Data System (ADS)
Salama, Paul
2008-02-01
Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.
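A stripped-down sketch of the per-pixel MAP rule described here: given a finite set of candidate intensities with a prior mass function, each observed Poisson count is mapped to the candidate maximizing prior times likelihood. In the paper the pmf is itself estimated by constrained least squares from the image histogram; here it is assumed known, and all values are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def map_denoise(img, prior_vals, prior_pmf):
    """Per-pixel MAP estimate under Poisson noise: for each observed count
    y, pick the candidate intensity v maximizing prior(v) * Poisson(y|v)."""
    y = np.asarray(img)
    logpost = (np.log(prior_pmf)[None, :]
               + poisson.logpmf(y.ravel()[:, None], prior_vals[None, :]))
    best = prior_vals[np.argmax(logpost, axis=1)]
    return best.reshape(y.shape)

# Prior mass function assumed known (in practice: from the histogram of
# the acquired image); synthetic low-count test image.
rng = np.random.default_rng(5)
truth = rng.choice([1.0, 3.0, 8.0], size=(64, 64), p=[0.6, 0.3, 0.1])
img = rng.poisson(truth)
vals = np.array([1.0, 3.0, 8.0])
pmf = np.array([0.6, 0.3, 0.1])
print(map_denoise(img, vals, pmf)[:2, :5])
```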
Dynamics of a black-capped chickadee population, 1958-1983
Loery, G.; Nichols, J.D.
1985-01-01
The dynamics of a wintering population of Black-capped Chickadees (Parus atricapillus) were studied from 1958-1983 using capture-recapture methods. The Jolly-Seber model was used to obtain annual estimates of population size, survival rate, and recruitment. The average estimated population size over this period was ≈160 birds. The average estimated number of new birds entering the population each year and alive at the time of sampling was ≈57. The arithmetic mean annual survival rate estimate was ≈0.59. We tested hypotheses about possible relationships between these population parameters and (1) the natural introduction of Tufted Titmice (Parus bicolor) to the area, (2) the clear-cutting of portions of nearby red pine (Pinus resinosa) plantations, and (3) natural variations in winter temperatures. The chickadee population exhibited a substantial short-term decline following titmouse establishment, produced by decreases in both survival rate and number of new recruits. Survival rate declined somewhat after the initiation of the pine clear-cutting, but population size was very similar before and after clear-cutting. Weighted least squares analyses provided no evidence of a relationship between survival rate and either of two winter temperature variables.
NASA Astrophysics Data System (ADS)
Fontaine, G.; Brassard, P.; Dufour, P.; Tremblay, P.-E.
2015-06-01
The accretion-diffusion picture is the model par excellence for describing the presence of planetary debris polluting the atmospheres of relatively cool white dwarfs. Some important insights into the process may be derived using an approximate approach which combines static stellar models with estimates of diffusion timescales at the base of the outer convection zone or, in its absence, at the photosphere. Until recently, and to our knowledge, values of diffusion timescales in white dwarfs have all been obtained on the basis of the same physics as that developed initially by Paquette et al., including their diffusion coefficients and thermal diffusion coefficients. In view of the recent exciting discoveries of a plethora of metals (including some never seen before) polluting the atmospheres of an increasing number of cool white dwarfs, we felt that a new look at the estimates of settling timescales would be worthwhile. We thus provide improved estimates of diffusion timescales for all 27 elements from Li to Cu in the periodic table in a wide range of the surface gravity-effective temperature domain and for both DA and non-DA stars.
Recovering the 3d Pose and Shape of Vehicles from Stereo Images
NASA Astrophysics Data System (ADS)
Coenen, M.; Rottensteiner, F.; Heipke, C.
2018-05-01
The precise reconstruction and pose estimation of vehicles plays an important role, e.g. for autonomous driving. We tackle this problem on the basis of street level stereo images obtained from a moving vehicle. Starting from initial vehicle detections, we use a deformable vehicle shape prior learned from CAD vehicle data to fully reconstruct the vehicles in 3D and to recover their 3D pose and shape. To fit a deformable vehicle model to each detection by inferring the optimal parameters for pose and shape, we define an energy function leveraging reconstructed 3D data, image information, the vehicle model and derived scene knowledge. To minimise the energy function, we apply a robust model fitting procedure based on iterative Monte Carlo model particle sampling. We evaluate our approach using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012). Our approach can deal with very coarse pose initialisations and we achieve encouraging results with up to 82 % correct pose estimations. Moreover, we are able to deliver very precise orientation estimation results with an average absolute error smaller than 4°.
Duck nest success in the prairie pothole region
Klett, A.T.; Shaffer, T.L.; Johnson, D.H.
1988-01-01
We estimated nest success of mallard (Anas platyrhynchos), gadwall (A. strepera), blue-winged teal (A. discors), northern shoveler (A. clypeata), and northern pintail (A. acuta) for 5 regions in North Dakota, South Dakota, and Minnesota, for 1-3 periods between 1966 and 1984, and for 8 habitat classes. We obtained composite estimates of nest success for regions and periods by weighting each habitat proportional to the number of nest initiations. The distribution of nest initiations was derived from estimates of breeding populations, preferences of species for nesting habitats, and availability of habitats. Nest success rates ranged from < 5 to 36% among regions, periods, and species. Rates were lowest in western Minnesota (MNW) and eastern North Dakota (NDE), intermediate in central North Dakota (NDC) and eastern South Dakota (SDE), and highest in central South Dakota (SDC). In regions with comparable data, no consistent trend in nest success was apparent from early to late periods. Gadwalls and blue-winged teal nested more successfully than mallards and pintails; the relative success of shovelers varied regionally. Ducks nesting in idle grassland were the most successful and those nesting in cropland were least successful. Mammalian predation was the major cause of nesting failure (54-85%) in all habitats, but farming operations resulted in 37 and 27% of the nesting failures in cropland and hayland, respectively. Most of the populations studied were not self-sustaining.
Riley, Gerald F; Rupp, Kalman
2015-01-01
Objective To estimate cumulative DI, SSI, Medicare, and Medicaid expenditures from initial disability benefit award to death or age 65. Data Sources Administrative records for a cohort of new CY2000 DI and SSI awardees aged 18–64. Study Design Actual expenditures were obtained for 2000–2006/7. Subsequent expenditures were simulated using a regression-adjusted Markov process to assign individuals to annual disability benefit coverage states. Program expenditures were simulated conditional on assigned benefit coverage status. Estimates reflect present value of expenditures at initial award in 2000 and are expressed in constant 2012 dollars. Expenditure estimates were also updated to reflect benefit levels and characteristics of new awardees in 2012. Data Collection We matched records for a 10 percent nationally representative sample. Principal Findings Overall average cumulative expenditures are $292,401 through death or age 65, with 51.4 percent for cash benefits and 48.6 percent for health care. Expenditures are about twice the average for individuals first awarded benefits at age 18–30. Overall average expenditures increased by 10 percent when updated for a simulated 2012 cohort. Conclusions Data on cumulative expenditures, especially combined across programs, are useful for evaluating the long-term payoff of investments designed to modify entry to and exit from the disability rolls. PMID:25109322
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
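The warm-start idea is a one-liner: the initial fluence map for a new Pareto optimal plan is taken as a convex combination of the fluence maps of the two neighbouring Pareto optimal plans. The weight and the random maps below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(8)
f_a = rng.random((40, 40))      # fluence map of neighbouring plan A
f_b = rng.random((40, 40))      # fluence map of neighbouring plan B
w = 0.4                         # position of the new plan between A and B

f_init = (1 - w) * f_a + w * f_b   # warm start for the next optimization
```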
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, Ludovic; Vaeck, Nathalie; Justum, Yves
2015-04-07
Following a recent proposal of L. Wang and D. Babikov [J. Chem. Phys. 137, 064301 (2012)], we theoretically illustrate the possibility of using the motional states of a Cd+ ion trapped in a slightly anharmonic potential to simulate the single-particle time-dependent Schrödinger equation. The simulated wave packet is discretized on a spatial grid and the grid points are mapped on the ion motional states which define the qubit network. The localization probability at each grid point is obtained from the population in the corresponding motional state. The quantum gate is the elementary evolution operator corresponding to the time-dependent Schrödinger equation of the simulated system. The corresponding matrix can be estimated by any numerical algorithm. The radio-frequency field which is able to drive this unitary transformation among the qubit states of the ion is obtained by multi-target optimal control theory. The ion is assumed to be cooled in the ground motional state, and the preliminary step consists in initializing the qubits with the amplitudes of the initial simulated wave packet. The time evolution of the localization probability at the grid points is then obtained by successive applications of the gate and reading out the motional state population. The gate field is always identical for a given simulated potential; only the field preparing the initial wave packet has to be optimized for different simulations. We check the stability of the simulation against decoherence due to fluctuating electric fields in the trap electrodes by applying dissipative Lindblad dynamics.
Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches
Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe
2016-01-01
Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of thiM operon and transcription and translation of thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations. PMID:26932506
Bayesian lead time estimation for the Johns Hopkins Lung Project data.
Jang, Hyejeong; Kim, Seongho; Wu, Dongfeng
2013-09-01
Lung cancer screening using X-rays has been controversial for many years. A major concern is whether lung cancer screening really brings any survival benefit, which depends on effective treatment after early detection. The problem was analyzed from a different point of view, and estimates were presented of the projected lead time for participants in a lung cancer screening program using the Johns Hopkins Lung Project (JHLP) data. A newly developed method of lead time estimation was applied in which the lifetime T is treated as a random variable rather than a fixed value, so that the number of future screenings for a given individual is also random. Using the actuarial life table available from the United States Social Security Administration, the lifetime distribution was first obtained, and the lead time distribution was then projected using the JHLP data. The analysis of the JHLP data shows that, for a male heavy smoker with initial screening age 50, 60, or 70, the probability of no early detection with semiannual screens is 32.16%, 32.45%, and 33.17%, respectively, while the mean lead time is 1.36, 1.33 and 1.23 years. The probability of no early detection increases monotonically as the screening interval increases, and it increases slightly as the initial age increases for the same screening interval. The mean lead time and its standard error decrease when the screening interval increases for all age groups, and both decrease when initial age increases with the same screening interval. The overall mean lead time estimated with a random lifetime T is slightly less than that with a fixed value of T. It is hoped that this result will be of benefit in improving current screening programs. Copyright © 2013 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.
Iatrogenic radiation exposure to patients with early onset spine and chest wall deformities.
Khorsand, Derek; Song, Kit M; Swanson, Jonathan; Alessio, Adam; Redding, Gregory; Waldhausen, John
2013-08-01
Retrospective cohort series. The objective was to characterize the average iatrogenic radiation dose received by a cohort of children with thoracic insufficiency syndrome (TIS) during assessment and treatment at a single center with the vertically expandable prosthetic titanium rib. Children with TIS undergo extensive evaluations to characterize their deformity. No standardized radiographical evaluation exists, but all reports use extensive imaging. The source and level of radiation these patients receive is not currently known. We evaluated a retrospective consecutive cohort of 62 children who had surgical treatment of TIS at our center from 2001 to 2011. Typical care included obtaining serial radiographs, spine and chest computed tomographic (CT) scans, ventilation/perfusion scans, and magnetic resonance images. Epochs of treatment ran from the time of initial evaluation to the end of initial vertically expandable prosthetic titanium rib implantation, with each subsequent epoch delineated by the next surgical intervention. The effective dose for each examination was estimated in millisieverts (mSv). Effective doses for plain radiographs were calculated from published reference values. Effective dose was estimated directly for CT scans performed since 2007, and an average of the 2007-2011 effective doses was used for scans before 2007. Effective dose from fluoroscopy was estimated directly. The cohort of 62 children had a total of 447 procedures, comprising 290 CT scans, 4293 radiographs, 147 magnetic resonance images, and 134 ventilation/perfusion scans. The average accumulated effective dose was 59.6 mSv for children who had completed all treatment, 13.0 mSv up to the initial surgery, and 3.2 mSv for each subsequent epoch of treatment. CT scans accounted for 74% of the total radiation dose. Children managed for TIS using a consistent protocol received iatrogenic radiation doses that were on average 4 times the estimated average US background radiation exposure of 3 mSv/yr.
Usmani, S Z; Cavenagh, J D; Belch, A R; Hulin, C; Basu, S; White, D; Nooka, A; Ervin-Haynes, A; Yiu, W; Nagarwala, Y; Berger, A; Pelligra, C G; Guo, S; Binder, G; Gibson, C J; Facon, T
2016-01-01
To conduct a cost-effectiveness assessment of lenalidomide plus dexamethasone (Rd) vs bortezomib plus melphalan and prednisone (VMP) as initial treatment for transplant-ineligible patients with newly diagnosed multiple myeloma (MM), from a U.S. payer perspective. A partitioned survival model was developed to estimate the expected life-years (LYs), quality-adjusted LYs (QALYs), direct costs, and incremental costs per QALY and LY gained associated with use of Rd vs VMP over a patient's lifetime. Information on the efficacy and safety of Rd and VMP was based on data from multinational phase III clinical trials and a network meta-analysis. Pre-progression direct costs included the costs of Rd and VMP, treatment of adverse events (including prophylaxis), and routine care and monitoring associated with MM. Post-progression direct costs included costs of subsequent treatment(s) and routine care and monitoring for progressive disease, all obtained from published literature and estimated from a U.S. payer perspective. Utilities were obtained from the aforementioned trials. Costs and outcomes were discounted at 3% annually. Relative to VMP, use of Rd was expected to result in an additional 2.22 LYs and 1.47 QALYs (discounted). Patients initiated with Rd were expected to incur an additional $78,977 in mean lifetime direct costs (discounted) vs those initiated with VMP. The incremental costs per QALY and per LY gained with Rd vs VMP were $53,826 and $35,552, respectively. In sensitivity analyses, results were most sensitive to differences in survival associated with Rd vs VMP, the cost of lenalidomide, and the discount rate applied to effectiveness outcomes. Rd was expected to result in greater LYs and QALYs compared with VMP, with similar overall costs per LY for each regimen. The results of this analysis indicate that Rd may be a cost-effective alternative to VMP as initial treatment for transplant-ineligible patients with MM, with an incremental cost-effectiveness ratio well within the range accepted for recent advancements in oncology.
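For reference, the incremental cost-effectiveness ratios quoted above follow from the usual definition ICER = incremental cost / incremental effect; the sketch below uses the rounded figures reported here, so it reproduces the published ratios only approximately:

```python
# ICER = incremental cost / incremental effect, the quantity reported above.
def icer(delta_cost, delta_effect):
    return delta_cost / delta_effect

# Rounded values from the abstract; the published ratios ($53,826/QALY and
# $35,552/LY) are reproduced only approximately because the underlying
# unrounded model outputs are not given.
print(f"cost per QALY gained: ${icer(78_977, 1.47):,.0f}")
print(f"cost per LY gained:   ${icer(78_977, 2.22):,.0f}")
```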
Martins, Rui; Oliveira, Paulo Eduardo; Schmitt, Aurore
2012-06-10
We discuss here the estimation of age at death from two indicators (the pubic symphysis and the sacro-pelvic surface of the ilium) based on four different osteological series from Portugal, Great Britain, South Africa, and the USA (European origin). These samples and the scoring system of the two indicators were used by Schmitt et al. (2002), applying the methodology proposed by Lucy et al. (1996). In the present work, the same data were processed using a modification of the empirical method proposed by Lucy et al. (2002). The various probability distributions are estimated from training data using kernel density procedures and jackknife methodology. Bayes' theorem is then used to produce the posterior distribution, from which point and interval estimates may be made. This statistical approach reduces the bias of the estimates to less than 70% of that obtained with the initial method, and to 52% when the sex of the individual is known, and it produces an age estimate for all individuals, improving age-at-death assessment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
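A minimal sketch of the empirical Bayes idea, with synthetic training data in place of the reference osteological series: the posterior p(age | stage) is built from a kernel density estimate and a prior via Bayes' theorem. Here the training ages are drawn uniformly, so the stage-conditional KDE is proportional to the likelihood p(stage | age):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic training set: (age at death, indicator stage) pairs standing in for
# the reference osteological series. Training ages are uniform, so the KDE of
# ages within a stage is proportional to the likelihood p(stage | age).
rng = np.random.default_rng(1)
ages = rng.uniform(20.0, 90.0, 400)
stages = np.clip((ages // 20).astype(int) + rng.integers(-1, 2, 400), 1, 4)

grid = np.linspace(20.0, 90.0, 200)
prior = np.full(grid.size, 1.0 / grid.size)  # uniform prior over the age grid

def posterior(stage):
    """Bayes' theorem: p(age | stage) proportional to p(stage | age) * p(age)."""
    likelihood = gaussian_kde(ages[stages == stage])(grid)
    post = likelihood * prior
    return post / post.sum()

post = posterior(3)
print(f"posterior mean age for stage 3: {np.sum(grid * post):.1f} years")
```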
Estimating phonation threshold pressure.
Fisher, K V; Swank, P R
1997-10-01
Phonation threshold pressure (PTP) is the minimum subglottal pressure required to initiate vocal fold oscillation. Although potentially useful clinically, PTP is difficult to estimate noninvasively because of limitations to vocal motor control near the threshold of soft phonation. Previous investigators observed, for example, that trained subjects were unable to produce flat, consistent oral pressure peaks during /pae/ syllable strings when they attempted to phonate as softly as possible (Verdolini-Marston, Titze, & Druker, 1990). The present study aimed to determine if nasal airflow or vowel context affected phonation threshold pressure as estimated from oral pressure (Smitheran & Hixon, 1981) in 5 untrained female speakers with normal velopharyngeal and voice function. Nasal airflow during /p/ occlusion was observed for 3 of 5 participants when they attempted to phonate near threshold pressure. When the nose was occluded, nasal airflow was reduced or eliminated during /p/; however, individuals then evidenced compensatory changes in glottal adduction and/or respiratory effort that may be expected to alter PTP estimates. Results demonstrate the importance of monitoring nasal flow (or the flow zero point in undivided masks) when obtaining PTP measurements noninvasively. Results also highlight the need to pursue improved methods for noninvasive estimation of PTP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konopacki, S.; Akbari, H.
2002-02-28
In 1997, the U.S. Environmental Protection Agency (EPA) established the "Heat Island Reduction Initiative" to quantify the potential benefits of Heat-Island Reduction (HIR) strategies (i.e., shade trees, reflective roofs, reflective pavements and urban vegetation) to reduce cooling-energy use in buildings, lower the ambient air temperature and improve urban air quality in cities, and reduce CO2 emissions from power plants. Under this initiative, the Urban Heat Island Pilot Project (UHIPP) was created with the objective of investigating the potential of HIR strategies in residential and commercial buildings in three initial UHIPP cities: Baton Rouge, LA; Sacramento, CA; and Salt Lake City, UT. Later, two other cities, Chicago, IL and Houston, TX, were added to the UHIPP. In an earlier report we summarized our efforts to calculate the annual energy savings, peak power avoidance, and annual CO2 reduction obtainable from the introduction of HIR strategies in the initial three cities. This report summarizes the results of our study for Chicago and Houston. In this analysis, we focused on three building types that offer the highest potential savings: single-family residence, office, and retail store. Each building type was characterized in detail by vintage and system type (i.e., old and new building constructions, and gas and electric heat). We used the prototypical building characteristics developed earlier for each building type and simulated the impact of HIR strategies on building cooling- and heating-energy use and peak power demand using the DOE-2.1E model. Our simulations included the impact of (1) strategically placed shade trees near buildings [direct effect], (2) use of high-albedo roofing material on the building [direct effect], (3) urban reforestation with high-albedo pavements and building surfaces [indirect effect], and (4) combined strategies 1, 2, and 3 [direct and indirect effects]. We then estimated the total roof area of air-conditioned buildings in each city using readily obtainable data to calculate the metropolitan-wide impact of HIR strategies. The results show that in Chicago, potential annual energy savings of $30M could be realized by ratepayers from the combined direct and indirect effects of HIR strategies. Additionally, peak power avoidance is estimated at 400 MW and the reduction in annual carbon emissions at 58 ktC. In Houston, the potential annual energy savings are estimated at $82M, with an avoidance of 730 MW in peak power and a reduction in annual carbon emissions of 170 ktC.
Economic Outcomes with Anatomic versus Functional Diagnostic Testing for Coronary Artery Disease
Mark, Daniel B.; Federspiel, Jerome J.; Cowper, Patricia A.; Anstrom, Kevin J.; Hoffmann, Udo; Patel, Manesh R.; Davidson-Ray, Linda; Daniels, Melanie R.; Cooper, Lawton S.; Knight, J. David; Lee, Kerry L.; Douglas, Pamela S.
2016-01-01
Background: The PROMISE trial found that initial use of ≥64-slice multidetector computed tomographic angiography (CTA) versus functional diagnostic testing strategies did not improve clinical outcomes in stable symptomatic patients with suspected coronary artery disease (CAD) requiring noninvasive testing. Objective: Economic analysis of PROMISE, a major secondary aim. Design: Prospective economic study from the US perspective. Comparisons were made by intention-to-treat. Confidence intervals were calculated using bootstrap methods. Setting: 190 U.S. centers. Patients: 9649 U.S. patients enrolled in PROMISE. Enrollment began in July 2010 and was completed in September 2013. Median follow-up was 25 months. Measurements: Technical costs of the initial (outpatient) testing strategy were estimated from Premier Research Database data. Hospital-based costs were estimated using hospital bills and Medicare cost-to-charge ratios. Physician fees were taken from the Medicare Fee Schedule. Costs were expressed in 2014 US dollars, discounted at 3%, and estimated out to 3 years using inverse probability weighting methods. Results: The mean initial testing costs were $174 for exercise ECG; $404 for CTA; $501 to $514 for (exercise, pharmacologic) stress echo; and $946 to $1132 for (exercise, pharmacologic) stress nuclear. Mean costs at 90 days for the CTA strategy were $2494 versus $2240 for the functional strategy (mean difference $254, 95% CI −$634 to $906). The difference was associated with more revascularizations and catheterizations (4.25 per 100 patients) with CTA use. After 90 days, the mean cost difference between the arms out to 3 years remained small ($373). Limitations: Cost weights for test strategies were obtained from sources outside PROMISE. Conclusions: CTA and functional diagnostic testing strategies in patients with suspected CAD have similar costs through three years of follow-up.
The maximum economic depth of groundwater abstraction for irrigation
NASA Astrophysics Data System (ADS)
Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.
2017-12-01
Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of global food production, and its importance is expected to grow further in the near future. Already, about 70% of globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, abstraction exceeds recharge, and massive groundwater head declines are observed. An important question then is: to what maximum depth can groundwater be pumped while remaining economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues still exceed pumping costs, or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model in which the costs of well drilling and the energy costs of pumping, which are functions of well depth and static head depth respectively, are compared with the revenues obtained from the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries using GDP per capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000, and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas of the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. The most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of maximum economic depth will be combined with estimates of groundwater depth and storage coefficients to estimate economically attainable groundwater volumes worldwide.
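A toy version of the cost-revenue comparison described above; every parameter value is illustrative and the static head is crudely tied to well depth, but the structure (crop revenue versus amortized drilling cost plus pumping energy cost) follows the study's description:

```python
import numpy as np

# Illustrative parameters only; the study derives costs from US drilling data
# scaled by GDP per capita and revenues from crop and water-demand statistics.
GAMMA_W = 9810.0        # unit weight of water, N/m^3
PUMP_EFF = 0.5          # pump efficiency
ENERGY_PRICE = 0.10     # $/kWh
DRILL_COST = 150.0      # $ per metre drilled
WELL_LIFE = 20.0        # years over which drilling cost is amortized
VOLUME = 2.0e5          # groundwater abstracted, m^3/yr
REVENUE = 40_000.0      # crop revenue attributable to irrigation, $/yr

def annual_profit(depth):
    head = 0.8 * depth                                # assumed static head depth
    lift_joules = GAMMA_W * head * VOLUME / PUMP_EFF  # yearly pumping work
    pumping = lift_joules / 3.6e6 * ENERGY_PRICE      # J -> kWh -> $
    capital = DRILL_COST * depth / WELL_LIFE          # amortized drilling cost
    return REVENUE - pumping - capital

depths = np.arange(10.0, 1000.0, 10.0)
profitable = depths[np.array([annual_profit(d) for d in depths]) > 0]
print("maximum economic depth ~", profitable.max() if profitable.size else None, "m")
```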
Holm, Astrid Ledgaard; Brønnum-Hansen, Henrik; Robinson, Kirstine Magtengaard; Diderichsen, Finn
2014-07-01
Tobacco smoking is among the leading risk factors for chronic disease and early death in developed countries, including Denmark, where smoking causes 14% of the disease burden. In Denmark, many public health interventions, including smoking prevention, are undertaken by the municipalities, but models to estimate the potential health effects of local interventions are lacking. The aim of the current study was to model the effects of decreased smoking prevalence in Copenhagen, Denmark. The DYNAMO-HIA model was applied to the population of Copenhagen, using health survey data and data from Danish population registers. We modelled the effects of four intervention scenarios aimed at different target groups, compared to a reference scenario. The potential effects of each scenario were modelled until 2040. A combined scenario, affecting both initiation rates among youth and cessation and re-initiation rates among adults and reducing smoking prevalence to 4% by 2025, would have large beneficial effects on the incidence and prevalence of smoking-related diseases and on mortality. Health benefits could also be obtained through interventions targeting only cessation or re-initiation rates, whereas an intervention targeting only initiation among youth had marginal effects on morbidity and mortality within the modelled time frame. By modifying the DYNAMO-HIA model, we were able to estimate the potential health effects of four interventions to reduce smoking prevalence in the population of Copenhagen. The effect of the interventions on future public health depended on the population subgroup(s) targeted, the duration of implementation, and the intervention reach. © 2014 the Nordic Societies of Public Health.
Monolayer-crystal streptavidin support films provide an internal standard of cryo-EM image quality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Bong-Gyoon; Watson, Zoe; Cate, Jamie H. D.
Analysis of images of biotinylated Escherichia coli 70S ribosome particles, bound to streptavidin affinity grids, demonstrates that the image quality of particles can be predicted by the image quality of the monolayer crystalline support film. Also, the quality of the Thon rings is a good predictor of the image quality of particles, but only when images of the streptavidin crystals extend to relatively high resolution. When the estimated resolution of streptavidin was 5 Å or worse, for example, the ribosomal density map obtained from 22,697 particles went to only 9.5 Å, while the resolution of the map reached 4.0 Å for the same number of particles when the estimated resolution of the streptavidin crystal was 4 Å or better. It thus is easy to tell which images in a data set ought to be retained for further work, based on the highest resolution seen for Bragg peaks in the computed Fourier transforms of the streptavidin component. The refined density map obtained from 57,826 particles obtained in this way extended to 3.6 Å, a marked improvement over the value of 3.9 Å obtained previously from a subset of 52,433 particles obtained from the same initial data set of 101,213 particles after 3-D classification. These results are consistent with the hypothesis that interaction with the air-water interface can damage particles when the sample becomes too thin. Finally, streptavidin monolayer crystals appear to provide a good indication of when that is the case.
Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre
2003-03-01
A solution for discrete multi-exponential analysis of T2 relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of the noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
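A sketch of the two-step estimation strategy on synthetic data: a linear-prediction (Prony-type) step supplies starting values for the decay constants, which a nonlinear least-squares fit then refines. Ordinary rather than total least squares is used in the initialization step here for brevity:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic biexponential decay sampled at 32 echo times.
rng = np.random.default_rng(2)
te = np.arange(1, 33) * 10e-3                 # echo times (s)
y = 0.6 * np.exp(-te / 0.08) + 0.4 * np.exp(-te / 0.015)
y = y + rng.normal(0.0, 0.005, te.size)

# Step 1: linear prediction (Prony-type) initial estimates for p = 2 components.
p = 2
H = np.column_stack([y[i:len(y) - p + i] for i in range(p)])  # lagged-signal matrix
a = np.linalg.lstsq(H, y[p:], rcond=None)[0]                  # prediction coefficients
roots = np.roots(np.r_[1.0, -a[::-1]])                        # poles z_k = exp(-dt/T2_k)
dt = te[1] - te[0]
t2_init = -dt / np.log(np.clip(roots.real, 1e-6, 1 - 1e-9))

# Step 2: nonlinear least-squares refinement from those starting values.
model = lambda t, a1, t1, a2, t2: a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2)
popt, _ = curve_fit(model, te, y, p0=[0.5, t2_init[0], 0.5, t2_init[1]])
print("estimated T2 values (s):", sorted(popt[[1, 3]]))  # true: 0.015 and 0.08
```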
Global Search Capabilities of Indirect Methods for Impulsive Transfers
NASA Astrophysics Data System (ADS)
Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong
2015-09-01
An optimization method which combines an indirect method with a homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time, are obtained. The homotopic approach at hand is relatively straightforward to implement and does not require an initial guess of the adjoints, unlike previous adjoint-estimation methods. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach guarantees that multiple local solutions are obtained without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.
Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng
2016-09-01
Segmenting the lung fields in a chest radiograph is essential for automatic image analysis. We present an unsupervised method based on a multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour, and the final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of the lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergo additional extensive tests before a treatment plan is finalized.
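Once the lung fields are segmented, the CTR reduces to a ratio of maximal transverse widths; a minimal helper, assuming binary masks for the thorax and heart are available (the paper derives the widths from the segmented lung-field contours):

```python
import numpy as np

def max_row_width(mask):
    """Widest horizontal extent (in pixels) of a binary mask."""
    widths = [np.flatnonzero(row)[-1] - np.flatnonzero(row)[0] + 1
              for row in mask if row.any()]
    return max(widths, default=0)

def cardiothoracic_ratio(thorax_mask, heart_mask):
    """CTR = maximal cardiac width / maximal internal thoracic width; a value
    above ~0.5 is the usual screening criterion for cardiac hypertrophy."""
    return max_row_width(heart_mask) / max_row_width(thorax_mask)

# Tiny synthetic example (in practice the masks come from the segmentation).
thorax = np.zeros((10, 20), bool); thorax[:, 2:18] = True  # width 16
heart = np.zeros((10, 20), bool);  heart[4:8, 6:13] = True  # width 7
print(f"CTR = {cardiothoracic_ratio(thorax, heart):.2f}")   # -> 0.44
```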
NASA Astrophysics Data System (ADS)
Carrassi, A.; Weber, R. J. T.; Guemas, V.; Doblas-Reyes, F. J.; Asif, M.; Volpi, D.
2014-04-01
Initialization techniques for seasonal-to-decadal climate predictions fall into two main categories: full-field initialization (FFI) and anomaly initialization (AI). In the FFI case the initial model state is replaced by the best possible available estimate of the real state. By doing so the initial error is efficiently reduced but, due to the unavoidable presence of model deficiencies, once the model is let free to run a prediction, its trajectory drifts away from the observations no matter how small the initial error is. This problem is partly overcome with AI, where the aim is to forecast future anomalies by assimilating observed anomalies on an estimate of the model climate. The large variety of experimental setups, models and observational networks adopted worldwide makes it difficult to draw firm conclusions on the respective advantages and drawbacks of FFI and AI, or to identify distinctive lines for improvement. The lack of a unified mathematical framework adds an additional difficulty toward the design of adequate initialization strategies that fit the desired forecast horizon, observational network and model at hand. Here we compare FFI and AI using a low-order climate model of nine ordinary differential equations and use the notation and concepts of data assimilation theory to highlight their error scaling properties. This analysis suggests better performance using FFI when a good observational network is available and reveals the direct relation of its skill with the observational accuracy. The skill of AI appears, however, mostly related to the model quality, and clear increases of skill can only be expected in coincidence with model upgrades. We have compared FFI and AI in experiments in which either the full system or the atmosphere and ocean were independently initialized. In the former case FFI shows better and longer-lasting improvements, with skillful predictions until month 30. In the initialization of single compartments, the best performance is obtained when the more stable component of the model (the ocean) is initialized, but with FFI it is possible to have some predictive skill even when the most unstable compartment (the extratropical atmosphere) is observed. Two advanced formulations, least-square initialization (LSI) and exploring parameter uncertainty (EPU), are introduced. Using LSI, the initialization makes use of model statistics to propagate information from observation locations to the entire model domain. Numerical results show that LSI improves the performance of FFI in all situations where only a portion of the system's state is observed. EPU is an online drift correction method in which the drift caused by the parametric error is estimated using a short-time evolution law and is then removed during the forecast run. Its implementation in conjunction with FFI allows us to improve the prediction skill within the first forecast year. Finally, the application of these results in the context of realistic climate models is discussed.
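The difference between the two initialization categories can be stated in two lines; a sketch with illustrative numbers (not from the study's nine-variable model):

```python
import numpy as np

def initialize(obs, model_clim, obs_clim, scheme="FFI"):
    """Build the initial model state from an observed state.
    FFI: take the observation itself (smallest initial error, but drift follows).
    AI:  add the observed anomaly to the model's own climatology (drift-free
         start, but the climatological bias remains in the initial state)."""
    if scheme == "FFI":
        return obs.copy()
    return model_clim + (obs - obs_clim)   # anomaly initialization

# Illustrative numbers: a biased model climatology and one observed state.
obs_clim   = np.array([10.0, 5.0])
model_clim = np.array([12.0, 4.0])     # model bias: +2, -1
obs        = np.array([10.8, 5.3])     # observed anomaly: +0.8, +0.3

print("FFI start:", initialize(obs, model_clim, obs_clim, "FFI"))
print("AI  start:", initialize(obs, model_clim, obs_clim, "AI"))
```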
Hyper-X Mach 10 Trajectory Reconstruction
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Martin, John G.; Tartabini, Paul V.; Thornblom, Mark N.
2005-01-01
This paper discusses the formulation and development of a trajectory reconstruction tool for the NASA X-43A/Hyper-X high speed research vehicle, and its implementation for the reconstruction and analysis of flight test data. Extended Kalman filtering techniques are employed to reconstruct the trajectory of the vehicle, based upon numerical integration of inertial measurement data along with redundant measurements of the vehicle state. The equations of motion are formulated to include the effects of several systematic error sources, whose values may also be estimated by the filtering routines. Additionally, smoothing algorithms have been implemented in which the final value of the state (or an augmented state that includes other systematic error parameters to be estimated) and its covariance are propagated back to the initial time to generate the best-estimated trajectory, based upon all available data. The methods are applied to the problem of reconstructing the trajectory of the Hyper-X vehicle from data obtained during the Mach 10 test flight, which occurred on November 16, 2004.
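A sketch of the backward (smoothing) pass described above, written for a linear model as a stand-in for the extended Kalman filter actually used; xs_p and Ps_p are the one-step predictions saved during the forward filtering pass:

```python
import numpy as np

def rts_smoother(xs_f, Ps_f, xs_p, Ps_p, F):
    """Rauch-Tung-Striebel backward pass: propagate the final filtered state
    and covariance back to the initial time, yielding the best-estimated
    trajectory given all data. xs_f, Ps_f: filtered states/covariances
    (shape (n, d) and (n, d, d)); xs_p, Ps_p: one-step predictions;
    F: state transition matrix of the (linearized) dynamics."""
    n = len(xs_f)
    xs_s, Ps_s = xs_f.copy(), Ps_f.copy()
    for k in range(n - 2, -1, -1):
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])     # smoother gain
        xs_s[k] = xs_f[k] + C @ (xs_s[k + 1] - xs_p[k + 1])
        Ps_s[k] = Ps_f[k] + C @ (Ps_s[k + 1] - Ps_p[k + 1]) @ C.T
    return xs_s, Ps_s
```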
Liao, Hstau Y.; Hashem, Yaser; Frank, Joachim
2015-01-01
Single-particle cryogenic electron microscopy (cryo-EM) is a powerful tool for the study of macromolecular structures at high resolution. Classification allows multiple structural states to be extracted and reconstructed from the same sample. One classification approach is via the covariance matrix, which captures the correlation between every pair of voxels. Earlier approaches employ computing-intensive resampling and estimate only the eigenvectors of the matrix, which are then used in a separate fast classification step. We propose an iterative scheme to explicitly estimate the covariance matrix in its entirety. In our approach, the flexibility in choosing the solution domain allows us to examine a part of the molecule in greater detail. 3D covariance maps obtained in this way from experimental data (cryo-EM images of the eukaryotic pre-initiation complex) prove to be in excellent agreement with conclusions derived by using traditional approaches, revealing in addition the interdependencies of ligand bindings and structural changes.
NASA Technical Reports Server (NTRS)
Lepping, R. P.; Chao, J. K.
1976-01-01
An estimated shape is presented for the surface of the flare-associated interplanetary shock of February 15-16, 1967, as seen in the ecliptic-plane cross section. The estimate is based on observations by Explorer 33 and Pioneers 6 and 7. The estimated shock normal at the Explorer 33 position is obtained by a least-squares shock parameter-fitting procedure for that satellite's data; the shock normal at the Pioneer 7 position is found by using the magnetic coplanarity theorem and magnetic-field data. The average shock speed from the sun to each spacecraft is determined along with the local speed at Explorer 33 and the relations between these speeds and the position of the initiating solar flare. The Explorer 33 shock normal is found to be severely inclined and not typical of interplanetary shocks. It is shown that the curvature of the shock surface in the ecliptic plane near the earth-Pioneer 7 region is consistent with a radius of not more than 0.4 AU.
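The magnetic coplanarity estimate used at Pioneer 7 has a closed form: the shock normal is perpendicular to both the cross product and the difference of the upstream and downstream magnetic fields. A minimal sketch with illustrative field values (not the 1967 event data):

```python
import numpy as np

def coplanarity_normal(b_up, b_down):
    """Shock normal from the magnetic coplanarity theorem:
    n is parallel to (B_d x B_u) x (B_d - B_u), up to an overall sign."""
    n = np.cross(np.cross(b_down, b_up), b_down - b_up)
    return n / np.linalg.norm(n)

# Illustrative upstream/downstream fields in nT (not the February 1967 values).
b_u = np.array([4.0, -3.0, 1.0])
b_d = np.array([9.0, -2.0, 3.0])
print("estimated shock normal:", coplanarity_normal(b_u, b_d))
```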
Estimating the thickness of diffusive solid electrolyte interface
NASA Astrophysics Data System (ADS)
Wang, XiaoHe; Shen, WenHao; Huang, XianFu; Zang, JinLiang; Zhao, YaPu
2017-06-01
The solid electrolyte interface (SEI) is a hierarchical structure formed in the transition zone between the electrode and the electrolyte. The properties of lithium-ion (Li-ion) batteries, such as cycle life, irreversible capacity loss, self-discharge rate, electrode corrosion, and safety, are usually ascribed to the quality of the SEI, which is highly dependent on its thickness. Thus, understanding the formation mechanism and the SEI thickness is of prime interest. First, we apply dimensional analysis to obtain an explicit relation between the thickness and the number density. Then the SEI thickness in the initial charge-discharge cycle is analyzed and estimated for the first time using the Cahn-Hilliard phase-field model. In addition, molecular dynamics simulations of the SEI thickness validate the theoretical results. It is shown that the established model and the simulations estimate the SEI thickness to within an order of magnitude, at the nanometer scale. Our results may help in evaluating the performance of the SEI and assist the future design of Li-ion batteries.
NASA Technical Reports Server (NTRS)
Badhwar, G. D.
1984-01-01
The techniques used initially for the identification of cultivated crops from Landsat imagery depended greatly on the interpretation of film products by a human analyst. This approach was neither very effective nor objective. Since 1978, new methods for crop identification have been developed. Badhwar et al. (1982) showed that multitemporal-multispectral data could be reduced to a simple feature space of alpha and beta and that these features separate corn and soybean very well. However, there are disadvantages related to the use of the alpha and beta parameters. The present investigation is concerned with a suitable method for extracting the required features. Attention is given to a profile model for crop discrimination, corn-soybean separation using profile parameters, and an automatic labeling (target recognition) method. The developed technique is extended to obtain a procedure which makes it possible to estimate the crop proportions of corn and soybean from Landsat data early in the growing season.
Simulations in site error estimation for direction finders
NASA Astrophysics Data System (ADS)
López, Raúl E.; Passi, Ranjit M.
1991-08-01
The performance of an algorithm for the recovery of site-specific errors of direction finder (DF) networks is tested under controlled simulated conditions. The simulations show that the algorithm has some inherent shortcomings for the recovery of site errors from the measured azimuth data. These limitations are fundamental to the problem of site error estimation using azimuth information. Several ways of resolving or ameliorating these basic complications are tested by means of simulations. From these it appears that, for effective implementation of the site error determination algorithm, one should design the networks with at least four DFs, improve the alignment of the antennas, and increase the gain of the DFs as much as is compatible with other operational requirements. The use of a nonzero initial estimate of the site errors when working with data from networks of four or more DFs also improves the accuracy of the site error recovery. Even for networks of three DFs, reasonable site error corrections can be obtained if the antennas are well aligned.
Estimation of Length-Scales in Soils by MRI
NASA Technical Reports Server (NTRS)
Daidzic, N. E.; Altobelli, S.; Alexander, J. I. D.
2004-01-01
Soil can be best described as an unconsolidated granular medium that forms a porous structure. The present macroscopic theory of water transport in porous media rests upon the continuum hypothesis that the physical properties of porous media can be associated with continuous, twice-differentiable field variables whose spatial domain is a set of centroids of Representative Elementary Volume (REV) elements. MRI is an ideal technique for estimating various length-scales in porous media. A 0.267 T permanent magnet at NASA GRC was used for this study. 2D and 3D spatially resolved porosity distributions were obtained from the NMR signal strength of each voxel and the spin-lattice relaxation time. Classical spin-warp imaging with Multiple Spin Echoes (MSE) was used to evaluate the proton density in each voxel. The initial resolution of 256 x 256 was subsequently reduced by averaging neighboring voxels, and the convergence of the porosity was observed. A number of engineered "space candidate" soils such as Isolite(trademark), Zeoponics(trademark), Turface(trademark), and Profile(trademark) were used, as were glass beads in the size range between 50 microns and 2 mm. Initial results with saturated porous samples gave a good estimate of the average porosity, consistent with gravimetric porosity measurements. For Profile(trademark) samples with particle sizes ranging between 0.25 and 1 mm and a characteristic interparticle pore size of 100 microns, the characteristic Darcy scale was estimated to be about delta_REV = 10 mm. The glass-bead porosity shows clear convergence toward a definite REV, which stays constant throughout a homogeneous sample. Additional information is included in the original extended abstract.
Cost analysis of breast cancer diagnostic assessment programs.
Honein-AbouHaidar, G N; Hoch, J S; Dobrow, M J; Stuart-McEwan, T; McCready, D R; Gagliardi, A R
2017-10-01
Diagnostic assessment programs (daps) appear to improve the diagnosis of cancer, but evidence of their cost-effectiveness is lacking. Given that no earlier study used secondary financial data to estimate the cost of diagnostic tests in the province of Ontario, we explored how to use secondary financial data to retrieve the cost of key diagnostic test services in daps, and we tested the reliability of that cost-retrieving method against hospital-reported costs in preparation for future cost-effectiveness studies. We powered our sample at an alpha of 0.05, a power of 80%, and a margin of error of ±5%, and randomly selected a sample of eligible patients referred to a dap for suspected breast cancer during 1 January-31 December 2012. Confirmatory diagnostic tests received by each patient were identified in medical records. Canadian Classification of Health Interventions procedure codes were used to search the secondary financial data Web portal of the Ontario Case Costing Initiative for an estimate of the direct, indirect, and total costs of each test. The hospital-reported cost of each test received was obtained from the host hospital's finance department. Descriptive statistics were used to calculate the cost of individual or grouped confirmatory diagnostic tests, and the Wilcoxon signed-rank test or the paired t-test was used to compare the Ontario Case Costing Initiative and hospital-reported costs. For the 191 identified patients with suspected breast cancer, the estimated total cost of $72,195.50 was not significantly different from the hospital-reported total cost of $72,035.52 (p = 0.24). Costs differed significantly when multiple tests to confirm the diagnosis were completed during one patient visit and when confirmatory tests reported in hospital data and in medical records were discrepant. The additional estimated cost for non-salaried physicians delivering diagnostic services was $28,387.50. It was feasible to use secondary financial data to retrieve the cost of key diagnostic tests in a breast cancer dap and to compare the reliability of the costs obtained by that estimation method with hospital-reported costs. We identified the strengths and challenges of each approach. Lessons learned from this study must be taken into consideration in future cost-effectiveness studies.
Foreman, David M
2016-08-05
The impact of policy and funding on Child and Adolescent Mental Health Service (CAMHS) activity and capacity from 2003 to 2012 was assessed. The focus was on preschool children (aged 0-4 years), as current and 2003 policy initiatives stressed the importance of 'early intervention'. National service capacity from English CAMHS mapping was obtained from 2003 to 2008 inclusive. English Hospital Episode Statistics (HES) for English CAMHS were obtained from 2003 to 2012. The Child and Adolescent Faculty of the Royal College of Psychiatrists surveyed its members about comparative 0-4-year service activity and attitudes in 2012. CAMHS services in England provided HES and CAMHS mapping data. The members of the Child and Adolescent Faculty of the Royal College of Psychiatrists are child psychiatrists, including trainees. CAMHS mapping data provided national estimates of total numbers of CAMHS patients, whereas HES data counted appointments or episodes of inpatient care. The survey reported on child psychiatrists' informal estimates of service activity and attitudes towards children aged 0-4 years. The association between service capacity and service activity was moderated by an interaction between specified funding and age, with the youngest children benefiting least from specified funding and suffering most when it was withdrawn (Pr=0.005). Policy review and significant differences between age-specific HES trends (Pr<0.001) suggested this reflected prioritisation of older children. Clinicians were unaware of this effect at the local level, though it significantly influenced their attitudes to prioritising this group (Pr=0.02). If the new policy initiative for CAMHS is to succeed, it will need to have time-limited priorities attached to sustained, specified funding, with planning for limits as well as expansion. Data collection for policy evaluation should include measures of capacity and activity. Published by the BMJ Publishing Group Limited.
A Review of Some Tracer-Test Design Equations for ...
The necessary tracer mass, the initial sample-collection time, and the subsequent sample-collection frequency are the three most difficult aspects to estimate for a proposed tracer test prior to conducting it. To facilitate tracer-mass estimation, 33 mass-estimation equations are reviewed here, 32 of which were evaluated using previously published tracer-test design examination parameters. Comparison of the results produced a wide range of estimated tracer mass, but no means is available by which one equation may be reasonably selected over the others. Each equation produces a simple approximation for tracer mass. Most of the equations are based primarily on estimates or measurements of discharge, transport distance, and suspected transport times. Although the basic field parameters commonly employed are appropriate for estimating tracer mass, the 33 equations are problematic in that they were all probably based on the original developers' experience in a particular field area and not necessarily on measured hydraulic parameters or solute-transport theory. Suggested sampling frequencies are typically based primarily on probable transport distance, but with little regard to expected travel times. This too is problematic in that it tends to result in false negatives or data aliasing. Simulations from the recently developed efficient hydrologic tracer-test design methodology (EHTD) were compared with those obtained from 32 of the 33 published tracer-mass estimation equations.
3-D Spontaneous Rupture Simulations of the 2016 Kumamoto, Japan, Earthquake
NASA Astrophysics Data System (ADS)
Urata, Yumi; Yoshida, Keisuke; Fukuyama, Eiichi
2017-04-01
We investigated the M7.3 Kumamoto, Japan, earthquake with 3-D dynamic rupture simulations to illuminate why and how the rupture of the main shock propagated successfully, assuming a complicated fault geometry estimated from the distributions of aftershocks. The M7.3 main shock occurred along the Futagawa and Hinagu faults. A few days before, three M6-class foreshocks occurred. Their hypocenters were located along the Hinagu and Futagawa faults and their focal mechanisms were similar to those of the main shock; therefore, an extensive stress shadow could have been generated on the fault plane of the main shock. First, we estimated the geometry of the fault planes of the three foreshocks, as well as that of the main shock, from the temporal evolution of relocated aftershock hypocenters. Then, we evaluated static stress changes on the main shock fault plane due to the three foreshocks, assuming elliptical cracks with constant stress drops on the estimated fault planes. The obtained static stress change distribution indicated that the hypocenter of the main shock was located in a region of positive Coulomb failure stress change (ΔCFS), while ΔCFS in the shallow region above the hypocenter was negative. Therefore, these foreshocks could encourage the initiation of the main shock rupture and could hinder the rupture from propagating toward the shallow region. Finally, we conducted 3-D dynamic rupture simulations of the main shock using an initial stress distribution given by the sum of the static stress changes caused by these foreshocks and the regional stress field. Assuming a slip-weakening law with uniform friction parameters, we conducted 3-D dynamic rupture simulations varying the friction parameters and the values of the principal stresses. We obtained feasible parameter ranges that reproduce the rupture propagation of the main shock consistent with that revealed by seismic waveform analyses. We also demonstrated that the free surface encouraged the slip evolution of the main shock.
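The ΔCFS screening described above reduces to one line per receiver point; a sketch with an assumed effective friction coefficient and illustrative stress changes (not the study's values):

```python
def delta_cfs(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, with d_sigma_n positive in extension
    (unclamping). mu_eff is an assumed effective friction coefficient."""
    return d_tau + mu_eff * d_sigma_n

# Illustrative stress changes in MPa at two points on the main shock fault.
print(delta_cfs(0.15, 0.05))    # positive: rupture initiation encouraged
print(delta_cfs(-0.20, -0.10))  # negative: a stress shadow
```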
Wetting and spreading behaviors of impinging microdroplets on textured surfaces
NASA Astrophysics Data System (ADS)
Kwon, Dae Hee; Lee, Sang Joon; Center for Biofluid and Biomimic Research Team
2012-11-01
Textured surfaces having an array of microscale pillars have received considerable attention because of their potential use for robust superhydrophobic and superoleophobic surfaces. In many practical applications, textured surfaces are exposed to impinging small-scale droplets. To better understand the impinging phenomena on textured surfaces, the wetting and spreading behaviors of water microdroplets were investigated experimentally. Microdroplets with diameters less than 50 μm were ejected from a piezoelectric printhead at varying Weber numbers. The final wetting state of an impinging droplet can be estimated by comparing the wetting pressures of the droplet with the capillary pressure of the textured surface. The wetting behaviors observed experimentally agree well with the estimated results. In addition, the transition from bouncing to non-bouncing behavior in the partially penetrated wetting state was observed. This transition implies the possibility of withdrawal of the penetrated liquid from the inter-pillar space. The maximum spreading factors (the ratio of the maximum spreading diameter to the initial diameter) of the impinging droplets correlate closely with the texture area fraction of the surfaces. This work was supported by Creative Research Initiatives (Diagnosis of Biofluid Flow Phenomena and Biomimic Research) of MEST/KOSEF.
Stellar Parameters in an Instant with Machine Learning. Application to Kepler LEGACY Targets
NASA Astrophysics Data System (ADS)
Bellinger, Earl P.; Angelou, George C.; Hekker, Saskia; Basu, Sarbani; Ball, Warrick H.; Guggenberger, Elisabet
2017-10-01
With the advent of dedicated photometric space missions, the ability to rapidly process huge catalogues of stars has become paramount. Bellinger and Angelou et al. [1] recently introduced a new method based on machine learning for inferring the stellar parameters of main-sequence stars exhibiting solar-like oscillations. The method makes precise predictions that are consistent with other methods, but with the advantage of being able to explore many more parameters while requiring practically no computation time. Here we apply the method to 52 so-called "LEGACY" main-sequence stars observed by the Kepler space mission. For each star, we present estimates and uncertainties of mass, age, radius, luminosity, core hydrogen abundance, surface helium abundance, surface gravity, initial helium abundance, and initial metallicity, as well as estimates of the evolutionary model parameters of mixing length, overshooting coefficient, and diffusion multiplication factor. We obtain median uncertainties in stellar age, mass, and radius of 14.8%, 3.6%, and 1.7%, respectively. The source code for all analyses and for all figures appearing in this manuscript can be found electronically at
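A sketch of the grid-trained regression idea, using a random-forest regressor on toy "grid" data; the observables, their functional forms, and the hyperparameters below are all illustrative, not those of Bellinger and Angelou et al.:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy "grid": random stellar parameters mapped to mock observables. A real
# application would use observables computed from evolutionary models.
rng = np.random.default_rng(3)
n = 5000
mass = rng.uniform(0.8, 1.6, n)   # Msun
age = rng.uniform(0.5, 12.0, n)   # Gyr
teff = 5800 + 900 * (mass - 1.0) - 40 * age + rng.normal(0, 20, n)
dnu = 135 * mass**0.5 / (1 + 0.04 * age)**1.5 + rng.normal(0, 0.5, n)

X = np.column_stack([teff, dnu])  # observables
Y = np.column_stack([mass, age])  # parameters to infer

# Training is done once; evaluating a new star afterwards is nearly instant.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, Y)
star = np.array([[5750.0, 120.0]])  # one "observed" star
print("predicted (mass, age):", model.predict(star)[0])
```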
Aggregate and individual replication probability within an explicit model of the research process.
Miller, Jeff; Schwarz, Wolf
2011-09-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
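The model lends itself to a direct Monte Carlo check; the sketch below draws true effects, replication jitter, and measurement error from normal distributions with illustrative parameters and estimates the aggregate replication probabilities for initially positive results (assuming a known standard error for the significance criterion):

```python
import numpy as np
from scipy.stats import norm

# Illustrative research-context parameters (not taken from the paper).
rng = np.random.default_rng(4)
delta, sd_delta = 0.3, 0.2      # distribution of true effects across studies
sd_jitter, sd_err = 0.1, 0.15   # procedure jitter and measurement error

n = 200_000
true_eff = rng.normal(delta, sd_delta, n)
initial = true_eff + rng.normal(0, sd_err, n)
replica = true_eff + rng.normal(0, sd_jitter, n) + rng.normal(0, sd_err, n)

pos = initial > 0
same_dir = np.mean(replica[pos] > 0)
signif = np.mean(replica[pos] > norm.ppf(0.975) * sd_err)  # z-test, known SE
print(f"P(same direction) = {same_dir:.3f}")
print(f"P(significant in same direction) = {signif:.3f}")
```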
A Gaussian beam method for ultrasonic non-destructive evaluation modeling
NASA Astrophysics Data System (ADS)
Jacquet, O.; Leymarie, N.; Cassereau, D.
2018-05-01
The propagation of high-frequency ultrasonic body waves can be efficiently estimated with a semi-analytic Dynamic Ray Tracing approach using the paraxial approximation. Although this asymptotic field estimation avoids the computational cost of numerical methods, it may encounter several limitations in reproducing known highly interferential features. Nevertheless, some can be managed by allowing the paraxial quantities to be complex-valued. This gives rise to localized solutions, known as paraxial Gaussian beams. Whereas their propagation and transmission/reflection laws are well-defined, the fact remains that the adopted complexification introduces additional initial conditions. While their choice is usually performed according to strategies specifically tailored to limited applications, a Gabor frame method has been implemented to indiscriminately initialize a reasonable number of paraxial Gaussian beams. Since this method can be applied to a usefully wide range of ultrasonic transducers, the typical case of the time-harmonic piston radiator is investigated. Compared to the commonly used Multi-Gaussian Beam model [1], a better agreement is obtained throughout the radiated field between the results of numerical integration (or the analytical on-axis solution) and the resulting Gaussian beam superposition. Sparsity of the proposed solution is also discussed.
NASA Astrophysics Data System (ADS)
Muinul Islam, Muhammad; Tsujikawa, Tetsuya; Mori, Tetsuya; Kiyono, Yasushi; Okazawa, Hidehiko
2017-06-01
A noninvasive method to estimate the input function directly from H2(15)O brain PET data for measurement of cerebral blood flow (CBF) was proposed in this study. The image-derived input function (IDIF) method extracted the time-activity curves (TAC) of the major cerebral arteries at the skull base from the dynamic PET data. The extracted primordial IDIF showed almost the same radioactivity as the arterial input function (AIF) from sampled blood at the plateau part of the later phase, but significantly lower radioactivity in the initial arterial phase compared with the AIF-TAC. To correct the initial part of the IDIF, a dispersion function was applied, and two constants for the correction were determined by fitting to the individual AIF in 15 patients with unilateral arterial steno-occlusive lesions. The areas under the curves (AUC) from the two input functions showed good agreement, with a mean AUCIDIF/AUCAIF ratio of 0.92 ± 0.09. The final products of CBF and arterial-to-capillary vascular volume (V0) obtained from the IDIF and AIF showed no difference and had high correlation coefficients.
Impact of TRMM and SSM/I Rainfall Assimilation on Global Analysis and QPF
NASA Technical Reports Server (NTRS)
Hou, Arthur; Zhang, Sara; Reale, Oreste
2002-01-01
Evaluation of QPF skills requires quantitatively accurate precipitation analyses. We show that assimilation of surface rain rates derived from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager and Special Sensor Microwave/Imager (SSM/I) improves quantitative precipitation estimates (QPE) and many aspects of global analyses. Short-range forecasts initialized with analyses with satellite rainfall data generally yield significantly higher QPF threat scores and better storm track predictions. These results were obtained using a variational procedure that minimizes the difference between the observed and model rain rates by correcting the moist physics tendency of the forecast model over a 6h assimilation window. In two case studies of Hurricanes Bonnie and Floyd, synoptic analysis shows that this procedure produces initial conditions with better-defined tropical storm features and stronger precipitation intensity associated with the storm.
Alternative Sodium Recovery Technology—High Hydroxide Leaching: FY10 Status Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahoney, Lenna A.; Neiner, Doinita; Peterson, Reid A.
2011-02-04
Boehmite leaching tests were carried out at NaOH concentrations of 10 M and 12 M, temperatures of 85°C and 60°C, and a range of initial aluminate concentrations. These data, and data obtained during earlier 100°C tests using 1 M and 5 M NaOH, were used to establish the dependence of the boehmite dissolution rate on hydroxide concentration, temperature, and initial aluminate concentration. A semi-empirical kinetic model for boehmite leaching was fitted to the data and used to calculate the NaOH additions required for leaching at different hydroxide concentrations. The optimal NaOH concentration for boehmite leaching at 85°C was estimated, based on minimizing the amount of Na that had to be added in NaOH to produce a given boehmite conversion.
Obesity Prevention in the Nordic Countries.
Stockmarr, Anders; Hejgaard, Tatjana; Matthiessen, Jeppe
2016-06-01
Previous studies have shown that mean BMI and prevalences of overweight/obesity and obesity have increased over the last decades in the Nordic countries, despite highly regulated societies with a focus on obesity prevention. We review recent overweight/obesity and obesity prevention initiatives within four of the five Nordic countries: Sweden, Denmark, Finland, and Iceland. Moreover, we analyze the current situation based on monitoring data on BMI collected in 2011 and 2014, and obtain overall estimates of overweight/obesity and obesity prevalences for the Nordic Region. Data analysis shows that obesity in adults has increased from 2011 to 2014, while no significant changes were found for children. No significant increases were found for mean BMI and overweight/obesity prevalence. Obesity prevention initiatives among the Nordic countries are highly similar although minor differences are present, which is rooted in transnational Nordic cooperation and comparable societal structures.
CH-47F Improved Cargo Helicopter (CH-47F)
2015-12-01
Confidence Level of cost estimate for current APB: 50%. [Flattened SAR table: PAUC and APUC changes from the Initial Development Estimate to the current Production Estimate (TY $M), broken down by Econ, Qty, Sch, Eng, Est, Oth, and Spt categories; the extracted values are incomplete.]
Hierarchies of Models: Toward Understanding Planetary Nebulae
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.; Hajian, Arsen R.; Clancy, Daniel (Technical Monitor)
2003-01-01
Stars like our sun (initial masses between 0.8 to 8 solar masses) end their lives as swollen red giants surrounded by cool extended atmospheres. The nuclear reactions in their cores create carbon, nitrogen and oxygen, which are transported by convection to the outer envelope of the stellar atmosphere. As the star finally collapses to become a white dwarf, this envelope is expelled from the star to form a planetary nebula (PN) rich in organic molecules. The physics, dynamics, and chemistry of these nebulae are poorly understood and have implications not only for our understanding of the stellar life cycle but also for organic astrochemistry and the creation of prebiotic molecules in interstellar space. We are working toward generating three-dimensional models of planetary nebulae (PNe), which include the size, orientation, shape, expansion rate and mass distribution of the nebula. Such a reconstruction of a PN is a challenging problem for several reasons. First, the data consist of images obtained over time from the Hubble Space Telescope (HST) and spectra obtained from Kitt Peak National Observatory (KPNO) and Cerro Tololo Inter-American Observatory (CTIO). These images are of course taken from a single viewpoint in space, which amounts to a very challenging tomographic reconstruction. Second, the fact that we have two disparate and orthogonal data types requires that we utilize a method that allows these data to be used together to obtain a solution. To address these first two challenges we employ Bayesian model estimation using a parameterized physical model that incorporates much prior information about the known physics of the PN. In our previous works we have found that the forward problem of the comprehensive model is extremely time consuming. To address this challenge, we explore the use of a set of hierarchical models, which allow us to estimate increasingly more detailed sets of model parameters. These hierarchical models of increasing complexity are akin to scientific theories of increasing sophistication, with each new model/theory being a refinement of a previous one by either incorporating additional prior information or by introducing a new set of parameters to model an entirely new phenomenon. We apply these models to both a simulated and a real ellipsoidal PN to initially estimate the position, angular size, and orientation of the nebula as a two-dimensional object and use these estimates to later examine its three-dimensional properties. The efficiency/accuracy tradeoffs of the techniques are studied to determine the advantages and disadvantages of employing a set of hierarchical models over a single comprehensive model.
Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing
2016-01-01
In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading to tumor size overestimation of up to 150% and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential for motion estimation/correction in 4D-PET. PMID:27385378
NASA Astrophysics Data System (ADS)
Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing
2016-08-01
In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading to tumor size overestimation of up to 150% and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential for motion estimation/correction in 4D-PET.
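A minimal sketch of the kind of iteration at the heart of SMEIR may help fix ideas: an OSEM-style multiplicative update over projection subsets, followed by a crude smoothing step standing in for the TV minimization. The system matrix, data, and smoothing step below are illustrative placeholders, not the authors' implementation, and the motion-model update is omitted entirely.

import numpy as np

def osem_tv_iteration(x, A, y, n_subsets=4, tv_weight=0.05):
    # One OSEM pass over ordered subsets of the projection data,
    # followed by a Laplacian smoothing standing in for TV (sketch only).
    subsets = np.array_split(np.arange(y.size), n_subsets)
    for s in subsets:
        As = A[s, :]
        ratio = y[s] / (As @ x + 1e-12)          # measured / estimated counts
        x = x * (As.T @ ratio) / (As.T @ np.ones(s.size) + 1e-12)
    x[1:-1] += tv_weight * (x[2:] - 2.0 * x[1:-1] + x[:-2])
    return np.clip(x, 0.0, None)

# toy usage: 16-pixel image, random nonnegative system matrix
rng = np.random.default_rng(0)
A = rng.random((64, 16))
x_true = rng.random(16)
y = A @ x_true
x = np.ones(16)
for _ in range(20):
    x = osem_tv_iteration(x, A, y)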
Time-of-flight PET time calibration using data consistency
NASA Astrophysics Data System (ADS)
Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan
2018-05-01
This paper presents new data-driven methods for the time-of-flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which involves only the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum-likelihood reconstruction algorithms.
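Although the published estimator is built from the TOF consistency condition, the role of the first moment can be illustrated with a much cruder model: assume the mean TOF residual on the line of response between detectors i and j behaves like the offset difference τ_i − τ_j, and solve for the offsets in least squares. Everything below, including that model, is an illustrative assumption rather than the paper's method.

import numpy as np

def estimate_timing_offsets(lor_pairs, mean_residual, n_det):
    # Least-squares fit of per-detector offsets tau from first-moment
    # residuals r_ij ~ tau_i - tau_j; offsets are pinned to zero mean
    # because they are only defined up to a common constant.
    rows = []
    for i, j in lor_pairs:
        row = np.zeros(n_det)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
    M = np.vstack(rows + [np.ones(n_det)])
    r = np.append(mean_residual, 0.0)
    tau, *_ = np.linalg.lstsq(M, r, rcond=None)
    return tau

# toy check with 6 detectors and all detector pairs
rng = np.random.default_rng(1)
tau_true = rng.normal(0.0, 50e-12, 6)
tau_true -= tau_true.mean()
pairs = [(i, j) for i in range(6) for j in range(i + 1, 6)]
resid = np.array([tau_true[i] - tau_true[j] for i, j in pairs])
print(np.allclose(estimate_timing_offsets(pairs, resid, 6), tau_true))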
Effects of additional data on Bayesian clustering.
Yamazaki, Keisuke
2017-10-01
Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper presents a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kick processes in the merger of two colliding black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aranha, R. F.; Soares, I. Damiao; Tonini, E. V.
2010-11-15
We examine numerically the process of momentum extraction by gravitational waves in the merger of two colliding black holes, in the realm of Robinson-Trautman spacetimes. The initial data already have a common horizon, so that the evolution covers the post-merger phase up to the final configuration of the remnant black hole. The analysis of the momentum flux carried by gravitational waves indicates that two distinct regimes are present in the post-merger phase: (i) an initial accelerated regime, followed by (ii) a deceleration regime in which the deceleration increases rapidly towards a maximum and then decreases to zero, when the gravitational wave emission ceases. The analysis is based on the Bondi-Sachs conservation law for the total momentum of the system. We obtain the total kick velocity V_k imparted on the merged black hole during the accelerated regime (i) and the total antikick velocity V_ak during the decelerated regime (ii), by evaluating the impulse of the gravitational wave flux during both regimes. The distributions of both V_k and V_ak as a function of the symmetric mass ratio η satisfy a simple η-scaling law motivated by post-Newtonian analytical estimates. In the η-scaling formula the Newtonian factor is dominant in the decelerated regime, which generates V_ak, contrary to the behavior in the initial accelerated regime. For an initial infalling velocity v/c ≈ 0.462 of each individual black hole we obtain a maximum kick V_k ≈ 6.4 km/s at η ≈ 0.209, and a maximum antikick V_ak ≈ 109 km/s at η ≈ 0.205. The net antikick velocity (V_ak − V_k) also satisfies a similar η-scaling law with a maximum of approximately 102 km/s, also at η ≈ 0.205, qualitatively consistent with results from numerical relativity simulations and post-Newtonian evaluations of binary black hole inspirals. For larger values of the initial data parameter v/c, substantially larger values of the net antikick velocity are obtained. Based on the several velocity variables obtained, we discuss a possible definition of the center-of-mass motion of the merged system.
NASA Astrophysics Data System (ADS)
Palmer, Margarita; Gomis, Damià; Del Mar Flexas, Maria; Jordà, Gabriel; Naveira-Garabato, Alberto; Jullion, Loic; Tsubouchi, Takamasa
2010-05-01
The ESASSI-08 oceanographic cruise carried out in January 2008 was the most significant milestone of the ESASSI project. ESASSI is the Spanish component of the Synoptic Antarctic Shelf-Slope Interactions (SASSI) study, one of the core projects of the International Polar Year. Hydrographical and biochemical (oxygen, CFCs, nutrients, chlorophyll content, alkalinity, pH, DOC) data were obtained along 11 sections in the South Scotia Ridge (SSR) region, between Elephant and South Orkney Islands. One of the aims of the ESASSI project is to determine the northward outflow of cold and ventilated waters from the Weddell Sea into the Scotia Sea. For that purpose, an accurate estimation of the mass, heat, salt, and oxygen transports over the Ridge is required. An initial analysis of transports across the different sections was first obtained from CTD and ADCP data. The following step has been the application of an inverse method, in order to obtain a better estimate of the net flow for the different water masses present in the region. The set of property-conservation equations considered by the inverse model includes mass, heat and salinity fluxes. The "box" is delimited by the sections along the northern flank of the SSR, between Elephant Island and 50°W, the southern flank of the Ridge, between 51.5°W and 50°W, the 50°W meridian and a diagonal line between Elephant Island and 51.5°W, 61.75°S. Results show that the initial calculations of transports suffered from a significant volume imbalance, due to the inherent errors of ship-ADCP data, the complicated topography and the presence of strong tidal currents in some sections. We present the post-inversion property transports across the rim of the box (and their error bars) for the different water masses.
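The inversion step can be pictured as a small weighted least-squares problem: each conservation equation (mass, heat, salt) contributes one row, and the unknowns are reference-level velocity corrections for the station pairs. The sketch below shows that structure with made-up numbers and a priori covariances; it is a generic Gauss-Markov-style solver, not the ESASSI inverse model itself.

import numpy as np

def box_inversion(A, d, cov_model, W_data):
    # Minimize (A b - d)^T W_data (A b - d) + b^T cov_model^-1 b,
    # where b are reference velocity corrections and d are the
    # pre-inversion property imbalances to be cancelled.
    lhs = A.T @ W_data @ A + np.linalg.inv(cov_model)
    rhs = A.T @ W_data @ d
    return np.linalg.solve(lhs, rhs)

# toy setup: 3 conservation equations (mass, heat, salt), 8 station pairs
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 8))     # property transport per unit correction
d = rng.normal(size=3)          # initial volume/heat/salt imbalances
b = box_inversion(A, d, np.eye(8) * 0.01, np.eye(3) * 100.0)
print("post-inversion residual imbalance:", A @ b - d)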
Determination of source process and the tsunami simulation of the 2013 Santa Cruz earthquake
NASA Astrophysics Data System (ADS)
Park, S. C.; Lee, J. W.; Park, E.; Kim, S.
2014-12-01
In order to understand the characteristics of large tsunamigenic earthquakes, we analyzed the earthquake source process of the 2013 Santa Cruz earthquake and simulated the resulting tsunami. We first estimated a fault length of about 200 km using the 3-day aftershock distribution and a source duration of about 110 seconds using the duration of high-frequency energy radiation (Hara, 2007). Moment magnitude was estimated to be 8.0 using the formula of Hara (2007). From the results of 200 km of fault length and 110 seconds of source duration, we used an initial rupture velocity of 1.8 km/s for the teleseismic waveform inversions. Teleseismic body wave inversion was carried out using the inversion package of Kikuchi and Kanamori (1991). Teleseismic P waveform data from 14 stations were used, and a band-pass filter of 0.005-1 Hz was applied. Our best-fit solution indicated that the earthquake occurred on the northwesterly striking (strike = 305) and shallowly dipping (dip = 13) fault plane. The focal depth was determined to be 23 km, indicating a shallow event. A moment magnitude of 7.8 was obtained, somewhat smaller than the result obtained above and that of a previous study (Lay et al., 2013). A large slip area was seen around the hypocenter. Using the slip distribution obtained by the teleseismic waveform inversion, we calculated the surface deformations using the formulas of Okada (1985), taking them as the initial displacement of sea water for the tsunami. The tsunami simulation was then carried out using the Cornell Multi-grid Coupled Tsunami Model (COMCOT) code and 1 min-grid topographic data for water depth from the General Bathymetric Chart of the Oceans (GEBCO). According to the tsunami simulation, most of the tsunami waves propagated to the southwest and northeast, perpendicular to the fault strike. DART buoy data were used to verify our simulation. In the presentation, we will discuss the results of the source process and tsunami simulation in more detail and compare them with the previous study.
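As a rough cross-check of the quoted source parameters, the standard moment-magnitude relation Mw = (2/3)(log10 M0 − 9.1), with M0 = μAD in N·m, can be evaluated for a fault of this size. The rigidity and average slip below are conventional round-number assumptions, not values from the study.

import math

def moment_magnitude(mu, area_m2, slip_m):
    # Hanks-Kanamori moment magnitude from seismic moment M0 = mu * A * D.
    m0 = mu * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# 200 km x 60 km fault, 2.5 m average slip, rigidity 30 GPa (all assumed)
print(round(moment_magnitude(3.0e10, 200e3 * 60e3, 2.5), 2))  # ~7.9

A value near Mw 7.9 falls between the two magnitude estimates quoted in the abstract, which is the kind of consistency such a back-of-the-envelope check can confirm.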
Aedo, Sócrates; Cavada, Gabriel; Blümel, Juan E; Chedraui, Peter; Fica, Juan; Barriga, Patricio; Brantes, Sergio; Irribarra, Cristina; Vallejo, María; Campodónico, Ítalo
2015-12-01
This study aims to determine time differences (differences in restricted mean survival times [RMSTs]) in the onset of invasive breast cancer, coronary heart disease, stroke, pulmonary embolism, colorectal cancer, and hip fracture between the placebo group and the conjugated equine estrogens 0.625 mg plus medroxyprogesterone acetate 2.5 mg group of the Women's Health Initiative (WHI) trial, based on survival curves of the original report, and to provide an adequate interpretation of the clinical effects of a given intervention. The distribution of the survival function was obtained from cumulative hazard plots of the WHI report; Monte Carlo simulation was performed to obtain censored observations for each outcome, in which the assumptions of the Cox model were evaluated once the corresponding hazard ratios had been estimated. Using estimation methods such as numerical integration, pseudovalues, and flexible parametric modeling, we determined differences in RMSTs for each outcome. The cumulative hazard plots, hazard ratios, and outcome rates obtained from the simulated model did not differ from those of the original WHI report. The differences in RMST between placebo and conjugated equine estrogens 0.625 mg plus medroxyprogesterone acetate 2.5 mg (in flexible parametric modeling) were 1.17 days (95% CI, -2.25 to 4.59) for invasive breast cancer, 7.50 days (95% CI, 2.90 to 12.11) for coronary heart disease, 2.75 days (95% CI, -0.84 to 6.34) for stroke, 4.23 days (95% CI, 1.82 to 6.64) for pulmonary embolism, -2.73 days (95% CI, -5.32 to -0.13) for colorectal cancer, and -2.77 days (95% CI, -5.44 to -0.1) for hip fracture. The differences in RMST for the outcomes of the WHI study are too small to establish clinical risks related to hormone therapy use.
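The RMST at a horizon τ is simply the area under the survival curve up to τ, so an RMST difference is a difference of two integrals. A minimal numeric sketch using trapezoidal integration over digitized curves follows; the exponential curves are synthetic stand-ins, not WHI data.

import numpy as np

def rmst(times, surv, tau):
    # Restricted mean survival time: integral of S(t) from 0 to tau,
    # by the trapezoidal rule on a digitized survival curve.
    mask = times <= tau
    t = np.append(times[mask], tau)
    s = np.append(surv[mask], np.interp(tau, times, surv))
    return float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(t)))

# synthetic curves standing in for two trial arms over 5 years
t = np.linspace(0.0, 5.0, 200)
s_placebo = np.exp(-0.010 * t)
s_treated = np.exp(-0.012 * t)
diff_years = rmst(t, s_placebo, 5.0) - rmst(t, s_treated, 5.0)
print(f"RMST difference: {diff_years * 365.25:.1f} days")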
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
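The bias correction at issue arises because a rating curve fitted in log space, C = aQ^b, systematically underestimates the mean concentration when retransformed. One standard parametric correction multiplies the coefficient by exp(s²/2), with s² the residual variance of the log fit; the sketch below uses that correction as an assumption, since the report's exact estimator is not reproduced here.

import numpy as np

def fit_rating_curve(q, c):
    # Fit c = a * q**b by least squares in log space, then apply the
    # parametric retransformation bias correction a_bc = a * exp(s2 / 2).
    x, y = np.log(q), np.log(c)
    b, log_a = np.polyfit(x, y, 1)
    s2 = (y - (log_a + b * x)).var(ddof=2)
    return np.exp(log_a) * np.exp(s2 / 2.0), b

rng = np.random.default_rng(3)
q = rng.uniform(1.0, 100.0, 500)                      # discharge
c = 0.5 * q**1.4 * np.exp(rng.normal(0.0, 0.4, 500))  # lognormal scatter
a_bc, b = fit_rating_curve(q, c)
print(f"a = {a_bc:.2f} (true 0.5), b = {b:.2f} (true 1.4)")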
NASA Astrophysics Data System (ADS)
Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil
2016-06-01
An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We have shown the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and ƞ). The error energy maps generated for some parameter pairs have successfully revealed the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies have been evaluated with success via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies have been considered, and the results obtained have shown the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, some field anomalies observed for various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), have been considered to estimate the model parameters of the ore bodies. The applications have shown that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm has also been investigated with a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions for both the synthetic and field data examples, the algorithm has provided reliable parameter estimates within the sampling limits of the M-H sampler. Although it is not a common inversion technique in geophysics, the DE algorithm deserves more attention for parameter estimation from potential field data, considering its good accuracy, its low computational cost (in the present problem) and the fact that a well-constructed initial guess is not required to reach the global minimum.
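DE/best/1/bin mutates each population member toward the current best solution using a scaled difference of two random members, then applies binomial crossover. The generic minimizer below illustrates that strategy; the gravity forward model is replaced by a simple stand-in objective, since the anomaly equations are not reproduced in the abstract.

import numpy as np

def de_best_1_bin(obj, bounds, n_pop=30, F=0.7, CR=0.9, n_gen=200, seed=4):
    # Generic DE/best/1/bin: mutant = best + F * (x_r1 - x_r2),
    # binomial crossover with rate CR, greedy selection.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(n_pop, lo.size))
    cost = np.array([obj(p) for p in pop])
    for _ in range(n_gen):
        best = pop[cost.argmin()]
        for i in range(n_pop):
            r1, r2 = rng.choice([k for k in range(n_pop) if k != i], 2, replace=False)
            mutant = np.clip(best + F * (pop[r1] - pop[r2]), lo, hi)
            cross = rng.random(lo.size) < CR
            cross[rng.integers(lo.size)] = True   # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            c = obj(trial)
            if c < cost[i]:
                pop[i], cost[i] = trial, c
    return pop[cost.argmin()], cost.min()

# stand-in objective: recover amplitude, depth and origin of a toy anomaly
true_p = np.array([120.0, 5.0, 2.0])                  # (A, zo, xo)
model = lambda p, x: p[0] / ((x - p[2])**2 + p[1]**2)
x = np.linspace(-20.0, 20.0, 81)
data = model(true_p, x)
obj = lambda p: np.sum((model(p, x) - data)**2)
p_best, misfit = de_best_1_bin(obj, [(1, 500), (0.5, 20), (-10, 10)])
print(p_best, misfit)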
A hydro-mechanical framework for early warning of rainfall-induced landslides (Invited)
NASA Astrophysics Data System (ADS)
Godt, J.; Lu, N.; Baum, R. L.
2013-12-01
Landslide early warning requires an estimate of the location, timing, and magnitude of initial movement, and the change in volume and momentum of material as it travels down a slope or channel. In many locations advance assessment of landslide location, volume, and momentum is possible, but prediction of landslide timing entails understanding the evolution of rainfall and soil-water conditions, and consequent effects on slope stability in real time. Existing schemes for landslide prediction generally rely on empirical relations between landslide occurrence and rainfall amount and duration; however, these relations account for neither temporally variable rainfall nor the variably saturated processes that control the hydro-mechanical response of hillside materials to rainfall. Although limited by the resolution and accuracy of rainfall forecasts and now-casts in complex terrain and by the inherent difficulty in adequately characterizing subsurface materials, physics-based models provide a general means to quantitatively link rainfall and landslide occurrence. To obtain quantitative estimates of landslide potential from physics-based models using observed or forecasted rainfall requires explicit consideration of the changes in effective stress that result from changes in soil moisture and pore-water pressures. The physics that control soil-water conditions are transient, nonlinear, hysteretic, and dependent on material composition and history. In order to examine the physical processes that control infiltration and effective stress in variably saturated materials, we present field and laboratory results describing intrinsic relations among soil water and mechanical properties of hillside materials. At the REV (representative elementary volume) scale, the interaction between pore fluids and solid grains can be effectively described by the relation between soil suction, soil water content, hydraulic conductivity, and suction stress. We show that these relations can be obtained independently from outflow, shear strength, and deformation tests for a wide range of earth materials. We then compare laboratory results with measurements of pore pressure and moisture content from landslide-prone settings and demonstrate that laboratory results obtained for hillside materials are representative of field conditions. These fundamental relations provide a basis to combine observed or forecasted rainfall with in-situ measurements of soil water conditions using hydro-mechanical models that simulate transient variably saturated flow and slope stability. We conclude that early warning using an approach in which in-situ observations are used to establish initial conditions for hydro-mechanical models is feasible in areas of high landslide risk where laboratory characterization of materials is practical and accurate rainfall information can be obtained. Analogous to weather and climate forecasting, such models could then be applied in an ensemble fashion to obtain quantitative estimates of landslide probability and error. Application to broader regions likely awaits breakthroughs in the development of remotely sensed proxies of soil properties and subsurface moisture conditions.
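One way the suction-stress idea enters a slope-stability calculation is through an extended infinite-slope factor of safety, with effective stress closed by σ^s = −S_e(u_a − u_w). The sketch below uses one common form of that factor-of-safety expression together with invented soil parameters; both the closure and the numbers are assumptions for illustration, not results from the abstract.

import math

def suction_stress(se, matric_suction):
    # Suction stress (kPa) under the assumed closure sigma_s = -Se * (ua - uw).
    return -se * matric_suction

def factor_of_safety(phi_deg, beta_deg, c, gamma, z, sigma_s):
    # One common infinite-slope factor-of-safety form extended with
    # suction stress; the form and parameter values are assumed here.
    phi, beta = math.radians(phi_deg), math.radians(beta_deg)
    return (math.tan(phi) / math.tan(beta)
            + 2.0 * c / (gamma * z * math.sin(2.0 * beta))
            - sigma_s * (math.tan(beta) + 1.0 / math.tan(beta))
            * math.tan(phi) / (gamma * z))

# illustrative soil on a 40-degree slope with a 1 m failure depth
sig_s = suction_stress(se=0.6, matric_suction=10.0)   # kPa
print(round(factor_of_safety(35.0, 40.0, 2.0, 18.0, 1.0, sig_s), 2))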
Caro-Vega, Yanink; del Rio, Carlos; Lima, Viviane Dias; Lopez-Cervantes, Malaquias; Crabtree-Ramirez, Brenda; Bautista-Arredondo, Sergio; Colchero, M Arantxa; Sierra-Madero, Juan
2015-01-01
To estimate the impact of late ART initiation on HIV transmission among men who have sex with men (MSM) in Mexico, an HIV transmission model was built to estimate the number of infections transmitted by HIV-infected MSM (MSM-HIV+) in the short and long term. Sexual risk behavior data were estimated from a nationwide study of MSM. CD4+ counts at ART initiation from a representative national cohort were used to estimate time since infection. The numbers of MSM-HIV+ on treatment and suppressed were estimated from surveillance and government reports. A status quo scenario (SQ) and scenarios of early ART initiation and increased HIV testing were modeled. We estimated 14239 new HIV infections per year from MSM-HIV+ in Mexico. In SQ, MSM take an average of 7.4 years since infection to initiate treatment, with a median CD4+ count of 148 cells/mm3 (25th-75th percentiles, 52-266). In SQ, 68% of MSM-HIV+ are not aware of their HIV status and transmit 78% of new infections. Increasing the CD4+ count at ART initiation to 350 cells/mm3 shortened the time since infection to 2.8 years. Increasing HIV testing to cover 80% of undiagnosed MSM resulted in a reduction of 70% in new infections over 20 years. With ART initiation at 500 cells/mm3 and increased HIV testing, the reduction would be 75% over 20 years. A substantial number of new HIV infections in Mexico are transmitted by undiagnosed and untreated MSM-HIV+. An aggressive increase in HIV testing coverage and initiating ART at a CD4 count of 500 cells/mm3 in this population would significantly benefit individuals and decrease the number of new HIV infections in Mexico.
Calibration of the ``Simplified Simple Biosphere Model—SSiB'' for the Brazilian Northeast Caatinga
NASA Astrophysics Data System (ADS)
do Amaral Cunha, Ana Paula Martins; dos Santos Alvalá, Regina Célia; Correia, Francis Wagner Silva; Kubota, Paulo Yoshio
2009-03-01
The Brazilian Northeast region is covered largely by vegetation adapted to arid conditions and with varied physiognomy, called caatinga. It occupies an extension of about 800,000 km2, which corresponds to 70% of the region. In recent decades, considerable progress in understanding the micrometeorological processes has been achieved, with results that were incorporated into soil-vegetation-atmosphere transfer schemes (SVATS) to study the momentum, energy, water vapor, carbon cycle and vegetation dynamics changes of different ecosystems. Notwithstanding, knowledge of the parameters and physical or physiological characteristics of the vegetation and soil of the caatinga region is very scarce. The objective of this work was therefore to perform a calibration of the parameters of the SSiB model for the Brazilian Northeast caatinga. Micrometeorological and hydrological data collected from July 2004 to June 2005, obtained in the Agricultural Research Center of the Semi-Arid Tropic (CPATSA), were used. Preceding the calibration process, a sensitivity study of the SSiB model was performed in order to find the parameters to which the exchange processes between the surface and the atmosphere are most sensitive. The results showed that the B parameter, the soil moisture potential at saturation (ψs), the hydraulic conductivity of saturated soil (ks) and the volumetric moisture at saturation (θs) produce large variations in the turbulent fluxes. With the initial parameters, the SSiB model showed the best results for net radiation, and the latent heat (sensible heat) flux was over-estimated (under-estimated) for all simulation periods. With the calibrated parameters, better values of the latent and sensible heat fluxes were obtained. The calibrated parameters were also used for a validation of the surface fluxes considering data from July 2005 to September 2005. The results showed that the model generated better estimates of the latent and sensible heat fluxes, with low root mean square error. With better estimates of the turbulent fluxes, it was possible to obtain a more representative energy partitioning for the caatinga. Therefore, it is expected that with this calibrated SSiB model, coupled to meteorological models, it will be possible to obtain more realistic climate and weather forecasts for the Brazilian Northeast region.
Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation
NASA Astrophysics Data System (ADS)
Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.
2014-12-01
Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.
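For linear slip-weakening friction, the published theoretical estimates mentioned above share the scaling L_c ∝ μ D_c (τ_s − τ_d)/(τ_0 − τ_d)², where τ_s, τ_d, and τ_0 are the static, dynamic, and initial stresses. The sketch below evaluates a Day (1982)-style 3D critical radius with assumed stress values; the paper's own new high-background-stress equations are not reproduced here.

from math import pi

def critical_radius(mu, d_c, tau_s, tau_d, tau_0):
    # Day (1982)-style critical radius for slip-weakening friction:
    # r_c = (7 * pi / 24) * mu * d_c * (tau_s - tau_d) / (tau_0 - tau_d)**2.
    return 7.0 * pi / 24.0 * mu * d_c * (tau_s - tau_d) / (tau_0 - tau_d)**2

# shear modulus 32 GPa, Dc = 0.4 m, stresses in Pa (all assumed)
r_c = critical_radius(32e9, 0.4, 81.6e6, 63.0e6, 70.0e6)
print(f"critical radius ~ {r_c / 1e3:.2f} km")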
Planck 2015 results. XVII. Constraints on primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Arroja, F.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hamann, J.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Heavens, A.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huang, Z.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kim, J.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lacasa, F.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Marinucci, D.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Münchmeyer, M.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Peiris, H. V.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Racine, B.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Shiraishi, M.; Smith, K.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutter, P.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Troja, A.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
The Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG). Using three classes of optimal bispectrum estimators - separable template-fitting (KSW), binned, and modal - we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result from temperature alone f_NL^local = 2.5 ± 5.7, f_NL^equil = -16 ± 70, and f_NL^ortho = -34 ± 32 (68% CL, statistical). Combining temperature and polarization data we obtain f_NL^local = 0.8 ± 5.0, f_NL^equil = -4 ± 43, and f_NL^ortho = -26 ± 21 (68% CL, statistical). The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. The effect of time-domain de-glitching systematics on the bispectrum is negligible. In spite of these test outcomes we conservatively label the results including polarization data as preliminary, owing to a known mismatch of the noise model in simulations and the data. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and derive constraints on early universe scenarios that generate primordial NG, including general single-field models of inflation, axion inflation, initial state modifications, models producing parity-violating tensor bispectra, and directionally dependent vector models. We present a wide survey of scale-dependent feature and resonance models, accounting for the "look elsewhere" effect in estimating the statistical significance of features. We also look for isocurvature NG, and find no signal, but we obtain constraints that improve significantly with the inclusion of polarization. The primordial trispectrum amplitude in the local model is constrained to be g_NL^local = (-0.9 ± 7.7) × 10^4 (68% CL, statistical), and we perform an analysis of trispectrum shapes beyond the local case. The global picture that emerges is one of consistency with the premises of the ΛCDM cosmology, namely that the structure we observe today was sourced by adiabatic, passive, Gaussian, and primordial seed perturbations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ade, P. A. R.; Aghanim, N.; Arnaud, M.
We report that the Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG). Using three classes of optimal bispectrum estimators - separable template-fitting (KSW), binned, and modal - we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result from temperature alone f_NL^local = 2.5 ± 5.7, f_NL^equil = -16 ± 70, and f_NL^ortho = -34 ± 32 (68% CL, statistical). Combining temperature and polarization data we obtain f_NL^local = 0.8 ± 5.0, f_NL^equil = -4 ± 43, and f_NL^ortho = -26 ± 21 (68% CL, statistical). The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. The effect of time-domain de-glitching systematics on the bispectrum is negligible. In spite of these test outcomes we conservatively label the results including polarization data as preliminary, owing to a known mismatch of the noise model in simulations and the data. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and derive constraints on early universe scenarios that generate primordial NG, including general single-field models of inflation, axion inflation, initial state modifications, models producing parity-violating tensor bispectra, and directionally dependent vector models. We present a wide survey of scale-dependent feature and resonance models, accounting for the "look elsewhere" effect in estimating the statistical significance of features. We also look for isocurvature NG, and find no signal, but we obtain constraints that improve significantly with the inclusion of polarization. The primordial trispectrum amplitude in the local model is constrained to be g_NL^local = (-0.9 ± 7.7) × 10^4 (68% CL, statistical), and we perform an analysis of trispectrum shapes beyond the local case. The global picture that emerges is one of consistency with the premises of the ΛCDM cosmology, namely that the structure we observe today was sourced by adiabatic, passive, Gaussian, and primordial seed perturbations.
Planck 2015 results: XVII. Constraints on primordial non-Gaussianity
Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...
2016-09-20
We report that the Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG). Using three classes of optimal bispectrum estimators - separable template-fitting (KSW), binned, and modal - we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result from temperature alone f_NL^local = 2.5 ± 5.7, f_NL^equil = -16 ± 70, and f_NL^ortho = -34 ± 32 (68% CL, statistical). Combining temperature and polarization data we obtain f_NL^local = 0.8 ± 5.0, f_NL^equil = -4 ± 43, and f_NL^ortho = -26 ± 21 (68% CL, statistical). The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. The effect of time-domain de-glitching systematics on the bispectrum is negligible. In spite of these test outcomes we conservatively label the results including polarization data as preliminary, owing to a known mismatch of the noise model in simulations and the data. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and derive constraints on early universe scenarios that generate primordial NG, including general single-field models of inflation, axion inflation, initial state modifications, models producing parity-violating tensor bispectra, and directionally dependent vector models. We present a wide survey of scale-dependent feature and resonance models, accounting for the "look elsewhere" effect in estimating the statistical significance of features. We also look for isocurvature NG, and find no signal, but we obtain constraints that improve significantly with the inclusion of polarization. The primordial trispectrum amplitude in the local model is constrained to be g_NL^local = (-0.9 ± 7.7) × 10^4 (68% CL, statistical), and we perform an analysis of trispectrum shapes beyond the local case. The global picture that emerges is one of consistency with the premises of the ΛCDM cosmology, namely that the structure we observe today was sourced by adiabatic, passive, Gaussian, and primordial seed perturbations.
Estudio de la población estelar de varios cúmulos en Carina
NASA Astrophysics Data System (ADS)
Molina-Lera, J. A.; Baume, G. L.; Carraro, G.; Costa, E.
2015-08-01
Based on deep photometric data, complemented with infrared 2MASS data, we conducted an analysis of the fundamental parameters of six open clusters located in the Carina region. To perform a systematic study we developed a specialized code. In particular, we investigated the behavior of the respective lower main sequences. Our analysis indicated the presence of a significant population of pre-main-sequence stars in several of the clusters, from which we obtained estimates of contraction ages. Furthermore, we determined the slopes of the initial mass functions of the studied clusters.
Commercial fishery data from three proposed OTEC sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, C.J.; Jones, A.T.
1981-06-01
The operation of Ocean Thermal Energy Conversion (OTEC) power plants may affect fish populations in the regions surrounding the plants. As an initial step in estimating the possible impacts of OTEC power plants on local fishery resources at three proposed sites, commercial fishery records were used to identify common commercially important species and to obtain a general impression of the abundance of those species at the sites. The sites examined are in the waters adjacent to Punta Tuna, Puerto Rico (PROTEC), and in Hawaiian waters offshore from Kahe Point, Oahu (O'OTEC), and Keahole Point, Hawaii (HOTEC).
Optimal pattern synthesis for speech recognition based on principal component analysis
NASA Astrophysics Data System (ADS)
Korsun, O. N.; Poliyev, A. V.
2018-02-01
This work develops and presents an algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition. The optimal pattern is formed by decomposing an initial pattern into principal components, which reduces the dimensionality of the multi-parameter optimization problem. Next, training samples are introduced, and optimal estimates of the principal-component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we present experimental results showing the improvement in speech recognition achieved by the proposed optimization algorithm.
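The dimensionality reduction works by projecting each pattern onto a handful of leading principal components, so the subsequent optimization runs over a few coefficients instead of the full feature vector. A minimal SVD-based sketch of that decomposition and reconstruction follows; the feature matrix is a random stand-in for real speech patterns.

import numpy as np

def pca_project(X, k):
    # Project rows of X onto the k leading principal components; patterns
    # can be rebuilt as mean + coefficients @ components.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    comps = Vt[:k]
    return (X - mu) @ comps.T, comps, mu

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 64))      # stand-in speech feature vectors
coeffs, comps, mu = pca_project(X, k=8)
X_hat = mu + coeffs @ comps         # low-dimensional reconstruction
print("reconstruction rms:", np.sqrt(((X - X_hat)**2).mean()))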
Positive ion temperature effect on the plasma-wall transition
NASA Astrophysics Data System (ADS)
Morales Crespo, R.
2018-06-01
This paper analyses the plasma-wall interaction of a plasma in contact with a conducting planar surface when the positive-ion temperature is not negligible compared with the electron temperature. By formulating the model as an initial-value problem, we obtain the electric potential from the plasma to the wall, as well as several features useful for experimental applications, such as the positive current-to-voltage characteristics, the saturation current density, the floating potential and an estimate of the sheath thickness. Finally, we analyse how all these quantities depend on the ionization degree and the positive-ion temperature.
Reef fish communities are spooked by scuba surveys and may take hours to recover
Cheal, Alistair J.; Miller, Ian R.
2018-01-01
Ecological monitoring programs typically aim to detect changes in the abundance of species of conservation concern or which reflect system status. Coral reef fish assemblages are functionally important for reef health and these are most commonly monitored using underwater visual surveys (UVS) by divers. In addition to estimating numbers, most programs also collect estimates of fish lengths to allow calculation of biomass, an important determinant of a fish’s functional impact. However, diver surveys may be biased because fishes may either avoid or are attracted to divers and the process of estimating fish length could result in fish counts that differ from those made without length estimations. Here we investigated whether (1) general diver disturbance and (2) the additional task of estimating fish lengths affected estimates of reef fish abundance and species richness during UVS, and for how long. Initial estimates of abundance and species richness were significantly higher than those made on the same section of reef after diver disturbance. However, there was no evidence that estimating fish lengths at the same time as abundance resulted in counts different from those made when estimating abundance alone. Similarly, there was little consistent bias among observers. Estimates of the time for fish taxa that avoided divers after initial contact to return to initial levels of abundance varied from three to 17 h, with one group of exploited fishes showing initial attraction to divers that declined over the study period. Our finding that many reef fishes may disperse for such long periods after initial contact with divers suggests that monitoring programs should take great care to minimise diver disturbance prior to surveys. PMID:29844998
NASA Astrophysics Data System (ADS)
Pons, A.; David, C.; Fortin, J.; Stanchits, S.; MenéNdez, B.; Mengus, J. M.
2011-03-01
To investigate the effect of compaction bands (CB) on fluid flow, capillary imbibition experiments were performed on Bentheim sandstone specimens (initial porosity ˜22.7%) using an industrial X-ray scanner. We used a three-step procedure combining (1) X-ray imaging of capillary rise in intact Bentheim sandstone, (2) formation of compaction bands under triaxial tests, at 185 MPa effective pressure, with acoustic emission (AE) recording for localization of the induced damage, and (3) X-ray imaging of capillary rise in the damaged specimens after unloading. The experiments were performed on intact cylindrical specimens, 5 cm in diameter and 10.5 cm in length, cored in different orientations (parallel or perpendicular to the bedding). Analysis of the images obtained at different stages of the capillary imbibition shows that the presence of CB slows down the imbibition and disturbs the geometry of water flow. In addition, we show that the CB geometry derived from X-ray density map analysis is well correlated with the AE locations obtained during the triaxial test. The analysis of the water front kinetics was conducted using a simple theoretical model, which confirmed that compaction bands act as a barrier to fluid flow, though not a fully impermeable one. We estimate a permeability contrast of a factor of ˜3 between the host rock and the compaction bands. This estimate of the permeability inside the compaction band is consistent with estimates from field studies of similar sandstones but differs by one order of magnitude from previous laboratory measurements.
A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.
Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff
2014-01-01
Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unknown even if a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE-2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automation algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution. © 2013, National Ground Water Association.
Joint constraints on galaxy bias and σ_8 through the N-pdf of the galaxy number density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnalte-Mur, Pablo; Martínez, Vicent J.; Vielva, Patricio
We present a full description of the N-probability density function of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the commonly adopted assumption that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of those parameters defining the dark matter correlations, in particular its amplitude (σ_8). It also provides the proper framework to perform model selection between two competitive hypotheses. The parameter estimation capabilities of the N-pdf are proved by SDSS-like simulations (both ideal log-normal simulations and mocks obtained from the Las Damas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes M_r ≤ -20). We obtain b̂ = 1.193 ± 0.074 and σ̄_8 = 0.862 ± 0.080, for galaxy number density fluctuations in cells of the size of 30 h^-1 Mpc. Different model selection criteria show that galaxy biasing is clearly favoured.
NASA Astrophysics Data System (ADS)
Joglekar, Prasad; Shastry, Karthik; Hulbert, Steven; Weiss, Alex
2014-03-01
Auger Photoelectron Coincidence Spectroscopy (APECS), in which the Auger spectrum is measured in coincidence with the core-level photoelectron, is capable of pulling difficult-to-observe low-energy Auger peaks out of a large background due mostly to inelastically scattered valence band photoelectrons. However, the APECS method alone cannot eliminate the background due to valence band (VB) photoemission processes in which the initial photon energy is shared by two or more electrons and one of the electrons is in the energy range of the core-level photoemission peak. Here we describe an experimental method for estimating the contributions from these background processes in the case of the Ag N23VV Auger spectrum obtained in coincidence with the 4p photoemission peak. A beam of 180 eV photons was incident on an Ag sample, and a series of coincidence measurements were made with one cylindrical mirror analyzer (CMA) set at fixed energies between the core and the valence band and the other CMA scanned over a range corresponding to electrons leaving the surface between 0 eV and 70 eV. The spectra obtained were then used to estimate the background in the APECS spectra due to multi-electron and inelastic VB photoemission processes. NSF, Welch Foundation.
Improvement in thrust force estimation of solenoid valve considering minor hysteresis loop
NASA Astrophysics Data System (ADS)
Yoon, Myung-Hwan; Choi, Yun-Yong; Hong, Jung-Pyo
2017-05-01
The solenoid valve is a very important hydraulic actuator for an automatic transmission in terms of shift quality. Ideal control requires that the clutch pressure and the input current have the same form. However, a gap between pressure and current can occur, which causes a delay in the transmission and a decrease in shift quality. This problem is caused by the hysteresis phenomenon: as an ascending or descending magnetic field is applied to the solenoid, different thrust forces are generated. This paper suggests a method for calculating the thrust force that accounts for the hysteresis phenomenon, so that an accurate force can be obtained. Such hysteresis occurs in ferromagnetic materials; the phenomenon includes a minor hysteresis loop that begins on the initial magnetization curve and is generated by a DC-biased field density. Because the solenoid core is ferromagnetic, a more accurate thrust force is obtained by applying the minor hysteresis loop than by considering only the initial magnetization curve. An analytical background and a detailed explanation of measuring the minor hysteresis loop are presented. Furthermore, experimental and finite element analysis results are compared for verification.
Development of a solid state laser of Nd:YLF
NASA Astrophysics Data System (ADS)
Doamaralneto, R.
CW laser action was obtained at room temperature from a Nd:YLF crystal in an astigmatically compensated cavity, pumped by an argon laser. This laser was entirely designed, constructed and characterized in our laboratories. It initiates a broader project on laser development with several applications, such as nuclear fusion, industry, medicine and telemetry. Through the study of the optical properties of the Nd:YLF crystal, laser operation was predicted using a small-volume gain medium in the mentioned cavity, pumped by the Ar 514.5 nm laser line. To obtain laser action at the σ (1.053 μm) and π (1.047 μm) polarizations, an active medium was prepared consisting of a crystalline plate with a convenient crystallographic orientation. The laser characterization is in reasonable agreement with the initial predictions. For a 3.5% output mirror transmission, the oscillation threshold is about 0.15 W incident on the crystal, depending upon the sample used. For 1 W of incident pump light, the output power is estimated to be 12 mW, which corresponds to almost 1.5% slope efficiency. The versatile arrangement is applicable to almost all optically pumped solid-state laser materials.
String-theoretic breakdown of effective field theory near black hole horizons
NASA Astrophysics Data System (ADS)
Dodelson, Matthew; Silverstein, Eva
2017-09-01
We investigate the validity of the equivalence principle near horizons in string theory, analyzing the breakdown of effective field theory caused by longitudinal string spreading effects. An experiment is set up where a detector is thrown into a black hole a long time after an early infalling string. Light cone gauge calculations, taken at face value, indicate a detectable level of root-mean-square longitudinal spreading of the initial string as measured by the late infaller. This results from the large relative boost between the string and detector in the near-horizon region, which develops automatically despite their modest initial energies outside the black hole and the weak curvature in the geometry. We subject this scenario to basic consistency checks, using these to obtain a relatively conservative criterion for its detectability. In a companion paper, we exhibit longitudinal nonlocality in well-defined gauge-invariant S-matrix calculations, obtaining results consistent with the predicted spreading albeit not in a direct analog of the black hole process. We discuss applications of this effect to the firewall paradox, and estimate the time and distance scales it predicts for new physics near black hole and cosmological horizons.
NASA Astrophysics Data System (ADS)
Li, Jianping; Yang, Bisheng; Chen, Chi; Huang, Ronggang; Dong, Zhen; Xiao, Wen
2018-02-01
Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration lead to failure of the registration between a panoramic image sequence and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. Firstly, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS- and IMU-aided structure from motion (SfM). The initial EoPs of the panoramic images are obtained at the same time. Secondly, vehicles in the panoramic images are extracted by Faster-RCNN as candidate primitives to be matched with potential corresponding primitives in the point clouds according to the initial EoPs. Finally, the translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs based on Particle Swarm Optimization (PSO), resulting in a finer registration between the panoramic image sequence and the point clouds. Experiments on two challenging urban scenes were carried out to assess the proposed method, and the final registration errors of both scenes were less than three pixels, which demonstrates a high level of automation, robustness and accuracy.
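The translation refinement is a three-parameter search, which is well suited to a particle swarm. Below is a generic PSO maximizer with a stand-in objective in place of the real overlap score between projected point-cloud primitives and detected vehicles; the hyperparameters are conventional defaults, not the paper's settings.

import numpy as np

def pso(obj, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=6):
    # Generic particle swarm maximizer over a box-bounded search space.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([obj(p) for p in x])
        better = val > pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

# stand-in objective peaking at a hypothetical true translation (m)
true_t = np.array([0.20, -0.10, 0.05])
overlap = lambda t: -np.sum((t - true_t)**2)   # placeholder overlap score
t_best, score = pso(overlap, [(-1.0, 1.0)] * 3)
print(t_best)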
Optical Studies of Orbital Debris at GEO Using Two Telescopes
NASA Technical Reports Server (NTRS)
Seitzer, P.; Abercromby, K. J.; Rodriquez,H. M.; Barker, E.
2008-01-01
Beginning in March, 2007, optical observations of debris at geosynchronous orbit (GEO) were commenced using two telescopes simultaneously at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. The University of Michigan's 0.6/0.9-m Schmidt telescope MODEST (for Michigan Orbital DEbris Survey Telescope) was used in survey mode to find objects that potentially could be at GEO. Because GEO objects only appear in this telescope's field of view for an average of 5 minutes, a full six-parameter orbit cannot be determined. Interrupting the survey for follow-up observations leads to incompleteness in the survey results. Instead, as objects are detected on MODEST, an initial prediction assuming a circular orbit is made of where the object will be for the next hour, and the objects are reacquired as quickly as possible on the CTIO 0.9-m telescope. This second telescope then follows up during the first night and, if possible, over several more nights to obtain the maximum time arc possible, and the best six-parameter orbit. Our goal is to obtain an initial orbit for all detected objects fainter than R = 15th magnitude in order to estimate the orbital distribution of objects selected on the basis of two observational criteria: magnitude and angular rate. Objects fainter than 15th magnitude are largely uncataloged and have a completely different angular rate distribution than brighter objects. Combining the information obtained for both faint and bright objects yields a more complete picture of the debris environment rather than just concentrating on the faint debris. One objective is to estimate what fraction of objects selected on the basis of angular rate are not at GEO. A second objective is to obtain magnitudes and colors in standard astronomical filters (BVRI) for comparison with reflectance spectra of likely spacecraft materials. This paper reports on results from two 14-night runs with both telescopes, in March and November 2007: (1) A significant fraction of objects fainter than R = 15th magnitude have eccentric orbits (e > 0.1). (2) Virtually all objects selected on the basis of angular rate are in the GEO and GTO regimes. (3) Calibrated magnitudes and colors in BVRI were obtained for many objects fainter than R = 15th magnitude. This work is supported by NASA's Orbital Debris Program Office, Johnson Space Center, Houston, Texas, USA.
Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications
Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI
2012-05-29
A method for initiating a regeneration mode in a selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in the conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
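A minimal sketch of this logic, assuming (purely for illustration) an exponential decay of conversion efficiency whose rate constant is looked up from temperature and dosing-rate bands; the band thresholds, decay form, and efficiency floor are assumptions, not values from the patent.

import math

def should_regenerate(temp_c, dosing_rate, eta0, hours_in_service,
                      decay_rates, eta_min=0.6):
    """Flag regeneration when the estimated HC-SCR conversion efficiency
    falls below eta_min. decay_rates maps (temperature band, dosing band)
    to an assumed decay constant k in 1/h; efficiency is modeled here as
    eta0 * exp(-k * t)."""
    t_band = "high" if temp_c > 350.0 else "low"      # assumed threshold
    d_band = "high" if dosing_rate > 1.0 else "low"   # assumed threshold
    k = decay_rates[(t_band, d_band)]
    eta = eta0 * math.exp(-k * hours_in_service)
    return eta < eta_min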
Adjoint tomography and centroid-moment tensor inversion of the Kanto region, Japan
NASA Astrophysics Data System (ADS)
Miyoshi, T.
2017-12-01
A three-dimensional seismic wave speed model of the Kanto region of Japan was developed using adjoint tomography based on large-scale computing. Starting with a model based on previous travel-time tomographic results, we inverted the waveforms obtained at broadband seismic stations from 140 local earthquakes in the Kanto region to obtain the P- and S-wave speeds Vp and Vs. The synthetic displacements were calculated using the spectral element method (SEM; e.g. Komatitsch and Tromp 1999; Peter et al. 2011), in which the Kanto region was parameterized using 16 million grid points. The model parameters Vp and Vs were updated iteratively by Newton's method using the misfit and Hessian kernels until the misfit between the observed and synthetic waveforms was minimized. The proposed model reveals several anomalous areas with extremely low Vs values in comparison with the initial model. The synthetic waveforms computed with the new model for the selected earthquakes fit the observed waveforms better than those of the initial model in different period ranges within 5-30 s. In the present study, to limit the computational cost, all centroid times of the source solutions were determined before the structural inversion using time shifts based on cross-correlation. Additionally, the parameters of the centroid-moment solutions were fully determined using the SEM assuming the 3D structure (e.g. Liu et al. 2004). As a preliminary result, the new solutions were essentially the same as the initial solutions, which may indicate that the 3D structure has little effect on the source estimation. Acknowledgements: This study was supported by JSPS KAKENHI Grant Number 16K21699.
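The cross-correlation time-shift step can be illustrated in a few lines of NumPy; this is a generic sketch, not the authors' code.

import numpy as np

def centroid_time_shift(obs, syn, dt):
    """Centroid-time correction: the lag (in seconds) that maximizes the
    cross-correlation between observed and synthetic waveforms sampled
    at interval dt."""
    cc = np.correlate(obs, syn, mode="full")
    lag_samples = np.argmax(cc) - (len(syn) - 1)
    return lag_samples * dt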
Rambo, Philip L; Callahan, Jennifer L; Hogan, Lindsey R; Hullmann, Stephanie; Wrape, Elizabeth
2015-01-01
Recent efforts have contributed to significant advances in the detection of malingered performances in adults during cognitive assessment. However, children's ability to purposefully underperform has received relatively little attention. The purpose of the present investigation was to examine children's performances on common intellectual measures, as well as two symptom validity measures: the Test of Memory Malingering and the Dot-Counting Test. This was accomplished through the administration of measures to children ages 6 to 12 years old in randomly assigned full-effort (control) and poor-effort (treatment) conditions. Prior to randomization, children's general intellectual functioning (i.e., IQ) was estimated via administration of the Kaufman Brief Intellectual Battery-Second Edition (KBIT-2). Multivariate analyses revealed that the conditions significantly differed on some but not all administered measures. Specifically, children's estimated IQ in the treatment condition significantly differed from the full-effort IQ initially obtained from the same children on the KBIT-2, as well as from the IQs obtained in the full-effort control condition. These findings suggest that children are fully capable of willfully underperforming during cognitive testing; however, consistent with prior investigations, some measures evidence greater sensitivity than others in evaluating effort.
Abrecht, David G; Schwantes, Jon M
2015-03-03
This paper extends the preliminary linear free energy correlations for radionuclide release performed by Schwantes et al. following the Fukushima-Daiichi Nuclear Power Plant accident. Through evaluations of the molar fractionations of radionuclides deposited in the soil relative to modeled radionuclide inventories, we confirm the initial source of the radionuclides to the environment to be the active reactors rather than the spent fuel pool. Linear correlations of the form ln χ = −α(ΔG°rxn(TC)/(R·TC)) + β were obtained between the deposited concentrations and the reduction potentials of the fission product oxide species, using multiple reduction schemes to calculate ΔG°rxn(TC). These models allowed an estimate of the upper bound for the reactor temperature TC of between 2015 and 2060 K, providing insight into the limiting factors to vaporization and release of fission products during the reactor accident. Estimates of the release of medium-lived fission products 90Sr, 121mSn, 147Pm, 144Ce, 152Eu, 154Eu, 155Eu, and 151Sm through atmospheric venting during the first month following the accident were obtained, indicating that large quantities of 90Sr and radioactive lanthanides were likely to remain in the damaged reactor cores.
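For a fixed trial core temperature TC, the correlation parameters α and β follow from an ordinary least-squares line fit; scanning TC for the best fit then brackets the temperature. A minimal sketch under these assumptions (variable names are illustrative):

import numpy as np

def fit_lfe_correlation(mole_fractions, dG_rxn, T_c):
    """Fit ln(chi) = -alpha * dG/(R*T_c) + beta by least squares.
    mole_fractions: deposited molar fractions chi; dG_rxn: Gibbs free
    energies of the reduction reactions (J/mol) evaluated at the trial
    core temperature T_c (K). Returns (alpha, beta)."""
    R = 8.314  # J/(mol K)
    x = np.asarray(dG_rxn) / (R * T_c)
    y = np.log(np.asarray(mole_fractions))
    slope, beta = np.polyfit(x, y, 1)
    return -slope, beta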
Effect of wear on the burst strength of l-80 steel casing
NASA Astrophysics Data System (ADS)
Irawan, S.; Bharadwaj, A. M.; Temesgen, B.; Karuppanan, S.; Abdullah, M. Z. B.
2015-12-01
Casing wear has recently become an area of research interest in the oil and gas industry, especially in extended reach well drilling. The burst strength of a worn-out casing is one of the most significantly affected mechanical properties, yet it remains an area where little research has been done. The most commonly used equations to calculate the resulting burst strength after wear are the Barlow, initial yield burst, full yield burst and rupture burst equations. The objective of this study was to estimate casing burst strength after wear through Finite Element Analysis (FEA). It included calculation and comparison of the different theoretical burst pressures with the simulation results, along with the effect of different wear shapes on L-80 casing material. The von Mises stress was used in the estimation of the burst pressure. The results obtained show that the casing burst strength decreases as the wear percentage increases. Moreover, the burst strength value of the casing obtained from the FEA is higher than the theoretical burst strength values. Casing with crescent-shaped wear gives the highest burst strength value when simulated under nonlinear analysis.
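The Barlow estimate mentioned above is a one-line formula; applying it with the worn wall thickness is the simplest way to see why burst strength drops as wear grows. The example values below are illustrative, not the study's casing geometry.

def barlow_burst_pressure(yield_strength, wall_thickness, outer_diameter,
                          wear_fraction=0.0):
    """Barlow burst estimate P = 2*S*t/D, with the wall thinned by wear:
    the remaining thickness t*(1 - wear_fraction) governs the estimate."""
    t_remaining = wall_thickness * (1.0 - wear_fraction)
    return 2.0 * yield_strength * t_remaining / outer_diameter

# e.g. 80,000 psi yield, 0.5 in wall, 9.625 in OD, 20% wear (hypothetical):
# barlow_burst_pressure(80000, 0.5, 9.625, 0.20)  ->  about 6650 psi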
McCune, Jeannine S.; Baker, K. Scott; Blough, David K.; Gamis, Alan; Bemer, Meagan J.; Kelton-Rehkopf, Megan C.; Winter, Laura; Barrett, Jeffrey S.
2016-01-01
Personalizing intravenous (IV) busulfan doses in children using therapeutic drug monitoring (TDM) is an integral component of hematopoietic cell transplant. The authors sought to characterize initial dosing and TDM of IV busulfan, along with factors associated with busulfan clearance, in 729 children who underwent busulfan TDM from December 2005 to December 2008. The initial IV busulfan dose in children weighing ≤12 kg ranged 4.8-fold, with only 19% prescribed the package insert dose of 1.1 mg/kg. In those children weighing >12 kg, the initial dose ranged 5.4-fold, and 79% were prescribed the package insert dose. The initial busulfan dose achieved the target exposure in only 24.3% of children. A wide range of busulfan exposures were targeted for children with the same disease (eg, 39 target busulfan exposures for the 264 children diagnosed with acute myeloid leukemia). Considerable heterogeneity exists regarding when TDM is conducted and the number of pharmacokinetic samples obtained. Busulfan clearance varied by age and dosing frequency but not by underlying disease. The authors’ group is currently evaluating how using population pharmacokinetics to optimize initial busulfan dose and TDM (eg, limited sampling schedule in conjunction with maximum a posteriori Bayesian estimation) may affect clinical outcomes in children. PMID:23444282
Cost-effectiveness in the contemporary management of critical limb ischemia with tissue loss.
Barshes, Neal R; Chambers, James D; Cohen, Joshua; Belkin, Michael
2012-10-01
The care of patients with critical limb ischemia (CLI) and tissue loss is notoriously challenging and expensive. We evaluated the cost-effectiveness of various management strategies to identify those that would optimize value to patients. A probabilistic Markov model was used to create a detailed simulation of patient-oriented outcomes, including clinical events, wound healing, functional outcomes, and quality-adjusted life-years (QALYs), after various management strategies in a CLI patient cohort during a 10-year period. Direct and indirect cost estimates for these strategies were obtained using transition cost-accounting methodology. Incremental cost-effectiveness ratios (ICERs), in 2009 U.S. dollars per QALY, were calculated relative to the most conservative management strategy of local wound care with amputation as needed. With an ICER of $47,735/QALY, an initial surgical bypass with subsequent endovascular revision(s) as needed was the most cost-effective alternative to local wound care alone. Endovascular-first management strategies achieved comparable clinical outcomes but at higher cost (ICERs ≥$101,702/QALY); however, endovascular management did become cost-effective when the initial foot wound closure rate was >37% or when procedural costs were decreased by >42%. Primary amputation was dominated (less effective and more costly than wound care alone). Contemporary clinical effectiveness and cost estimates show that an initial surgical bypass is the most cost-effective alternative to local wound care alone for CLI with tissue loss and can be supported even in a cost-averse health care environment. Copyright © 2012. Published by Mosby, Inc.
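For reference, the ICER used throughout is simply the cost difference divided by the QALY difference against a comparator; a strategy that is costlier and no more effective is dominated. A minimal sketch:

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio ($/QALY) of a strategy versus
    a reference; returns infinity when the strategy is dominated (costs
    more while yielding no more QALYs)."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_qaly == 0:
        return float("-inf") if d_cost < 0 else float("inf")
    if d_cost >= 0 and d_qaly < 0:
        return float("inf")  # dominated
    return d_cost / d_qaly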
NASA Astrophysics Data System (ADS)
Zhao, Liang; Huang, Shoudong; Dissanayake, Gamini
2018-07-01
This paper presents a novel hierarchical approach to solving structure-from-motion (SFM) problems. The algorithm begins with small local reconstructions based on nonlinear bundle adjustment (BA). These are then joined in a hierarchical manner using a strategy that requires solving a linear least squares optimization problem followed by a nonlinear transform. The algorithm can handle ordered monocular and stereo image sequences. Two stereo images or three monocular images are adequate for building each initial reconstruction. The bulk of the computation involves solving a linear least squares problem and, therefore, the proposed algorithm avoids three major issues associated with most of the nonlinear optimization algorithms currently used for SFM: the need for a reasonably accurate initial estimate, the need for iterations, and the possibility of being trapped in a local minimum. Also, by summarizing all the original observations into the small local reconstructions with associated information matrices, the proposed Linear SFM manages to preserve all the information contained in the observations. The paper also demonstrates that the proposed problem formulation results in a sparse structure that leads to an efficient numerical implementation. The experimental results using publicly available datasets show that the proposed algorithm yields solutions that are very close to those obtained using a global BA starting with an accurate initial estimate. The C/C++ source code of the proposed algorithm is publicly available at https://github.com/LiangZhaoPKUImperial/LinearSFM.
Cost-effectiveness of a ROPS social marketing campaign.
Sorensen, J A; Jenkins, P; Bayes, B; Clark, S; May, J J
2010-01-01
Tractor rollovers are the most frequent cause of death in the farm community. Rollover protection structures (ROPS) can prevent the injuries and fatalities associated with these events; however, almost half of U.S. farms lack these essential devices. One promising strategy for increasing ROPS use is social marketing. The purpose of this study was to assess the costs associated with the New York ROPS Social Marketing Campaign in relation to the cost of fatalities and injuries averted as a result of the campaign, to determine whether cost savings could be demonstrated in the initial years of program implementation. A total of 524 farmers who had retrofitted a tractor through the program were mailed a survey to assess the number of rollovers or close calls that occurred since ROPS installation. Responses were obtained from 382 farmers, two of whom indicated that they had a potential fatality/injury scenario since retrofitting their tractor through the program. The cost savings associated with the intervention were estimated using a decision-tree analysis adapted from Myers and Pana-Cryan, with appropriate consumer price index adjustments. The data were compared to the cost of the New York ROPS Social Marketing Campaign to arrive at an associated cost-savings estimate relative to the intervention. This study indicates that a net savings will likely be demonstrated within the third year of the New York ROPS Social Marketing initiative. These data may provide evidence for researchers hoping to generate support from state and private agencies for similar initiatives.
Magnitude and sources of bias in the detection of mixed strain M. tuberculosis infection.
Plazzotta, Giacomo; Cohen, Ted; Colijn, Caroline
2015-03-07
High resolution tests for genetic variation reveal that individuals may simultaneously host more than one distinct strain of Mycobacterium tuberculosis. Previous studies find that this phenomenon, which we will refer to as "mixed infection", may affect the outcomes of treatment for infected individuals and may influence the impact of population-level interventions against tuberculosis. In areas where the incidence of TB is high, mixed infections have been found in nearly 20% of patients; these studies may underestimate the actual prevalence of mixed infection given that tests may not be sufficiently sensitive for detecting minority strains. Specific reasons for failing to detect mixed infections include low initial numbers of minority strain cells in sputum, stochastic growth in culture and the physical division of initial samples into parts (typically only one of which is genotyped). In this paper, we develop a mathematical framework that models the study designs aimed at detecting mixed infections. Using both a deterministic and a stochastic approach, we obtain posterior estimates of the prevalence of mixed infection. We find that the posterior estimate of the prevalence of mixed infection may be substantially higher than the fraction of cases in which it is detected. We characterize this bias in terms of the sensitivity of the genotyping method and the relative growth rates and initial population sizes of the different strains collected in sputum. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
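A stripped-down version of this inference treats each truly mixed case as detected with some sensitivity, so that detections follow a binomial distribution with success probability p times sensitivity; a flat-prior grid posterior then shows how much the observed fraction understates p. This is a deliberate simplification of the paper's framework, for illustration only.

import numpy as np
from scipy.stats import binom

def mixed_infection_posterior(k_detected, n_cases, sensitivity, grid=None):
    """Grid posterior over the true mixed-infection prevalence p, assuming
    detections ~ Binomial(n_cases, p * sensitivity) and a flat prior."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 1001)
    like = binom.pmf(k_detected, n_cases, grid * sensitivity)
    post = like / np.trapz(like, grid)
    return grid, post

# e.g. 20 of 100 cases detected with 60% sensitivity puts most posterior
# mass near p = 0.33 rather than the observed fraction of 0.20.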
Xu, Yi-Hua; Pitot, Henry C
2003-09-01
Single enzyme-altered hepatocytes, altered hepatic foci (AHF), and nodular lesions have been implicated, respectively, in the processes of initiation, promotion, and progression in rodent hepatocarcinogenesis. Qualitative and quantitative analyses of such lesions have been utilized both to identify and to determine the potency of initiating, promoting, and progressor agents in rodent liver. Of the parameters determined in the study of such lesions, estimation of the number of foci or nodules in the liver is very important. The method of Saltykov has been used for estimating the number of AHF in rat liver. In practice, however, the Saltykov calculation has at least two weak points: (a) the size class range is limited to 12, which in many instances is too narrow to cover the range of AHF data obtained; and (b) under some conditions, the Saltykov equation generates negative values in several size classes, an obvious impossibility in the real world. In order to overcome these limitations, a study of the particle size distribution in a wide-range, polydispersed sphere system was performed. A stereologic method, termed the 25F Association method, was developed from this study. This method offers 25 association factors derived from the frequency of different-sized transections obtained by transecting a spherical particle, thus expanding the size class range to be analyzed up to 25, which is sufficiently wide to encompass all rat AHF found in most cases. The method exhibits greater flexibility, allowing adjustments to be made within the calculation process when NA(k,k), the net number of transections from same-size spheres, is found to be negative, which is not possible in real situations. The reliability of the 25F Association method was tested thoroughly by computer simulation in both monodispersed and polydispersed sphere systems, and the test results were compared with the original Saltykov method. We found that the 25F Association method yielded a better estimate of the total number of spheres in the three-dimensional tissue sample as well as more detailed size distribution information. Although the 25F Association method was derived from the study of a polydispersed sphere system, it can be used for continuous size distribution sphere systems. Application of this method to the estimation of parameters of preneoplastic foci in rodent liver is presented as an example of its utility. An application software program, 3D_estimation.exe, which uses the 25F Association method to estimate the number of AHF in rodent liver, has been developed and is available at the website of this laboratory.
Perales, J; Muñoz, R; Moussatché, H
1986-01-01
Two separate methods were used to purify a fraction of opossum (Didelphis marsupialis) serum able to protect mice against Bothrops jararaca venom. The first included an initial batch DEAE-cellulose ion-exchange step, followed by ion-exchange chromatography on a Carboxymethyl Sepharose column. The second method was column ion-exchange chromatography on DEAE-Sephacel. These techniques yielded a protein fraction that was homogeneous in cellulose acetate and conventional polyacrylamide gel electrophoresis. The protein fraction proved to be a glycoprotein, as shown by positive periodic acid-Schiff staining. Sodium dodecyl sulfate polyacrylamide gel electrophoresis of the β-mercaptoethanol-reduced fraction showed heterogeneity and allowed estimation of molecular weights in the range of 42,000 to 58,000 daltons. The serum fraction obtained could effectively block the lethal effect of B. jararaca venom when injected together with the venom into laboratory mice by the intraperitoneal route.
Study of activation data of metal samples from LDEF-1 and Spacelab-2
NASA Technical Reports Server (NTRS)
Laird, C. E.
1994-01-01
Gamma-ray spectra obtained from samples flown aboard the Long Duration Exposure Facility have been analyzed to identify the nuclear species produced by the interaction of this material with protons and neutrons encountered during its 69-month orbital flight, as well as to quantify the specific activity (pCi/kg) of these nuclear species. This quantification requires accurate corrections for efficiency, self-attenuation, and background. Plans have been developed for archiving the spectra in a form readily accessible to the scientific, engineering and technical community engaged in space research and application. Work has also begun on estimating the flux of activating particles encountered by material at various locations on the spacecraft.
NASA Astrophysics Data System (ADS)
Abd-Elmotaal, Hussein; Kühtreiber, Norbert
2016-04-01
In the framework of the IAG African Geoid Project, the gravity database contains many large data gaps. These gaps are filled initially using an unequal-weight least-squares prediction technique. This technique uses a generalized Hirvonen covariance function model in place of the empirically determined covariance function. The generalized Hirvonen model has a sensitive parameter related to the curvature of the covariance function at the origin. This paper studies the effect of this curvature parameter on the least-squares prediction results, especially in the large data gaps appearing in the African gravity database. An optimum estimation of the curvature parameter has also been carried out. A wide comparison among the results obtained in this research, along with their accuracy, is given and thoroughly discussed.
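A minimal sketch of the prediction step, using the generalized Hirvonen model in a simplified 1D geometry (the geoid computation itself is 2D; c0, d and the curvature-related exponent p are left as inputs):

import numpy as np

def hirvonen_cov(s, c0, d, p):
    """Generalized Hirvonen covariance C(s) = c0 / (1 + (s/d)**2)**p;
    p is the parameter tied to the curvature at the origin."""
    return c0 / (1.0 + (s / d) ** 2) ** p

def ls_prediction(x_obs, g_obs, sigma_obs, x_new, c0, d, p):
    """Unequal-weight least-squares prediction of anomalies at x_new from
    noisy observations g_obs at x_obs (noise std sigma_obs per point)."""
    dist = np.abs(x_obs[:, None] - x_obs[None, :])
    C = hirvonen_cov(dist, c0, d, p) + np.diag(sigma_obs ** 2)
    c_new = hirvonen_cov(np.abs(x_new[:, None] - x_obs[None, :]), c0, d, p)
    return c_new @ np.linalg.solve(C, g_obs)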
Two-Stage Parameter Estimation in Confined Coastal Aquifers
NASA Astrophysics Data System (ADS)
Hsu, N.
2003-12-01
Using field observations of tidal level and piezometric head at an observation well, this research develops a two-stage parameter estimation approach for estimating the transmissivity (T) and storage coefficient (S) of a confined aquifer in a coastal area. With the y-axis coinciding with the coastline, the x-axis extends from zero to infinity and, therefore, the domain of the aquifer is assumed to be a half plane. Other assumptions include homogeneity, isotropy and constant thickness of the aquifer, and zero initial head distribution. In the first stage, fluctuations of the tidal level and piezometric head at the observation well are recorded simultaneously without the influence of pumping. Fourier spectral analysis is used to find the autocorrelation and cross-correlation of the two sets of observations as well as the phase vs. frequency function. The tidal efficiency and time delay can then be computed, and the analytical solution of Ferris (1951) is used to compute the ratio T/S. In the second stage, the system is stressed with pumping while observations of the tidal level and piezometric head at the observation well are again collected simultaneously. The effect of the tide on the observation well without pumping can be computed from the Ferris (1951) solution based upon the identified ratio T/S and is deducted from the piezometric head observations to obtain the updated piezometric head. The Theis equation coupled with the method of images is then applied to the updated piezometric head to obtain the T and S values. The developed approach is applied to a hypothetical aquifer, and the results show convergence of the approach. The robustness of the developed approach is also demonstrated using noise-corrupted observations.
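Under the Ferris (1951) half-plane solution, either the tidal efficiency E or the time lag at a well a distance x from the coastline gives T/S in closed form for a tide of period t0. A small sketch, assuming those standard expressions:

import numpy as np

def ts_ratio_from_efficiency(x, t0, efficiency):
    """T/S from tidal efficiency E = exp(-x*sqrt(pi*S/(t0*T))), i.e.
    T/S = pi * x**2 / (t0 * ln(E)**2)."""
    return np.pi * x ** 2 / (t0 * np.log(efficiency) ** 2)

def ts_ratio_from_lag(x, t0, lag):
    """T/S from the time lag t_L = x*sqrt(t0*S/(4*pi*T)), i.e.
    T/S = x**2 * t0 / (4*pi*t_L**2)."""
    return x ** 2 * t0 / (4.0 * np.pi * lag ** 2)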
Nagaraja, Tavarekere N.; Karki, Kishor; Ewing, James R.; Divine, George W.; Fenstermacher, Joseph D.; Patlak, Clifford S.; Knight, Robert A.
2009-01-01
The hypothesis that the arterial input function (AIF) of gadolinium-diethylenetriaminepentaacetic acid (Gd-DTPA) injected by intravenous (iv) bolus and measured by the change in the T1-relaxation rate (ΔR1; R1=1/T1) of superior sagittal sinus blood (AIF-I) approximates the AIF of 14C-labeled Gd-DTPA measured in arterial blood (AIF*) was tested in a rat stroke model (n=13). Contrary to the hypothesis, the initial part of the ΔR1-time curve was underestimated, and the area under the normalized curve for AIF-I was about 15% lower than that for AIF*, the reference AIF. Hypothetical AIFs for Gd-DTPA (AIF-II) were derived from the AIF* values and averaged to obtain AIF-III. Influx rate constants (Ki) and proton distribution volumes at zero time (Vp+Vo) were estimated with Patlak plots of AIF-I, -II and -III and tissue ΔR1 data. For the regions of interest, the Ki values estimated with AIF-I were slightly but not significantly higher than those obtained with AIF-II and AIF-III. In contrast, Vp+Vo was significantly higher when calculated with AIF-I. Similar estimates of Ki and Vp+Vo were obtained with AIF-II and AIF-III. In summary, AIF-I underestimated the reference AIF (AIF*); this shortcoming had little effect on the Ki calculated by Patlak plot but produced a significant overestimation of Vp+Vo. PMID:20512853
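The Patlak estimates of Ki and Vp+Vo reduce to a linear regression in transformed coordinates; a generic sketch (assuming uptake is unidirectional over the fitted window):

import numpy as np

def patlak_fit(t, c_tissue, c_plasma):
    """Patlak plot: C_tis(t)/C_p(t) = Ki * integral(C_p dt)/C_p(t) + V0.
    Returns (Ki, V0), with V0 playing the role of Vp+Vo."""
    integral = np.array([np.trapz(c_plasma[:i + 1], t[:i + 1])
                         for i in range(len(t))])
    x = integral / c_plasma
    y = c_tissue / c_plasma
    ki, v0 = np.polyfit(x, y, 1)
    return ki, v0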
Updating histological data on crown initiation and crown completion ages in southern Africans.
Reid, Donald J; Guatelli-Steinberg, Debbie
2017-04-01
To update histological data on crown initiation and completion ages in southern Africans. To evaluate implications of these data for studies that: (a) rely on these data to time linear enamel hypoplasias (LEHs), or, (b) use these data for comparison to fossil hominins. Initiation ages were calculated on 67 histological sections from southern Africans, with sample sizes ranging from one to 11 per tooth type. Crown completion ages for southern Africans were calculated in two ways. First, actual derived initiation ages were added to crown formation times for each histological section to obtain direct information on the crown completion ages of individuals. Second, average initiation ages from this study were added to average crown formation times of southern Africans from the Reid and coworkers previous studies that were based on larger samples. For earlier-initiating tooth types (all anterior teeth and first molars), there is little difference in ages of initiation and crown completion between this and previous studies. Differences increase as a function of initiation age, such that the greatest differences between this and previous studies for both initiation and crown completion ages are for the second and third molars. This study documents variation in initiation ages, particularly for later-initiating tooth types. It upholds the use of previously published histological aging charts for LEHs on anterior teeth. However, this study finds that ages of crown initiation and completion in second and third molars for this southern African sample are earlier than previously estimated. These earlier ages reduce differences between modern humans and fossil hominins for these developmental events in second and third molars. © 2017 Wiley Periodicals, Inc.
M-estimator for the 3D symmetric Helmert coordinate transformation
NASA Astrophysics Data System (ADS)
Chang, Guobin; Xu, Tianhe; Wang, Qianxin
2018-01-01
The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned; the direction cosine matrix or the quaternion is used to represent the rotation, and a 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme: in each iteration after the first, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and the parameter estimates with their variance-covariance matrix are then calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to show the robustness theoretically. In the solution process, the parameter is rescaled in order to improve numerical stability. Monte Carlo experiments are conducted to check the developed method, considering different cases to investigate whether the assumed stochastic model is correct. Results with simulated data slightly deviating from the true model are used to show the developed method's statistical efficiency at the assumed stochastic model, its robustness against deviations from the assumed stochastic model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
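The reweighting scheme can be illustrated with a generic IRLS skeleton using Huber weights on a linearized model y ≈ A x; the paper's quaternion parameterization, multiplicative error vector and parameter rescaling are omitted for brevity.

import numpy as np

def irls_huber(A, y, c=1.345, n_iter=20):
    """Iteratively reweighted least squares with Huber weights. Returns
    the robust estimate and the final weights (small weights flag
    likely outliers)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]  # analytical initializer
    w = np.ones(len(y))
    for _ in range(n_iter):
        r = y - A @ x
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust sigma
        u = np.abs(r) / max(scale, 1e-12)
        w = np.where(u <= c, 1.0, c / u)  # Huber weight function
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return x, w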
A hydroclimatological approach to predicting regional landslide probability using Landlab
NASA Astrophysics Data System (ADS)
Strauch, Ronda; Istanbulluoglu, Erkan; Nudurupati, Sai Siddhartha; Bandaragoda, Christina; Gasparini, Nicole M.; Tucker, Gregory E.
2018-02-01
We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates in a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used for parameter estimation of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km2. The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.
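The core of such a simulation is a Monte Carlo sweep of the infinite-slope factor of safety; the distributions below are illustrative stand-ins for the paper's gridded soil, vegetation and recharge inputs, not its calibrated values.

import numpy as np

def p_landslide(n=10000, seed=0):
    """Monte Carlo probability that the infinite-slope factor of safety
    FS = [c + (rho_s - m*rho_w)*g*d*cos(t)**2*tan(phi)] /
         [rho_s*g*d*sin(t)*cos(t)]
    falls below 1 under (illustrative) parameter uncertainty."""
    rng = np.random.default_rng(seed)
    g, rho_w = 9.81, 1000.0
    theta = np.radians(rng.normal(35.0, 2.0, n))    # slope angle
    phi = np.radians(rng.normal(33.0, 3.0, n))      # friction angle
    c = rng.uniform(2e3, 8e3, n)                    # cohesion, Pa
    rho_s = rng.normal(1600.0, 100.0, n)            # wet soil density
    d = rng.uniform(0.5, 2.0, n)                    # soil depth, m
    m = np.clip(rng.normal(0.6, 0.2, n), 0.0, 1.0)  # relative wetness
    num = c + (rho_s - m * rho_w) * g * d * np.cos(theta) ** 2 * np.tan(phi)
    den = rho_s * g * d * np.sin(theta) * np.cos(theta)
    return float(np.mean(num / den < 1.0))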
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favalli, Andrea; Vo, D.; Grogan, Brandon R.
The purpose of the Next Generation Safeguards Initiative (NGSI)–Spent Fuel (SF) project is to strengthen the technical toolkit of safeguards inspectors and/or other interested parties. The NGSI–SF team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: (1) verify the initial enrichment, burnup, and cooling time of facility declaration; (2) detect the diversion or replacement of pins; (3) estimate the plutonium mass [which is also a function of the variables in (1)]; (4) estimate the decay heat; and (5) determine the reactivity of spent fuel assemblies. Since August 2013, a set of measurement campaigns has been conducted at the Central Interim Storage Facility for Spent Nuclear Fuel (Clab), in collaboration with the Swedish Nuclear Fuel and Waste Management Company (SKB). One purpose of the measurement campaigns was to acquire passive gamma spectra with high-purity germanium and lanthanum bromide scintillation detectors from Pressurized Water Reactor and Boiling Water Reactor spent fuel assemblies. The absolute 137Cs count rate and the 154Eu/137Cs, 134Cs/137Cs, 106Ru/137Cs, and 144Ce/137Cs isotopic ratios were extracted; these values were used to construct corresponding model functions (which describe each measured quantity's behavior over various combinations of burnup, cooling time, and initial enrichment) and then were used to determine those same quantities in each measured spent fuel assembly. Furthermore, the results obtained in comparison with the operator-declared values, as well as the methodology developed, are discussed in detail in the paper.
Liu, Frank Xiaoqing; Ghaffari, Arshia; Dhatt, Harman; Kumar, Vijay; Balsera, Cristina; Wallace, Eric; Khairullah, Quresh; Lesher, Beth; Gao, Xin; Henderson, Heather; LaFleur, Paula; Delgado, Edna M.; Alvarez, Melissa M.; Hartley, Janett; McClernon, Marilyn; Walton, Surrey; Guest, Steven
2014-01-01
Patients presenting late in the course of kidney disease who require urgent initiation of dialysis have traditionally received temporary vascular catheters followed by hemodialysis. Recent changes in Medicare payment policy for dialysis in the USA have incentivized the use of peritoneal dialysis (PD). Consequently, the use of more expeditious PD for late-presenting patients (urgent-start PD) has received new attention. Urgent-start PD has been shown to be safe and effective, and offers a mechanism for increasing PD utilization; however, there has been no assessment of the dialysis-related costs over the first 90 days of care. The objective of this study was to characterize the costs associated with urgent-start PD, urgent-start hemodialysis (HD), or a dual approach (urgent-start HD followed by urgent-start PD) over the first 90 days of treatment from a provider perspective. A survey of practitioners from 5 clinics known to use urgent-start PD was conducted to provide inputs for a cost model representing typical patients. Model inputs were obtained from the survey, literature review, and available cost data. Sensitivity analyses were also conducted. The estimated per-patient cost over the first 90 days for urgent-start PD was $16,398; dialysis access represented 15% of total costs, dialysis services 48%, and initial hospitalization 37%. For urgent-start HD, total per-patient costs were $19,352, with dialysis access accounting for 27%, dialysis services 42%, and initial hospitalization 31%. The estimated cost for dual patients was $19,400. Urgent-start PD may offer a cost-saving approach for the initiation of dialysis in eligible patients requiring an urgent start to dialysis. PMID:25526471
NASA Technical Reports Server (NTRS)
Beaver, J.; Turk, J.; Bringi, V. N.
1995-01-01
An increase in the demand for satellite communications has led to an overcrowding of the current spectrums being used - mainly at C and Ku bands. To alleviate this overcrowding, new technology is being developed to open up the Ka-band for communications use. One of the first experimental communications satellites using this technology is NASA's Advanced Communications Technology Satellite (ACTS). In September 1993, ACTS was deployed into a geostationary orbit near 100 deg W longitude. The ACTS system employs two Ka-band beacons for propagation experiments, one at 20.185 GHz and another at 27.505 GHz. Attenuation due to rain and tropospheric scintillations will adversely affect new technologies proposed for this spectrum; therefore, before being used commercially, propagation effects at Ka-band must be studied. Colorado State University is one of eight sites across the United States and Canada conducting propagation studies; each site is equipped with the ACTS propagation terminal (APT). With each site located in a different climatic zone, the main objective of the propagation experiment is to obtain monthly and yearly attenuation statistics. Each site also has secondary objectives that are site dependent. At CSU, the CSU-CHILL radar facility is being used to obtain polarimetric radar data along the ACTS propagation path. During the expected two- to four-year period of the project, it is hoped that several significant weather events will be studied. The S-band radar will be used to obtain Ka-band attenuation estimates and to initialize propagation models that have been developed, to help classify propagation events measured by the APT. Preliminary attenuation estimates for two attenuation events are shown here - a bright-band case that occurred on 13 May 1994 and a convective case that occurred on 20 June 1994. The computations used to obtain Ka-band attenuation estimates from S-band radar data are detailed, and results from the two events are shown.
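The S-band-to-Ka-band step rests on a power-law relation between specific attenuation and linear radar reflectivity, integrated along the propagation path. The coefficients below are placeholders, not those used in the experiment.

import numpy as np

def ka_path_attenuation_db(z_dbz, gate_km, a=1e-4, b=0.8):
    """One-way Ka-band path attenuation (dB) from an S-band reflectivity
    profile z_dbz (dBZ) over range gates of length gate_km, using an
    assumed power law k = a * Zlin**b (dB/km) for specific attenuation."""
    z_lin = 10.0 ** (np.asarray(z_dbz) / 10.0)  # mm^6 m^-3
    k = a * z_lin ** b                          # dB/km in each gate
    return float(np.sum(k * gate_km))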
Estimating rice yield from MODIS-Landsat fusion data in Taiwan
NASA Astrophysics Data System (ADS)
Chen, C. R.; Chen, C. F.; Nguyen, S. T.
2017-12-01
Rice production monitoring with remote sensing is an important activity in Taiwan due to official initiatives. Yield estimation is challenging in Taiwan because rice fields are small and fragmented, so high-spatiotemporal-resolution satellite data providing phenological information on rice crops are required for this monitoring purpose. This research develops data fusion approaches that integrate daily Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat data for rice yield estimation in Taiwan. In this study, the low-resolution MODIS LST and emissivity data are used as reference data sources to obtain high-resolution LST from Landsat data using a mixed-pixel analysis technique, and the time-series EVI data were derived from the fusion of MODIS and Landsat spectral band data using the STARFM method. The simulated LST and EVI results showed close agreement between the values obtained by the proposed methods and the reference data. The rice-yield model was established using EVI and LST data based on rice crop phenology information collected from 371 ground survey sites across the country in 2014. The results achieved from the fusion datasets, compared with the reference data, indicated a close relationship between the two datasets, with a correlation coefficient (R2) of 0.75 and a root mean square error (RMSE) of 338.7 kg, more accurate than those using the coarse-resolution MODIS LST data (R2 = 0.71 and RMSE = 623.82 kg). For the comparison of total production, 64 towns located in the western part of Taiwan were used; the results confirmed that the model using fusion datasets produced more accurate results (R2 = 0.95 and RMSE = 1,243 tons) than that using the coarse-resolution MODIS data (R2 = 0.91 and RMSE = 1,749 tons). This study demonstrates the application of MODIS-Landsat fusion data for rice yield estimation at the township level in Taiwan. The results could be useful to policymakers, and the methods are transferable to other regions of the world for rice yield estimation.
Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.
2001-03-08
The enantiomeric ratio (E) of an enzyme acting as a specific catalyst in the resolution of enantiomers is an important parameter in the quantitative description of chiral resolution processes. In the present work, two novel methods, here called Method I and Method II, for estimating E and the kinetic parameters Km and Vm of the enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental data" and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers be available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of the estimates. The theoretical predictions were experimentally confirmed using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with precision and were not significantly different from those obtained by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.
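The underlying kinetic model is the classical rate law for two substrates competing for one enzyme; fitting it directly to rates measured at several (C, x) pairs recovers the four parameters and hence E. A sketch along the lines of Method II (all names are illustrative):

import numpy as np
from scipy.optimize import curve_fit

def mixture_rate(X, vm_r, km_r, vm_s, km_s):
    """Initial rate for competing enantiomers R and S at total
    concentration C and R molar fraction x (competitive two-substrate
    Michaelis-Menten form)."""
    c, x = X
    cr, cs = x * c, (1.0 - x) * c
    return (vm_r * cr / km_r + vm_s * cs / km_s) / (
        1.0 + cr / km_r + cs / km_s)

# given arrays C, x and measured rates v:
# popt, _ = curve_fit(mixture_rate, (C, x), v, p0=[1.0, 1.0, 1.0, 1.0])
# vm_r, km_r, vm_s, km_s = popt
# E = (vm_r / km_r) / (vm_s / km_s)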
Estimating household and community transmission of ocular Chlamydia trachomatis.
Blake, Isobel M; Burton, Matthew J; Bailey, Robin L; Solomon, Anthony W; West, Sheila; Muñoz, Beatriz; Holland, Martin J; Mabey, David C W; Gambhir, Manoj; Basáñez, María-Gloria; Grassly, Nicholas C
2009-01-01
Community-wide administration of antibiotics is one arm of a four-pronged strategy in the global initiative to eliminate blindness due to trachoma. The potential impact of more efficient, targeted treatment of infected households depends on the relative contribution of community and household transmission of infection, which have not previously been estimated. A mathematical model of the household transmission of ocular Chlamydia trachomatis was fit to detailed demographic and prevalence data from four endemic populations in The Gambia and Tanzania. Maximum likelihood estimates of the household and community transmission coefficients were obtained. The estimated household transmission coefficient exceeded both the community transmission coefficient and the rate of clearance of infection by individuals in three of the four populations, allowing persistent transmission of infection within households. In all populations, individuals in larger households contributed more to the incidence of infection than those in smaller households. Transmission of ocular C. trachomatis infection within households is typically very efficient. Failure to treat all infected members of a household during mass administration of antibiotics is likely to result in rapid re-infection of that household, followed by more gradual spread across the community. The feasibility and effectiveness of household targeted strategies should be explored.
Congdon, Peter; Lloyd, Patsy
2011-02-01
To estimate Toxocara infection rates by age, gender and ethnicity for US counties using data from the National Health and Nutrition Examination Survey (NHANES). After initial analysis to account for missing data, a binary regression model is applied to obtain relative risks of Toxocara infection for 20,396 survey subjects. The regression incorporates interplay between demographic attributes (age, ethnicity and gender), family poverty and geographic context (region, metropolitan status). Prevalence estimates for counties are then made, distinguishing between subpopulations in poverty and not in poverty. Even after allowing for elevated infection risk associated with poverty, seropositivity is elevated among Black non-Hispanics and other ethnic groups. There are also distinct effects of region. When regression results are translated into county prevalence estimates, the main influences on variation in county rates are percentages of non-Hispanic Blacks and county poverty. For targeting prevention it is important to assess implications of national survey data for small area prevalence. Using data from NHANES, the study confirms that both individual level risk factors and geographic contextual factors affect chances of Toxocara infection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karthikeyan, R.; Tellier, R. L.; Hebert, A.
2006-07-01
The Coolant Void Reactivity (CVR) is an important safety parameter that needs to be estimated at the design stage of a nuclear reactor, as it provides a priori knowledge of the behavior of the system during a transient initiated by loss of coolant. In the present paper, we estimate the CVR for a CANDU New Generation (CANDU-NG) lattice, as proposed at an early stage of the Advanced CANDU Reactor (ACR) development. The CVR is estimated with a development version of the code DRAGON, using the method of characteristics. DRAGON incorporates several advanced self-shielding models, each of them compatible with the method of characteristics. This study brings into focus the performance of these self-shielding models, especially under voiding of such a tight lattice. We have also performed assembly calculations in a 2 x 2 pattern for the CANDU-NG fuel, with special emphasis on checkerboard voiding. The results obtained have been validated against the Monte Carlo codes MCNP5 and TRIPOLI-4.3. (authors)
Spectral Unmixing Analysis of Time Series Landsat 8 Images
NASA Astrophysics Data System (ADS)
Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.
2018-05-01
Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has its own advantages, it has rarely been studied. Using the temporal information can provide improved unmixing performance compared to independent image analyses; moreover, different land cover types may exhibit different temporal patterns, which can aid their discrimination. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach obtains "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used to extract endmembers for endmember initialization. First, nonnegative least squares (NNLS) is used to estimate abundance maps from the endmembers. Each endmember is then re-estimated as the mean of its "purified" pixels, i.e., the residuals of the mixed pixels after excluding the contributions of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework yields the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimates than a "separate unmixing" approach.
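The two alternating steps can be sketched compactly: NNLS for the abundances given endmembers, and a "purified-pixel" mean for each endmember given abundances. A simplified single-image version (Y is pixels x bands, E is bands x endmembers; the time-series coupling of the paper is omitted):

import numpy as np
from scipy.optimize import nnls

def estimate_abundances(Y, E):
    """Per-pixel nonnegative least squares: min ||E a - y|| s.t. a >= 0."""
    return np.array([nnls(E, y)[0] for y in Y])

def update_endmember(Y, A, E, k):
    """Mean of the 'purified' pixels for endmember k: each pixel's
    residual after removing the contributions of all other endmembers,
    averaged over the pixels in which k dominates."""
    purified = Y - A @ E.T + np.outer(A[:, k], E[:, k])
    dominant = A.argmax(axis=1) == k
    return purified[dominant].mean(axis=0)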
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
Cortical thickness measurement from magnetic resonance images using partial volume estimation
NASA Astrophysics Data System (ADS)
Zuluaga, Maria A.; Acosta, Oscar; Bourgeat, Pierrick; Hernández Hoyos, Marcela; Salvado, Olivier; Ourselin, Sébastien
2008-03-01
Measurement of the cortical thickness from 3D Magnetic Resonance Imaging (MRI) can aid diagnosis and longitudinal studies of a wide range of neurodegenerative diseases. We estimate the cortical thickness using a Laplacian approach whereby equipotentials analogous to layers of tissue are computed. The thickness is then obtained using an Eulerian approach in which partial differential equations (PDEs) are solved, avoiding the explicit tracing of trajectories along the streamline gradient. This method has the advantages of being relatively fast and of ensuring unique correspondences between points on the inner and outer boundaries of the cortex. The original method is challenged when the thickness of the cortex is of the same order of magnitude as the image resolution, since the partial volume (PV) effect is not taken into account at the gray matter (GM) boundaries. We propose a novel way to take PV into account which substantially improves accuracy and robustness. We model PV by computing a mixture of pure Gaussian probability distributions and use this estimate to initialize the cortical thickness estimation. In experiments on synthetic phantoms, the errors were divided by three, while reproducibility was improved when the same patient was scanned three consecutive times.
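The Laplacian step can be illustrated with a 2D Jacobi relaxation between the two cortical boundaries; equipotential surfaces of psi then play the role of the tissue "layers". The PV-weighted initialization proposed in the paper is not reproduced in this sketch.

import numpy as np

def laplace_potential(gm_mask, inner, outer, n_iter=2000):
    """Jacobi relaxation of Laplace's equation over the gray-matter mask,
    with Dirichlet boundaries psi = 0 on the inner (WM) surface and
    psi = 1 on the outer (CSF) surface; all arrays are 2D booleans of
    identical shape."""
    psi = np.where(outer, 1.0, 0.0)
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(psi, 1, 0) + np.roll(psi, -1, 0) +
                      np.roll(psi, 1, 1) + np.roll(psi, -1, 1))
        psi = np.where(gm_mask, avg, psi)  # update only GM voxels
        psi[inner], psi[outer] = 0.0, 1.0  # re-impose the boundaries
    return psi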
Effective force control by muscle synergies.
Berger, Denise J; d'Avella, Andrea
2014-01-01
Muscle synergies have been proposed as a way for the central nervous system (CNS) to simplify the generation of motor commands, and they have been shown to explain a large fraction of the variation in muscle patterns across a variety of conditions. However, whether human subjects are able to control forces and movements effectively with a small set of synergies has not been tested directly. Here we show that muscle synergies can be used to generate target forces in multiple directions with the same accuracy achieved using individual muscles. We recorded electromyographic (EMG) activity from 13 arm muscles and isometric hand forces during a force reaching task in a virtual environment. From these data we estimated the force associated with each muscle by linear regression, and we identified muscle synergies by non-negative matrix factorization. We compared trajectories of a virtual mass displaced by the force estimated using the entire set of recorded EMGs to trajectories obtained using 4-5 muscle synergies. While trajectories were similar, when feedback was provided according to the force estimated from recorded EMGs (EMG-control), trajectories generated with the synergies were on average less accurate. However, when feedback was provided according to recorded force (force-control), we did not find significant differences in initial angle error and endpoint error. We then tested whether synergies could be used as effectively as individual muscles to control cursor movement in the force reaching task by providing feedback according to the force estimated from the projection of the recorded EMGs into synergy space (synergy-control). Human subjects were able to perform the task immediately after switching from force-control to EMG-control and synergy-control, and we found no differences between initial movement direction errors and endpoint errors in all control modes. These results indicate that muscle synergies provide an effective strategy for motor coordination.
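Synergy extraction by non-negative matrix factorization is standard; a minimal sketch with scikit-learn follows (the 13-muscle and 5-synergy dimensions match the text; the data here are random placeholders for rectified, low-pass-filtered EMG envelopes).

import numpy as np
from sklearn.decomposition import NMF

emg = np.abs(np.random.randn(1000, 13))   # placeholder envelope matrix

model = NMF(n_components=5, init="nndsvd", max_iter=500)
C = model.fit_transform(emg)              # synergy activations over time
W = model.components_                     # 5 synergies x 13 muscle weights

# reconstruction quality, e.g. variance accounted for (VAF)
vaf = 1.0 - np.sum((emg - C @ W) ** 2) / np.sum(emg ** 2)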
NASA Astrophysics Data System (ADS)
Castelletti, Davide; Demir, Begüm; Bruzzone, Lorenzo
2014-10-01
This paper presents a novel semisupervised learning (SSL) technique defined in the context of ɛ-insensitive support vector regression (SVR) to estimate biophysical parameters from remotely sensed images. The proposed SSL method aims to mitigate the problems of small-sized biased training sets without collecting any additional samples with reference measures. This is achieved in two consecutive steps. The first step is devoted to injecting additional prior information into the learning phase of the SVR in order to adapt the importance of each training sample according to the distribution of the unlabeled samples. To this end, a weight is initially associated with each training sample based on a novel strategy that assigns higher weights to samples located in high-density regions of the feature space while giving reduced weights to those that fall into low-density regions. Then, in order to exploit different weights for the training samples in the learning phase of the SVR, we introduce a weighted SVR (WSVR) algorithm. The second step is devoted to jointly exploiting labeled and informative unlabeled samples to further improve the definition of the WSVR learning function. To this end, the most informative unlabeled samples that are expected to have accurate target values are initially selected according to a novel strategy that relies on the distribution of the unlabeled samples in the feature space and on the WSVR function estimated in the first step. Then, we introduce a restructured WSVR algorithm that jointly uses labeled and unlabeled samples in the learning phase and tunes their importance through different values of the regularization parameters. Experimental results obtained for the estimation of single-tree stem volume show the effectiveness of the proposed SSL method.
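The first step can be sketched as density-based sample weighting followed by a weighted SVR fit; scikit-learn's SVR accepts per-sample weights, which stands in here for the WSVR formulation (the paper's specific weighting strategy and the second, restructured step are not reproduced).

import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KernelDensity

def weighted_svr(X_train, y_train, X_unlabeled, bandwidth=1.0):
    """Weight each training sample by the density of unlabeled samples
    around it (higher in dense regions), then fit an eps-insensitive SVR
    with those weights. Hyperparameters are illustrative."""
    kde = KernelDensity(bandwidth=bandwidth).fit(X_unlabeled)
    w = np.exp(kde.score_samples(X_train))  # log-density -> density
    w = w / w.mean()
    svr = SVR(kernel="rbf", epsilon=0.1, C=10.0)
    svr.fit(X_train, y_train, sample_weight=w)
    return svr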
Nelson, Richard E; Stevens, Vanessa W; Khader, Karim; Jones, Makoto; Samore, Matthew H; Evans, Martin E; Douglas Scott, R; Slayton, Rachel B; Schweizer, Marin L; Perencevich, Eli L; Rubin, Michael A
2016-05-01
In an effort to reduce methicillin-resistant Staphylococcus aureus (MRSA) transmission through universal screening and isolation, the Department of Veterans Affairs (VA) launched the National MRSA Prevention Initiative in October 2007. The objective of this analysis was to quantify the budget impact and cost-effectiveness of this initiative. An economic model was developed using published data on MRSA hospital-acquired infection (HAI) rates in the VA from October 2007 to September 2010; estimates of the costs of MRSA HAIs in the VA; and estimates of the intervention costs, including salaries of staff members hired to support the initiative at each VA facility. To estimate the rate of MRSA HAIs that would have occurred if the initiative had not been implemented, two different assumptions were made: no change and a downward temporal trend. Effectiveness was measured in life-years gained. The initiative resulted in an estimated 1,466-2,176 fewer MRSA HAIs. The initiative itself was estimated to cost $207 million during this 3-year period, while the cost savings from prevented MRSA HAIs ranged from $27 million to $75 million. The incremental cost-effectiveness ratios ranged from $28,048 to $56,944 per life-year gained. The overall impact on the VA's budget was $131-$179 million. Wide-scale implementation of a national MRSA surveillance and prevention strategy in VA inpatient settings may have prevented a substantial number of MRSA HAIs. Although the savings associated with prevented infections helped offset some but not all of the cost of the initiative, this model indicated that the initiative would be considered cost-effective. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.
2017-04-01
Context. Recently published work has made high-precision fundamental parameters available for the binary system TZ Fornacis, making it an ideal target for the calibration of stellar models. Aims: Relying on these observations, we attempt to constrain the initial helium abundance, the age and the efficiency of convective core overshooting. Our main aim is to point out the biases in the results due to not accounting for some sources of uncertainty. Methods: We adopt the SCEPtER pipeline, a maximum likelihood technique based on fine grids of stellar models computed for various values of metallicity, initial helium abundance and overshooting efficiency by means of two independent stellar evolutionary codes, namely FRANEC and MESA. Results: Besides the degeneracy between the estimated age and overshooting efficiency, we found the existence of multiple independent groups of solutions. The best one suggests a system of age 1.10 ± 0.07 Gyr composed of a primary star in the central helium-burning stage and a secondary in the sub-giant branch (SGB). The resulting initial helium abundance is consistent with a helium-to-metal enrichment ratio of ΔY/ΔZ = 1; the core overshooting parameter is β = 0.15 ± 0.01 for FRANEC and fov = 0.013 ± 0.001 for MESA. The second class of solutions, characterised by a worse goodness-of-fit, still suggests a primary star in the central helium-burning stage but a secondary in the overall contraction phase, at the end of the main sequence (MS). In this case, the FRANEC grid provides an age of Gyr and a core overshooting parameter , while the MESA grid gives 1.23 ± 0.03 Gyr and fov = 0.025 ± 0.003. We analyse the impact on the results of a larger, but typical, mass uncertainty and of neglecting the uncertainty in the initial helium content of the system. We show that very precise mass determinations, with uncertainties of a few thousandths of a solar mass, are required to obtain reliable determinations of stellar parameters, as mass errors larger than approximately 1% lead to estimates that are not only less precise but also biased. Moreover, we show that a fit obtained with a grid of models computed at a fixed ΔY/ΔZ - thus neglecting the current uncertainty in the initial helium content of the system - can provide severely biased age and overshooting estimates. The possibility of independent overshooting efficiencies for the two stars of the system is also explored. Conclusions: The present analysis confirms that constraining the core overshooting parameter by means of binary systems is a very difficult task, requiring an observational precision still rarely achieved and a robust statistical treatment of the error sources.
A simple Lagrangian forecast system with aviation forecast potential
NASA Technical Reports Server (NTRS)
Petersen, R. A.; Homan, J. H.
1983-01-01
A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.
Bozkoyunlu, Gaye; Takaç, Serpil
2014-01-01
Olive mill wastewater (OMW) with a total phenol (TP) concentration range of 300-1200 mg/L was treated with alginate-immobilized Rhodotorula glutinis cells in a batch system. The effects of pellet properties (diameter, alginate concentration, and cell loading) and operational parameters (initial TP concentration, agitation rate, and reusability of pellets) on the dephenolization of OMW were studied. Up to 87% dephenolization was obtained after 120 h of biodegradation. The number of times the pellets could be reused increased with the addition of calcium ions to the biodegradation medium. The overall effectiveness factors calculated for different conditions showed that diffusional limitations arising from pellet size and pellet composition could be neglected. Mass transfer limitations appeared to be more pronounced at high substrate concentrations and low agitation rates. The parameters of the logistic model for the growth kinetics of R. glutinis in OMW were estimated at different initial phenol concentrations of OMW by curve-fitting of experimental data with the model.
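As an illustration of the final step above, a minimal curve-fitting sketch for the logistic growth model is given below; the sampling times, biomass values, and starting guesses are hypothetical, not the study's data.

```python
# A minimal sketch of fitting the logistic growth model to biomass-time data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, x0, xmax, mu):
    """Logistic growth: x(t) = xmax / (1 + (xmax/x0 - 1) * exp(-mu * t))."""
    return xmax / (1.0 + (xmax / x0 - 1.0) * np.exp(-mu * t))

t_h = np.array([0, 24, 48, 72, 96, 120])            # sampling times (h)
x_gl = np.array([0.5, 1.1, 2.3, 3.9, 4.8, 5.1])     # illustrative biomass (g/L)

popt, pcov = curve_fit(logistic, t_h, x_gl, p0=[0.5, 5.0, 0.05])
x0, xmax, mu = popt
print(f"x0={x0:.2f} g/L, xmax={xmax:.2f} g/L, mu={mu:.3f} 1/h")
```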
Shell Buckling Design Criteria Based on Manufacturing Imperfection Signatures
NASA Technical Reports Server (NTRS)
Hilburger, Mark W.; Nemeth, Michael P.; Starnes, James H., Jr.
2004-01-01
An analysis-based approach for developing shell-buckling design criteria for laminated-composite cylindrical shells that accurately accounts for the effects of initial geometric imperfections is presented. With this approach, measured initial geometric imperfection data from six graphite-epoxy shells are used to determine a manufacturing-process-specific imperfection signature for these shells. This imperfection signature is then used as input into nonlinear finite-element analyses. The imperfection signature represents a "first-approximation" mean imperfection shape that is suitable for developing preliminary-design data. Comparisons of test data and analytical results obtained by using several different imperfection shapes are presented for selected shells. Overall, the results indicate that the analysis-based approach presented for developing reliable preliminary-design criteria has the potential to provide improved, less conservative buckling-load estimates and to reduce the weight and cost of developing buckling-resistant shell structures.
Miss-distance indicator for tank main gun systems
NASA Astrophysics Data System (ADS)
Bornstein, Jonathan A.; Hillis, David B.
1994-07-01
The initial development of a passive, automated system to track bullet trajectories near a target to determine the "miss distance," and the corresponding correction necessary to bring the following round "on target," is discussed. The system consists of a visible-wavelength CCD sensor, long focal length optics, and a separate IR sensor to detect the muzzle flash of the firing event; this is coupled to a PC-based image processing and automatic tracking system designed to follow the projectile trajectory by intelligently comparing frame-to-frame variation of the projectile tracer image. An error analysis indicates that the device is particularly sensitive to variation in the projectile time of flight to the target, and requires development of algorithms to estimate this value from the 2D images employed by the sensor to monitor the projectile trajectory. Initial results obtained by using a brassboard prototype to track training ammunition are promising.
Robust point matching via vector field consensus.
Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu
2014-04-01
In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that, in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
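To make the EM alternation concrete, here is a heavily simplified sketch of the inlier/outlier estimation loop. The field update uses Gaussian-kernel ridge regression as a stand-in for the paper's Tikhonov-regularized RKHS interpolation, and all function names, kernel widths, and priors are illustrative assumptions, not the authors' released code.

```python
# Simplified EM in the spirit of vector field consensus: E-step computes
# posterior inlier probabilities under a Gaussian inlier model plus a uniform
# outlier model; M-step refits the field and the noise variance.
import numpy as np

def fit_field(X, Y, p, beta=0.1, lam=1e-3):
    """Weighted kernel ridge regression: smooth field f with f(X) ~ Y, weights p."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-beta * d2)
    W = np.diag(p)
    C = np.linalg.solve(W @ K + lam * np.eye(len(X)), W @ Y)
    return K @ C  # predicted displacements at X

def vfc_em(X, Y, n_iter=30, gamma=0.9, a=10.0):
    """X: source points (n,2); Y: putative displacements (n,2)."""
    n = len(X)
    p = np.full(n, gamma)
    sigma2 = ((Y - Y.mean(0)) ** 2).sum() / (2 * n)  # large initial variance
    for _ in range(n_iter):
        F = fit_field(X, Y, p)
        r2 = ((Y - F) ** 2).sum(1)
        # E-step: posterior probability that each match is an inlier
        pin = gamma * np.exp(-r2 / (2 * sigma2)) / (2 * np.pi * sigma2)
        p = pin / (pin + (1 - gamma) / a)
        # M-step: update noise variance and mixing weight
        sigma2 = (p * r2).sum() / (2 * p.sum() + 1e-12)
        gamma = np.clip(p.mean(), 0.05, 0.95)
    return p > 0.5  # estimated inlier mask

rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 2))
Y = 0.1 * np.sin(2 * np.pi * X)          # smooth ground-truth field (inliers)
Y[:30] = rng.uniform(-1, 1, (30, 2))     # corrupt 30% of matches with outliers
print(vfc_em(X, Y).sum(), "matches kept as inliers")
```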
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalantari, F; Wang, J; Li, T
2015-06-15
Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: 1) reconstruction algorithms do not make full use of all projection statistics; and 2) image registration between noisy images can result in poor alignment. In this study we investigated the use of the simultaneous motion estimation and image reconstruction (SMEIR) method, originally developed for cone beam CT, for motion estimation/correction in 4D-PET. Methods: A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as the initial estimate. A motion model update is performed to obtain an optimal set of DVFs between the pmc-PET and the other phases by matching the forward projection of the deformed pmc-PET to the measured projections of the other phases. Using the updated DVFs, OSEM-TV image reconstruction is repeated and new DVFs are estimated based on the updated images. A 4D XCAT phantom with a typical FDG biodistribution and a 10-mm-diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: The image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET, motion blurring artifacts are present, leading to a more-than-fivefold overestimation of the tumor size and a 54% underestimation of the tumor-to-lung contrast ratio. This error is reduced to 37% and 20% for post-reconstruction registration methods and SMEIR, respectively. Conclusion: The SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics are greatly improved since all projection data are combined to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to the smoothness control parameters in the DVF estimation step.
Lithospheric structure and deformation of the North American continent
NASA Astrophysics Data System (ADS)
Tesauro, Magdala; Kaban, Mikhail; Cloetingh, Sierd; Mooney, Walter
2013-04-01
We estimate the integrated strength and elastic thickness (Te) of the North American lithosphere based on thermal, density, and structural (seismic) models of the crust and upper mantle. The temperature distribution in the lithosphere is estimated considering, for the first time, the effect of composition as a result of an integrative approach based on a joint analysis of seismic and gravity data. We do this via an iterative adjustment of the model. The upper mantle temperatures are initially estimated from the NA07 tomography model of Bedle and Van der Lee (2009) using mineral physics equations. This thermal model, obtained for a uniform composition, is used to estimate the gravity effect and to remove it from the total mantle gravity anomalies, which are controlled by both temperature and compositional variations. We can therefore predict compositional variations from the residual gravity anomalies and use them to correct the initial thermal model. The corrected thermal model is employed again in the gravity calculations, and the loop is repeated until convergence is reached. The results demonstrate that the lithospheric mantle is characterized by strong compositional heterogeneity, which is consistent with xenolith data. Seismic data from the USGS database allow us to define the P-wave velocity and thickness of each crustal layer of the North American geological provinces. The use of these seismic data and of the new compositional and thermal models enables us to estimate the lateral variation in rheology of the main lithospheric layers and to evaluate coupling-decoupling conditions at the layers' boundaries. In the North American Cordillera the strength is mainly localized in the crust, which is decoupled from the mantle lithosphere. In the cratons the strength is chiefly controlled by the mantle lithosphere and all the layers are generally coupled. These results contribute to the long-standing debate on the applicability of the "crème brûlée" or "jelly-sandwich" models for the lithosphere structure. Intraplate earthquakes (USGS database) occur mainly in the weak regions, such as the Appalachians, and in the transition zones from low to high strength surrounding the craton. The obtained 3D strength model is used to compute Te of the North American lithosphere. This parameter is derived from the thermo-rheological model using new equations that consider variations of Young's modulus in the lithosphere. It shows large variability within the cratons, ranging from 70 km to >100 km, while it drops to <30 km in the young Phanerozoic regions. The new crustal model is also used to compute the lateral pressure gradient (LPG) that can initiate horizontal ductile flow in the crust. In general, the crustal flow is directed away from the orogens towards adjacent weaker areas. The results show that the effects of channel flow superimposed on the regional tectonic forces might result in additional significant horizontal and vertical movements associated with zones of compression or extension.
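The iterative temperature-composition adjustment described above amounts to a fixed-point loop. The sketch below is a toy, runnable caricature of that loop; the linear "gravity effect" and "composition inversion" relations are invented stand-ins for the study's mineral-physics and gravity modeling.

```python
# Toy fixed-point loop: temperatures imply a gravity effect; the residual
# between observed and predicted gravity is attributed to composition, which
# in turn corrects the thermal model; iterate until convergence.
import numpy as np

rng = np.random.default_rng(0)
g_obs = rng.normal(0.0, 20.0, 100)        # total mantle gravity anomaly (mGal)
T = np.full(100, 1300.0)                  # initial uniform-composition T (deg C)
comp = np.zeros(100)                      # compositional density term

for it in range(50):
    g_thermal = -0.5 * (T - 1300.0)       # toy thermal gravity effect
    residual = g_obs - g_thermal          # attributed to composition
    comp_new = residual / 10.0            # toy composition "inversion"
    T_new = 1300.0 - 2.0 * comp_new       # compositional correction to T
    if np.max(np.abs(T_new - T)) < 1e-6:  # loop repeats until convergence
        break
    T, comp = T_new, comp_new
print(f"converged after {it} iterations")
```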
NASA Astrophysics Data System (ADS)
Guo, Jinghua; Luo, Yugong; Li, Keqiang; Dai, Yifan
2018-05-01
This paper presents a novel coordinated path following system (PFS) and direct yaw-moment control (DYC) for autonomous electric vehicles via a hierarchical control technique. In the high-level control law design, a new fuzzy factor is introduced based on the magnitude of the vehicle's longitudinal velocity, and a linear time-varying (LTV) model predictive controller (MPC) is proposed to acquire the wheel steering angle and external yaw moment. Then, a pseudo-inverse (PI) low-level control allocation law is designed to realize tracking of the desired external yaw moment and management of the redundant tire actuators. Furthermore, the vehicle sideslip angle is estimated by the data fusion of low-cost GPS and INS, obtained by integrating modified INS signals with GPS signals as the initial value. Finally, the effectiveness of the proposed control system is validated by simulation and experimental tests.
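A minimal sketch of the sideslip-fusion idea follows: integrate the kinematic relation between lateral acceleration, yaw rate, and speed from INS signals, re-anchoring the integral whenever a GPS-derived sideslip sample arrives. The signal names, sampling rates, and reset logic are assumptions for illustration, not the paper's fusion scheme.

```python
# Integrate beta_dot ~ a_y/v_x - r from INS signals between GPS fixes, and
# reset with a GPS-derived sideslip (course minus heading) at each fix.
import numpy as np

def estimate_sideslip(ay, r, vx, gps_beta, dt=0.01, gps_every=100):
    """ay: lateral accel (m/s^2); r: yaw rate (rad/s); vx: speed (m/s);
    gps_beta: GPS sideslip samples (rad), one per `gps_every` INS steps."""
    beta = np.empty(len(ay))
    b = gps_beta[0]                      # GPS value as the initial condition
    for k in range(len(ay)):
        if k % gps_every == 0:           # GPS fix available: re-anchor
            b = gps_beta[k // gps_every]
        else:                            # integrate modified INS signals
            b += (ay[k] / max(vx[k], 0.1) - r[k]) * dt
        beta[k] = b
    return beta

# Synthetic demo: steady turn at 20 m/s with 100 Hz INS and 1 Hz GPS.
N = 300
ay, r, vx = np.zeros(N), np.full(N, 0.1), np.full(N, 20.0)
gps = np.zeros(N // 100 + 1)
print(estimate_sideslip(ay, r, vx, gps)[:5])
```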
Ultimate limits for quantum magnetometry via time-continuous measurements
NASA Astrophysics Data System (ADS)
Albarelli, Francesco; Rossi, Matteo A. C.; Paris, Matteo G. A.; Genoni, Marco G.
2017-12-01
We address the estimation of the magnetic field B acting on an ensemble of atoms with total spin J subjected to collective transverse noise. By preparing an initial spin coherent state, for any measurement performed after the evolution, the mean-square error of the estimate is known to scale as 1/J, i.e. no quantum enhancement is obtained. Here, we consider the possibility of continuously monitoring the atomic environment, and conclusively show that strategies based on time-continuous non-demolition measurements followed by a final strong measurement may achieve Heisenberg-limited scaling 1/J² and also a monitoring-enhanced scaling in terms of the interrogation time. We also find that time-continuous schemes are robust against detection losses, as we prove that the quantum enhancement can be recovered also for finite measurement efficiency. Finally, we analytically prove the optimality of our strategy.
A method for vibrational assessment of cortical bone
NASA Astrophysics Data System (ADS)
Song, Yan; Gunaratne, Gemunu H.
2006-09-01
Large bones from many anatomical locations of the human skeleton consist of an outer shaft (cortex) surrounding a highly porous internal region (trabecular bone) whose structure is reminiscent of a disordered cubic network. Age related degradation of cortical and trabecular bone takes different forms. Trabecular bone weakens primarily by loss of connectivity of the porous network, and recent studies have shown that vibrational response can be used to obtain reliable estimates for loss of its strength. In contrast, cortical bone degrades via the accumulation of long fractures and changes in the level of mineralization of the bone tissue. In this paper, we model cortical bone by an initially solid specimen with uniform density to which long fractures are introduced; we find that, as in the case of trabecular bone, vibrational assessment provides more reliable estimates of residual strength in cortical bone than is possible using measurements of density or porosity.
Mach Probe Measurements in a Large-Scale Helicon Plasma
NASA Astrophysics Data System (ADS)
Hatch, M. W.; Kelly, R. F.; Fisher, D. M.; Gilmore, M.; Dwyer, R. H.
2017-10-01
A new six-tipped Mach probe that utilizes a fused-quartz insulator has been developed and initially tested in the HelCat dual-source plasma device at the University of New Mexico. The new design allows for relatively long-duration measurements of parallel and perpendicular flows that suffer less from thermal changes in conductivity and from the surface build-up seen in previous alumina-insulated designs. Mach probe measurements will be presented in comparison with ongoing laser-induced fluorescence (LIF) measurements, previous Mach probe measurements, ExB flow estimates derived from Langmuir probes, and fast-frame CCD camera images, in an effort to better understand previously observed anomalous ion flow in HelCat. Additionally, Mach probe-LIF comparisons will provide an experimentally obtained Mach probe calibration constant, K, to validate sheath-derived estimates for the weakly magnetized case. Supported by U.S. National Science Foundation Award 1500423.
NASA Astrophysics Data System (ADS)
Choi, J.; Jo, J.
2016-09-01
The optical satellite tracking data obtained by the first Korean optical satellite tracking system, Optical Wide-field patrol - Network (OWL-Net), were examined for precision orbit determination. During test observations at the Israel site, we successfully observed a satellite carrying a Laser Retro Reflector (LRR) to calibrate the angle-only metric data. The OWL observation system uses a chopper to obtain dense observation data, with over 100 points in a single shot for low Earth orbit objects. After several corrections, the orbit determination process was performed with the validated metric data. A TLE with an epoch matching the end of the first arc was used for the initial orbital parameters. Orbit Determination Tool Kit (ODTK) was used to analyze the performance of orbit estimation using the angle-only measurements. We have also been developing a batch-style orbit estimator.
Application of thermal model for pan evaporation to the hydrology of a defined medium, the sponge
NASA Technical Reports Server (NTRS)
Trenchard, M. H.; Artley, J. A. (Principal Investigator)
1981-01-01
A technique is presented which estimates pan evaporation from the commonly observed values of daily maximum and minimum air temperatures. These two variables are transformed to saturation vapor pressure equivalents, which are used in a simple linear regression model. The model provides reasonably accurate estimates of pan evaporation rates over a large geographic area. The derived evaporation algorithm is combined with precipitation to obtain a simple moisture variable. A hypothetical medium with a capacity of 8 inches of water is initialized at 4 inches. The medium behaves like a sponge: it absorbs all incident precipitation, with runoff or drainage occurring only after it is saturated. Water is lost from this simple system through evaporation just as from a Class A pan, but at a rate proportional to its degree of saturation. The content of the sponge is a moisture index calculated from only the maximum and minimum temperatures and precipitation.
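The sponge bookkeeping is simple enough to state in a few lines of code. The sketch below assumes daily precipitation and pan-evaporation series in inches; the numeric series shown are placeholders.

```python
# An 8-inch-capacity store initialized at 4 inches absorbs all precipitation
# (spilling only when full) and loses water at the pan-evaporation rate scaled
# by its degree of saturation; `pan_evap` would come from the temperature-based
# regression described above.
def sponge_index(precip, pan_evap, capacity=8.0, initial=4.0):
    """precip, pan_evap: daily series in inches; returns daily contents."""
    w = initial
    contents = []
    for p, e in zip(precip, pan_evap):
        w = min(w + p, capacity)          # absorb rain; excess runs off
        w -= e * (w / capacity)           # evaporate in proportion to saturation
        w = max(w, 0.0)
        contents.append(w)
    return contents

print(sponge_index([0.0, 1.2, 0.0, 0.3], [0.25, 0.20, 0.30, 0.28]))
```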
Permittivity and conductivity parameter estimations using full waveform inversion
NASA Astrophysics Data System (ADS)
Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.
2018-04-01
Full waveform inversion of Ground Penetrating Radar (GPR) data is a promising strategy for estimating quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses Full Waveform Inversion (FWI) in the time domain of 2D GPR data to obtain highly resolved images of the permittivity and conductivity parameters of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data, and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. FWI of GPR data is computationally expensive because it is based on computing the full electromagnetic wave propagation. Also, the commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.
Optimal allocation of testing resources for statistical simulations
NASA Astrophysics Data System (ADS)
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
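One standard way to realize the resampling step described above is sketched below using SciPy's multivariate t and Wishart distributions; the exact parameterization used in the paper may differ, and the data here are synthetic.

```python
# Given n observations of correlated inputs, draw plausible population means
# from a multivariate t-distribution and population covariances from a
# Wishart distribution centered on the sample covariance.
import numpy as np
from scipy.stats import multivariate_t, wishart

rng = np.random.default_rng(1)
data = rng.normal(size=(15, 2)) @ np.array([[1.0, 0.4], [0.0, 0.8]])
n, d = data.shape
xbar, S = data.mean(0), np.cov(data, rowvar=False)

# Mean realization: multivariate t centered at the sample mean
mu = multivariate_t(loc=xbar, shape=S / n, df=n - d).rvs(random_state=rng)
# Covariance realization: Wishart with the sample covariance as scale
Sigma = wishart(df=n - 1, scale=S / (n - 1)).rvs(random_state=rng)
print(mu, Sigma, sep="\n")
```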
Lost Muon Study for the Muon G-2 Experiment at Fermilab*
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganguly, S.; Crnkovic, J.; Morse, W. M.
The Fermilab Muon g-2 Experiment has a goal of measuring the muon anomalous magnetic moment to a precision of 140 ppb - a fourfold improvement over the 540 ppb precision obtained by the BNL Muon g-2 Experiment. Some muons in the storage ring will interact with material and undergo bremsstrahlung, emitting radiation and losing energy. These so-called lost muons will curl in towards the center of the ring and be lost, but some of them will be detected by the calorimeters. A systematic error will arise if the lost muons have a different average spin phase than the stored muons. Algorithms are being developed to estimate the relative number of lost muons, so as to optimize the stored muon beam. This study presents initial testing of algorithms that can be used to estimate the lost muons by using either double or triple detection coincidences in the calorimeters.
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled-user performance times analytically, before system testing with users. This paper describes the CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in videotapes and registered in simulation files. Results indicate no statistically significant difference between the empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
NASA Astrophysics Data System (ADS)
Pura, John A.; Hamilton, Allison M.; Vargish, Geoffrey A.; Butman, John A.; Linguraru, Marius George
2011-03-01
Accurate ventricle volume estimates could improve the understanding and diagnosis of postoperative communicating hydrocephalus. For this category of patients, associated changes in ventricle volume can be difficult to identify, particularly over short time intervals. We present an automated segmentation algorithm that evaluates ventricle size from serial brain MRI examinations. The technique combines serial T1-weighted images to increase SNR and segments the mean image to generate a ventricle template. After pre-processing, the segmentation is initiated by a fuzzy c-means clustering algorithm to find the seeds used in a combination of fast marching methods and geodesic active contours. Finally, the ventricle template is propagated onto the serial data via non-linear registration. Serial volume estimates were obtained from difficult data in an automated, robust, and accurate manner.
Leighton, David A.; Phillips, Steven P.
2003-01-01
Antelope Valley, California, is a topographically closed basin in the western part of the Mojave Desert, about 50 miles northeast of Los Angeles. The Antelope Valley ground-water basin is about 940 square miles and is separated from the northern part of Antelope Valley by faults and low-lying hills. Prior to 1972, ground water provided more than 90 percent of the total water supply in the valley; since 1972, it has provided between 50 and 90 percent. Most ground-water pumping in the valley occurs in the Antelope Valley ground-water basin, which includes the rapidly growing cities of Lancaster and Palmdale. Ground-water-level declines of more than 200 feet in some parts of the ground-water basin have resulted in an increase in pumping lifts, reduced well efficiency, and land subsidence of more than 6 feet in some areas. Future urban growth and limits on the supply of imported water may continue to increase reliance on ground water. To better understand the ground-water flow system and to develop a tool to aid in effectively managing the water resources, a numerical model of ground-water flow and land subsidence in the Antelope Valley ground-water basin was developed using old and new geohydrologic information. The ground-water flow system consists of three aquifers: the upper, middle, and lower aquifers. The aquifers, which were identified on the basis of the hydrologic properties, age, and depth of the unconsolidated deposits, consist of gravel, sand, silt, and clay alluvial deposits and clay and silty clay lacustrine deposits. Prior to ground-water development in the valley, recharge was primarily the infiltration of runoff from the surrounding mountains. Ground water flowed from the recharge areas to discharge areas around the playas where it discharged either from the aquifer system as evapotranspiration or from springs. Partial barriers to horizontal ground-water flow, such as faults, have been identified in the ground-water basin. Water-level declines owing to ground-water development have eliminated the natural sources of discharge, and pumping for agricultural and urban uses have become the primary source of discharge from the ground-water system. Infiltration of return flows from agricultural irrigation has become an important source of recharge to the aquifer system. The ground-water flow model of the basin was discretized horizontally into a grid of 43 rows and 60 columns of square cells 1 mile on a side, and vertically into three layers representing the upper, middle, and lower aquifers. Faults that were thought to act as horizontal-flow barriers were simulated in the model. The model was calibrated to simulate steady-state conditions, represented by 1915 water levels and transient-state conditions during 1915-95 using water-level and subsidence data. Initial estimates of the aquifer-system properties and stresses were obtained from a previously published numerical model of the Antelope Valley ground-water basin; estimates also were obtained from recently collected hydrologic data and from results of simulations of ground-water flow and land subsidence models of the Edwards Air Force Base area. Some of these initial estimates were modified during model calibration. Ground-water pumpage for agriculture was estimated on the basis of irrigated crop acreage and crop consumptive-use data. Pumpage for public supply, which is metered, was compiled and entered into a database used for this study. 
Estimated annual pumpage peaked at 395,000 acre-feet (acre-ft) in 1952 and then declined because of declining agricultural production. Recharge from irrigation-return flows was estimated to be 30 percent of agricultural pumpage; the irrigation-return flows were simulated as recharge to the regional water table 10 years following application at land surface. The annual quantity of natural recharge initially was based on estimates from previous studies. During model calibration, natural recharge was reduced from the initial estimates.
Birkegård, Anna Camilla; Andersen, Vibe Dalhoff; Halasa, Tariq; Jensen, Vibeke Frøkjær; Toft, Nils; Vigre, Håkan
2017-10-01
Accurate and detailed data on antimicrobial exposure in pig production are essential when studying the association between antimicrobial exposure and antimicrobial resistance. Due to difficulties in obtaining primary data on antimicrobial exposure in a large number of farms, there is a need for a robust and valid method to estimate the exposure using register data. An approach that estimates the antimicrobial exposure in every rearing period during the lifetime of a pig using register data was developed into a computational algorithm. In this approach, data from national registers on antimicrobial purchases, movements of pigs, and farm demographics registered at farm level are used. The algorithm traces batches of pigs retrospectively from slaughter to the farm(s) that housed the pigs during their finisher, weaner, and piglet periods. Subsequently, the algorithm estimates the antimicrobial exposure as the number of Animal Defined Daily Doses for treatment of one kg pig in each of the rearing periods. Thus, the antimicrobial purchase data at farm level are translated into antimicrobial exposure estimates at batch level. A batch of pigs is defined here as the pigs sent to slaughter on the same day from the same farm. In this study we present, validate, and optimise a computational algorithm that calculates the lifetime exposure to antimicrobials for slaughter pigs. The algorithm was evaluated by comparing the computed estimates to data on antimicrobial usage from farm records in 15 farm units. We found a good positive correlation between the two estimates. The algorithm was run for Danish slaughter pigs sent to slaughter from January to March 2015 from farms with more than 200 finishers to estimate the proportion of farms it was applicable for. In the final process, the algorithm was successfully run for batches of pigs originating from 3026 farms with finisher units (77% of the initial population). This number can be increased if more accurate register data can be obtained. The algorithm provides a systematic and repeatable approach to estimating the antimicrobial exposure throughout the rearing period, independent of rearing site for finisher batches, as a lifetime exposure measurement. Copyright © 2017 Elsevier B.V. All rights reserved.
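For illustration, the core exposure arithmetic for a single rearing period might look like the sketch below; the function names, dose constants, and quantities are placeholders rather than the Danish register layout.

```python
# Purchased antimicrobial amounts are converted to Animal Defined Daily Doses
# (ADD, the dose to treat 1 kg of pig for one day) and divided over the
# kg-days housed during the rearing period.
def add_per_kg_pig(purchases_mg, add_mg_per_kg, kg_days):
    """purchases_mg: active compound bought for the period (mg);
    add_mg_per_kg: defined daily dose (mg per kg pig per day);
    kg_days: sum over housed pigs of weight (kg) x days present."""
    doses = purchases_mg / add_mg_per_kg     # number of 1-kg-pig daily doses
    return doses / kg_days                   # ADD per kg pig per day

# e.g. a finisher period with 2.5 kg of a compound dosed at 25 mg/kg/day
print(add_per_kg_pig(purchases_mg=2.5e6, add_mg_per_kg=25.0, kg_days=4.0e5))
```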
Rahaghi, Farbod N; Vegas-Sanchez-Ferrero, Gonzalo; Minhas, Jasleen K; Come, Carolyn E; De La Bruere, Isaac; Wells, James M; González, Germán; Bhatt, Surya P; Fenster, Brett E; Diaz, Alejandro A; Kohli, Puja; Ross, James C; Lynch, David A; Dransfield, Mark T; Bowler, Russel P; Ledesma-Carbayo, Maria J; San José Estépar, Raúl; Washko, George R
2017-05-01
Imaging-based assessment of cardiovascular structure and function provides clinically relevant information in smokers. Non-cardiac-gated thoracic computed tomographic (CT) scanning is increasingly leveraged for clinical care and lung cancer screening. We sought to determine if more comprehensive measures of ventricular geometry could be obtained from CT using an atlas-based surface model of the heart. Subcohorts of 24 subjects with cardiac magnetic resonance imaging (MRI) and 262 subjects with echocardiography were identified from COPDGene, a longitudinal observational study of smokers. A surface model of the heart was manually initialized, and then automatically optimized to fit the epicardium for each CT. Estimates of right and left ventricular (RV and LV) volume and free-wall curvature were then calculated and compared to structural and functional metrics obtained from MRI and echocardiograms. CT measures of RV dimension and curvature correlated with similar measures obtained using MRI. RV and LV volume obtained from CT inversely correlated with echocardiogram-based estimates of RV systolic pressure using tricuspid regurgitation jet velocity and LV ejection fraction respectively. Patients with evidence of RV or LV dysfunction on echocardiogram had larger RV and LV dimensions on CT. Logistic regression models based on demographics and ventricular measures from CT had an area under the curve of >0.7 for the prediction of elevated right ventricular systolic pressure and ventricular failure. These data suggest that non-cardiac-gated, non-contrast-enhanced thoracic CT scanning may provide insight into cardiac structure and function in smokers. Copyright © 2017. Published by Elsevier Inc.
Cope, W.G.; Bartsch, M.R.; Hightower, J.E.
2006-01-01
The aim of this study was to document and model the population dynamics of zebra mussels Dreissena polymorpha (Pallas, 1771) in Pool 8 of the Upper Mississippi River (UMR), USA, for five consecutive years (1992-1996) following their initial discovery in September 1991. Artificial substrates (concrete blocks, 0.49 m2 surface area) were deployed on or around the first of May at two sites within each of two habitat types (main channel border and contiguous backwater). Blocks were removed monthly (30 ?? 10 d) from the end of May to the end of October to obtain density and growth information. Some blocks deployed in May 1995 were retrieved in April 1996 to obtain information about overwinter growth and survival. The annual density of zebra mussels in Pool 8 of the UMR increased from 3.5/m2 in 1992 to 14,956/m 2 in 1996. The average May-October growth rate of newly recruited individuals, based on a von Bertalanffy growth model fitted to monthly shell-length composition data, was 0.11 mm/d. Model estimates of the average survival rate varied from 21 to 100% per month. Estimated recruitment varied substantially among months, with highest levels occurring in September-October of 1994 and 1996, and in July of 1995. Recruitment and density in both habitat types increased by two orders of magnitude in 1996. Follow-up studies will be necessary to assess the long-term stability of zebra mussel populations in the UMR; this study provides the critical baseline information needed for those future comparisons. ?? Published by Oxford University Press on behalf of The Malacological Society of London 2006.
Wave-equation migration velocity inversion using passive seismic sources
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2015-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, waste water disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from these data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which greatly depends on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that is often not available for the small-magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits the fact that the P- and S-wave arrivals originate at the same time and location in the subsurface. We generate image volumes by back-propagating P- and S-wave data through initial Earth models and then applying a correlation-based extended-imaging condition. Energy focusing away from zero lag in the extended image volume is used as a (penalized) residual in an adjoint-state tomography scheme to update the P- and S-wave velocity models. We use an acousto-elastic approximation to greatly reduce the computational cost. Because the method requires neither an initial source-location or origin-time estimate nor picking of arrivals, it is suitable for low signal-to-noise datasets, such as microseismic data. Synthetic results show that with a realistic distribution of microseismic sources, P- and S-velocity perturbations can be recovered. Although demonstrated at an oil and gas reservoir scale, the technique can be applied to problems of all scales, from geologic core samples to global seismology.
Elastic Velocity Updating through Image-Domain Tomographic Inversion of Passive Seismic Data
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2014-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, waste water disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from these data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which greatly depends on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that is often not available for the small-magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits images of the earthquake source using various imaging conditions based upon the P- and S-wavefield data. We generate image volumes by back-propagating data through initial models and then applying a correlation-based imaging condition. We use the P-wavefield autocorrelation, S-wavefield autocorrelation, and P-S wavefield cross-correlation images. Inconsistencies in the images form the residuals, which are used to update the P- and S-wave velocity models through adjoint-state tomography. Because the image volumes are constructed from all trace data, the signal-to-noise ratio in this space is increased compared to the individual traces. Moreover, this eliminates the need for picking and does not require any estimate of the source location and timing. Initial tests show that with a reasonable source distribution and acquisition array, velocity anomalies can be recovered. Future tests will apply this methodology to other scales, from laboratory to global.
De Risio, L; Newton, R; Freeman, J; Shea, A
2015-01-01
There is a lack of data on idiopathic epilepsy (IE) in the Italian Spinone (IS). To estimate the prevalence of IE in the IS in the United Kingdom (UK) and to investigate predictors of survival and seizure remission. The target population consisted of 3331 IS born between 2000 and 2011 and registered with the UK Kennel Club (KC). The owners of 1192 dogs returned the phase I questionnaire. Sixty-three IS had IE. Population survey. The owners of all UK KC-registered IS were invited to complete the phase I questionnaire. Information from the phase I questionnaire and veterinary medical records was used to identify IS with IE and obtain data on treatment and survival. Additional information was obtained from owners of epileptic IS who completed the phase II questionnaire. The prevalence of IE in the IS in the UK was estimated as 5.3% (95% CI, 4.03-6.57%). Survival time was significantly shorter in IS euthanized because of poorly controlled IE compared with epileptic IS that died of unrelated disorders (P = 0.001). Survival was significantly longer in IS with no cluster seizures (CS) (P = 0.040) and in IS in which antiepileptic medication was initiated after the second seizure rather than after ≥3 seizures (P = 0.044). Seizure remission occurred in only 3 IS. The prevalence of IE in the IS (5.3%) is higher than in dogs overall (0.6%) in the UK. Idiopathic epilepsy in the IS has a severe phenotype. Initiation of antiepileptic medication after the second seizure and aggressive treatment of CS may improve survival. Copyright © 2015 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
Zulu, Tryphine; Heap, Marion; Sinanovic, Edina
2017-01-01
The World Health Organisation estimates disabling hearing loss to be around 5.3%, while a study of hearing impairment and auditory pathology in Limpopo, South Africa found a prevalence of nearly 9%. Although Sign Language Interpreters (SLIs) improve the communication challenges in health care, they are unaffordable for many signing Deaf people and people with disabling hearing loss. On the other hand, there are no legal provisions in place to ensure the provision of SLIs in the health sector in most countries including South Africa. To advocate for funding of such initiatives, reliable cost estimates are essential and such data is scarce. To bridge this gap, this study estimated the costs of providing such a service within a South African District health service based on estimates obtained from a pilot-project that initiated the first South African Sign Language Interpreter (SASLI) service in health-care. The ingredients method was used to calculate the unit cost per SASLI-assisted visit from a provider perspective. The unit costs per SASLI-assisted visit were then used in estimating the costs of scaling up this service to the District Health Services. The average annual SASLI utilisation rate per person was calculated on Stata v.12 using the projects' registry from 2008-2013. Sensitivity analyses were carried out to determine the effect of changing the discount rate and personnel costs. Average Sign Language Interpreter services' utilisation rates increased from 1.66 to 3.58 per person per year, with a median of 2 visits, from 2008-2013. The cost per visit was US$189.38 in 2013 whilst the estimated costs of scaling up this service ranged from US$14.2million to US$76.5million in the Cape Metropole District. These cost estimates represented 2.3%-12.2% of the budget for the Western Cape District Health Services for 2013. In the presence of Sign Language Interpreters, Deaf Sign language users utilise health care service to a similar extent as the hearing population. However, this service requires significant capital investment by government to enable access to healthcare for the Deaf.
NASA Astrophysics Data System (ADS)
Ning, Jianguo; Wang, Jun; Jiang, Jinquan; Hu, Shanchao; Jiang, Lishuai; Liu, Xuesheng
2018-01-01
A new energy-dissipation method to identify crack initiation and propagation thresholds is introduced. Conventional and cyclic loading-unloading triaxial compression tests and acoustic emission experiments were performed on coal specimens from a 980-m-deep mine under confining pressures of 10, 15, 20, 25, 30, and 35 MPa. Stress-strain relations, acoustic emission patterns, and energy evolution characteristics obtained during the triaxial compression tests were analyzed. The majority of the input energy stored in the coal specimens took the form of elastic strain energy. After the elastic-deformation stage, part of the input energy was consumed by stable crack propagation. However, with an increase in stress levels, unstable crack propagation commenced, and the energy dissipation and coal damage were accelerated. The variation in the pre-peak energy-dissipation ratio was consistent with the coal damage. This new method demonstrates that the crack initiation threshold was proportional to the peak stress (σp), ranging from 0.4351 to 0.4753 σp, and the crack damage threshold ranged from 0.8087 to 0.8677 σp.
Oliviero, T; Verkerk, R; Van Boekel, M A J S; Dekker, M
2014-11-15
Broccoli belongs to the Brassicaceae plant family, which consists of widely eaten vegetables containing high concentrations of glucosinolates. Enzymatic hydrolysis of glucosinolates by endogenous myrosinase (MYR) can form isothiocyanates with health-promoting activities. The effect of water content (WC) and temperature on MYR inactivation in broccoli was investigated. Broccoli was freeze-dried to obtain batches with WC between 10% and 90% (aw from 0.10 to 0.96). These samples were incubated for various times at different temperatures (40-70°C) and MYR activity was measured. The initial MYR inactivation rates were estimated with a first-order reaction kinetic model. MYR inactivation rate constants were lower in the driest samples (10% WC) at all studied temperatures. Samples with 67% and 90% WC showed initial inactivation rate constants all of the same order of magnitude. Samples with 31% WC showed intermediate initial inactivation rate constants. These results are useful for optimising the conditions of drying processes to produce dried broccoli with optimal MYR retention for human health. Copyright © 2014 Elsevier Ltd. All rights reserved.
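Estimating a first-order inactivation rate constant from residual-activity data reduces to a log-linear fit, as in the sketch below; the activity values are illustrative, not the paper's measurements.

```python
# First-order kinetics: ln(A/A0) = -k * t, so k is the negative slope of
# log-activity versus incubation time.
import numpy as np

t_min = np.array([0, 10, 20, 30, 60])                 # incubation time (min)
activity = np.array([1.00, 0.78, 0.60, 0.47, 0.22])   # residual A/A0 (illustrative)

k = -np.polyfit(t_min, np.log(activity), 1)[0]
print(f"k = {k:.4f} 1/min")   # initial inactivation rate constant
```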
Edge-oriented dual-dictionary guided enrichment (EDGE) for MRI-CT image reconstruction.
Li, Liang; Wang, Bigong; Wang, Ge
2016-01-01
In this paper, we formulate joint/simultaneous X-ray CT and MRI image reconstruction. In particular, a novel algorithm is proposed for MRI image reconstruction from highly under-sampled MRI data and CT images. It consists of two steps. First, a training dataset is generated from a series of well-registered MRI and CT images of the same patients. Then, an initial MRI image of a patient can be reconstructed via edge-oriented dual-dictionary guided enrichment (EDGE) based on the training dataset and a CT image of the patient. Second, an MRI image is reconstructed using the dictionary learning (DL) algorithm from highly under-sampled k-space data and the initial MRI image. Our algorithm can establish a one-to-one correspondence between the two imaging modalities and obtain a good initial MRI estimate. Both noise-free and noisy simulation studies were performed to evaluate and validate the proposed algorithm. The results with different under-sampling factors show that the proposed algorithm performed significantly better than reconstruction using the DL algorithm from MRI data alone.
Spudich, P.; Guatteri, Mariagiovanna; Otsuki, K.; Minagawa, J.
1998-01-01
Dislocation models of the 1995 Hyogo-ken Nanbu (Kobe) earthquake derived by Yoshida et al. (1996) show substantial changes in direction of slip with time at specific points on the Nojima and Rokko fault systems, as do striations we observed on exposures of the Nojima fault surface on Awaji Island. Spudich (1992) showed that the initial stress, that is, the shear traction on the fault before the earthquake origin time, can be derived at points on the fault where the slip rake rotates with time, if slip velocity and stress change are known at these points. From Yoshida's slip model, we calculated dynamic stress changes on the ruptured fault surfaces. To estimate errors, we compared the slip velocities and dynamic stress changes of several published models of the earthquake. The differences between these models had an exponential distribution, not Gaussian. We developed a Bayesian method to estimate the probability density function (PDF) of initial stress from the striations and from Yoshida's slip model. Striations near Toshima and Hirabayashi give initial stresses of about 13 and 7 MPa, respectively. We obtained initial stresses of about 7 to 17 MPa at depths of 2 to 10 km on a subset of points on the Nojima and Rokko fault systems. Our initial stresses and coseismic stress changes agree well with postearthquake stresses measured by hydrofracturing in deep boreholes near Hirabayashi and Ogura on Awaji Island. Our results indicate that the Nojima fault slipped at very low shear stress, and fractional stress drop was complete near the surface and about 32% below depths of 2 km. Our results at depth depend on the accuracy of the rake rotations in Yoshida's model, which are probably correct on the Nojima fault but debatable on the Rokko fault. Our results imply that curved or cross-cutting fault striations can be formed in a single earthquake, contradicting a common assumption of structural geology.
Measurement of medullation in wool and mohair using an Optical Fibre Diameter Analyser.
Lupton, C J; Pfeiffer, F A
1998-05-01
We conducted three experiments to evaluate the Optical Fibre Diameter Analyser (OFDA) for estimating medullation (med [M], kemp [K], and total [T] medullated fiber content) in mohair and wool produced by Angora goats and sheep, respectively. Medullation can be a beneficial characteristic in certain types of wool, but it is highly undesirable in mohair and apparel wools. Current techniques for evaluating medullation in animal fibers are laborious, slow, and expensive. The OFDA had been modified by the manufacturer to measure fiber opacity distribution, a characteristic known to be indicative of medullation in white fibers, and was capable of providing such measurements in a very short time. Measurements made on magnified fiber images produced with a projection microscope (PM) were used as a reference for M, K, and T in fiber samples. An initial experiment with 124 mohair samples (T = .10 to 9.10%) seemed to indicate that OFDA estimates of M, K, and T were only poorly correlated with corresponding PM values (r² = .5409, .1401, and .5576, respectively). However, a second experiment using wool and mohair samples containing a wider range of medullation (T = .58 to 26.54%) revealed that OFDA estimates of M, K, and T for wool were highly correlated with PM measurements (r² = .9853, .9307, and .9728, respectively). Evidence was also obtained indicating that the low r² values associated with mohair relationships were likely due to a combination of factors: 1) high variation among the standard PM measurements and 2) the relatively low M, K, and T contents of the mohair samples compared with wool. In a third experiment, greater accuracy was obtained in the PM measurements by evaluating many more individual fibers per sample (10,000). In this case, OFDA estimates of M, K, and T for mohair were highly correlated with corresponding PM measurements (r² = .8601, .9939, and .9696, respectively). However, the two sets of linear regression equations obtained for wool and mohair were somewhat different, indicating that separate calculations should be used to estimate PM measurements from OFDA data. In conclusion, it was demonstrated that the OFDA instrument is capable of providing relatively fast, accurate, and potentially less expensive estimates of medullated fiber characteristics in mohair and wool.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...
Code of Federal Regulations, 2012 CFR
2012-10-01
... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...
Code of Federal Regulations, 2013 CFR
2013-10-01
... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...
Code of Federal Regulations, 2011 CFR
2011-10-01
... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...
Code of Federal Regulations, 2014 CFR
2014-10-01
... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...
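The two intact formulas in the excerpts above are easy to wrap in a small calculator. Because the records elide the definitions of Wi, T, and Q, no units are asserted here; the inputs below are placeholders.

```python
# Peak concentration Cp = 25 * Wi / (T^0.7 * Q) and time estimate
# Tp = 9.25e6 * Wi / (Q * Cp), as given in the CFR excerpts.
def peak_concentration(wi, t, q):
    return 25.0 * wi / (t ** 0.7 * q)

def time_estimate(wi, q, cp):
    return 9.25e6 * wi / (q * cp)    # result in hours, per the excerpt

cp = peak_concentration(wi=100.0, t=24.0, q=500.0)
print(cp, time_estimate(wi=100.0, q=500.0, cp=cp))
```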
ERIC Educational Resources Information Center
Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.
2010-01-01
In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-02-01
Small exoplanets tend to fall into two categories: the smallest ones are predominantly rocky, like Earth, and the larger ones have a lower-density, more gaseous composition, similar to Neptune. The planet Kepler-454b was initially estimated to fall between these two groups in radius. So what is its composition?
Small-Planet Dichotomy
Though Kepler has detected thousands of planet candidates with radii between 1 and 2.7 Earth radii, we have only obtained precise mass measurements for 12 of these planets.
Mass-radius diagram for planets with radii below 2.7 Earth radii and well-measured masses. The six smallest planets (and Venus and Earth) fall along a single mass-radius curve of Earth-like composition. The six larger planets (including Kepler-454b) have lower-density compositions. [Gettel et al. 2016]
These measurements, however, show an interesting dichotomy: planets with radii less than 1.6 Earth radii have rocky, Earth-like compositions, following a single relation between their mass and radius. Planets between 2 and 2.7 Earth radii, however, have lower densities and don't follow a single mass-radius relation. Their low densities suggest they contain a significant fraction of volatiles, likely in the form of a thick gas envelope of water, hydrogen, and/or helium.
The planet Kepler-454b, discovered transiting a Sun-like star, was initially estimated to have a radius of 1.86 Earth radii, placing it between these two categories. A team of astronomers led by Sara Gettel (Harvard-Smithsonian Center for Astrophysics) has since followed up on the initial Kepler detection, hoping to determine the planet's composition.
Low-Density Outcome
Gettel and collaborators obtained 63 observations of the host star's radial velocity with the HARPS-N spectrograph on the Telescopio Nazionale Galileo, and another 36 observations with the HIRES spectrograph at Keck Observatory. These observations allowed them to do several things:
Obtain a more accurate radius estimate for Kepler-454b: 2.37 Earth radii.
Measure the planet's mass: roughly 6.8 Earth masses.
Discover (surprise!) two other, non-transiting companions in the system: Kepler-454c, a planet with a minimum mass of ~4.5 Jupiter masses on a 524-day orbit, and Kepler-454d, a more distant (10-year orbit) brown dwarf or low-mass star.
Kepler-454b's newly measured size and mass place it firmly in the category of non-rocky, larger, less dense planets (the authors calculate a density of ~2.76 g/cm3, or roughly half that of Earth). This seems to reinforce the idea that rocky planets don't grow larger than ~1.6 Earth radii, and that planets with mass greater than about 6 Earth masses are typically low-density and/or swathed in an envelope of gas.
The authors point out that future observing missions like NASA's TESS (launching in 2017) will provide more targets that can be followed up to obtain mass measurements, allowing us to determine if this trend in mass and radius holds up in a larger sample.
Citation: Sara Gettel et al 2016 ApJ 816 95. doi:10.3847/0004-637X/816/2/95
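A quick back-of-the-envelope check of the quoted density from the measured mass and radius:

```python
# Bulk density from ~6.8 Earth masses and ~2.37 Earth radii; small differences
# from the quoted ~2.76 g/cm^3 reflect rounding of the inputs.
import math

M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

mass = 6.8 * M_EARTH
radius = 2.37 * R_EARTH
density = mass / ((4.0 / 3.0) * math.pi * radius ** 3)
print(f"{density / 1000.0:.2f} g/cm^3")   # ~2.8, about half Earth's density
```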
NASA Astrophysics Data System (ADS)
Okamoto, Kyosuke; Tsuno, Seiji
2015-10-01
In earthquake early warning (EEW) systems, the epicenter location and magnitude of earthquakes are estimated using the amplitude growth rate of initial P-waves. It has been empirically pointed out that the growth rate becomes smaller as the epicentral distance increases, regardless of the magnitude of the earthquake, so the epicentral distance can be estimated from the growth rate using this empirical relationship. However, the growth rates calculated from different earthquakes at the same epicentral distance can differ considerably from each other; sometimes the growth rates of earthquakes having the same epicentral distance vary by a factor of 10^4. Qualitatively, it has been considered that the gap in the growth rates is due to differences in the local heterogeneities that the P-waves propagate through. In this study, we demonstrate theoretically how local heterogeneities in the subsurface disturb the relationship between the growth rate and the epicentral distance. Firstly, we calculate seismic scattered waves in a heterogeneous medium. First-order PP, PS, SP, and SS scatterings are considered. The correlation distance of the heterogeneities and the fractional fluctuation of the elastic parameters control the heterogeneous conditions for the calculation. From the synthesized waves, the growth rate of the initial P-wave is obtained. As a result, we find that a parameter (in this study, the correlation distance) controlling the heterogeneities plays a key role in the magnitude of the fluctuation of the growth rate. Then, we calculate the regional correlation distances in Japan that can account for the fluctuation of the growth rate of real earthquakes from 1997 to 2011 observed by K-NET and KiK-net. The resulting spatial distribution of the correlation distance shows locality, so it is revealed that the growth rates fluctuate according to this locality. When this local fluctuation is taken into account, the accuracy of the estimation of epicentral distances from initial P-waves can improve, which will in turn improve the accuracy of the EEW system.
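The abstract does not state how the growth rate is computed from the waveform; one common formulation in the EEW literature (the B-Delta method) fits the initial P-wave envelope with y(t) = B*t*exp(-A*t) and uses B as the growth-rate measure. A minimal sketch of such a fit on synthetic data, not the authors' actual processing:

```python
import numpy as np
from scipy.optimize import curve_fit

def envelope(t, A, B):
    """B-Delta style envelope model for the first seconds of P-wave motion."""
    return B * t * np.exp(-A * t)

# Synthetic "observed" envelope standing in for a real K-NET record
rng = np.random.default_rng(0)
t = np.linspace(0.05, 3.0, 60)            # seconds after P onset
y = envelope(t, 1.2, 0.8) * (1 + 0.1 * rng.standard_normal(t.size))

(A_hat, B_hat), _ = curve_fit(envelope, t, y, p0=(1.0, 1.0))
print(f"A={A_hat:.2f}, growth rate B={B_hat:.2f}")
# log10(B) is then regressed against log10(epicentral distance).
```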
NASA Astrophysics Data System (ADS)
Caporali, E.; Chiarello, V.; Galeati, G.
2014-12-01
Peak discharge estimates for a given return period are of primary importance in engineering practice for risk assessment and hydraulic structure design. Different statistical methods are chosen here for the assessment of the flood frequency curve: one indirect technique based on extreme rainfall event analysis, and the Peak Over Threshold (POT) model and the Annual Maxima approach as direct techniques using river discharge data. In the framework of the indirect method, a Monte Carlo simulation approach is adopted to determine a derived frequency distribution of peak runoff using a probabilistic formulation of the SCS-CN method as the stochastic rainfall-runoff model. A Monte Carlo simulation is used to generate a sample of different runoff events from different stochastic combinations of rainfall depth, storm duration, and initial loss inputs. The distribution of the rainfall storm events is assumed to follow the GP law, whose parameters are estimated through the GEV's parameters of annual maximum data. The evaluation of the initial abstraction ratio is investigated, since it is one of the most questionable assumptions in the SCS-CN model and plays a key role in river basins characterized by high-permeability soils, mainly governed by the infiltration-excess mechanism. In order to take into account the uncertainty of the model parameters, this modified approach, which is able to revise and re-evaluate the original value of the initial abstraction ratio, is implemented. In the POT model the choice of the threshold has been an essential issue, mainly based on a compromise between bias and variance. The Generalized Extreme Value (GEV) distribution fitted to the annual maxima discharges is therefore compared with the Pareto-distributed peaks to check the suitability of the frequency-of-occurrence representation. The methodology is applied to a large dam in the Serchio river basin, located in the Tuscany Region. The application has shown that the Monte Carlo simulation technique can be a useful tool to provide a more robust estimation of the results obtained by the direct statistical methods.
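The probabilistic SCS-CN step can be sketched as follows; the curve number, initial abstraction ratio, and the Pareto-like storm generator below are placeholders, not the calibrated values for the Serchio basin:

```python
import numpy as np

rng = np.random.default_rng(42)

def scs_cn_runoff(p_mm, cn=70.0, ia_ratio=0.2):
    """SCS-CN direct runoff depth (mm) for a storm depth p_mm.

    S is the potential maximum retention; Ia = ia_ratio * S is the
    initial abstraction. cn and ia_ratio are illustrative values.
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    excess = np.maximum(p_mm - ia, 0.0)
    return excess**2 / (excess + s)

# Heavy-tailed storm depths (shape/scale are placeholders, not fitted GP params)
depths = rng.pareto(3.0, size=100_000) * 40.0 + 20.0
runoff = scs_cn_runoff(depths)

# Empirical 100-year runoff quantile if, say, 3 storms occur per year on average
q100 = np.quantile(runoff, 1.0 - 1.0 / (100 * 3))
print(f"runoff depth with ~100-yr return period: {q100:.1f} mm")
```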
Learning to select useful landmarks.
Greiner, R; Isukapalli, R
1996-01-01
To navigate effectively, an autonomous agent must be able to quickly and accurately determine its current location. Given an initial estimate of its position (perhaps based on dead-reckoning) and an image taken of a known environment, our agent first attempts to locate a set of landmarks (real-world objects at known locations), then uses their angular separation to obtain an improved estimate of its current position. Unfortunately, some landmarks may not be visible, or worse, may be confused with other landmarks, resulting both in time wasted searching for the undetected landmarks and in further errors in the agent's estimate of its position. To address these problems, we propose a method that uses previous experiences to learn a selection function that, given the set of landmarks that might be visible, returns the subset that can be used to reliably provide an accurate registration of the agent's position. We use statistical techniques to prove that the learned selection function is, with high probability, effectively at a local optimum in the space of such functions. This paper also presents empirical evidence, using real-world data, that demonstrates the effectiveness of our approach.
NASA Astrophysics Data System (ADS)
Esrael, D.; Kacem, M.; Benadda, B.
2017-07-01
We investigate how the simulation of the venting/soil vapour extraction (SVE) process is affected by the mass transfer coefficient, using a model comprising five partial differential equations describing gas flow and mass conservation of phases and including an expression accounting for soil saturation conditions. In doing so, we test five previously reported equations for estimating the non-aqueous phase liquid (NAPL)/gas initial mass transfer coefficient and evaluate an expression that uses a reference NAPL saturation. Four venting/SVE experiments utilizing a sand column are performed with dry and non-saturated sand at low and high flow rates, and the obtained experimental results are subsequently simulated, revealing that hydrodynamic dispersion cannot be neglected in the estimation of the mass transfer coefficient, particularly in the case of low velocities. Among the tested models, only the analytical solution of a convection-dispersion equation and the equation proposed herein are suitable for correctly modelling the experimental results, with the developed model representing the best choice for correctly simulating the experimental results and the tailing part of the extracted gas concentration curve.
Fatigue Life Estimation under Cumulative Cyclic Loading Conditions
NASA Technical Reports Server (NTRS)
Kalluri, Sreeramesh; McGaw, Michael A; Halford, Gary R.
1999-01-01
The cumulative fatigue behavior of a cobalt-base superalloy, Haynes 188, was investigated at 760 °C in air. Initially, strain-controlled tests were conducted on solid cylindrical gauge-section specimens of Haynes 188 under fully reversed as well as tensile and compressive mean strain conditions. Fatigue data from these tests were used to establish the baseline fatigue behavior of the alloy with 1) a total strain range type fatigue life relation and 2) the Smith-Watson-Topper (SWT) parameter. Subsequently, two load-level multi-block fatigue tests were conducted on similar specimens of Haynes 188 at the same temperature. Fatigue lives of the multi-block tests were estimated with 1) the Linear Damage Rule (LDR) and 2) the nonlinear Damage Curve Approach (DCA), both with and without consideration of the mean stresses generated during the cumulative fatigue tests. Fatigue life predictions by the nonlinear DCA were much closer to the experimentally observed lives than those obtained by the LDR. In the presence of mean stresses, the SWT parameter estimated the fatigue lives more accurately under tensile conditions than under compressive conditions.
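The two damage rules can be made concrete with their textbook forms (the LDR is Miner's linear summation; the DCA shown is the Manson-Halford formulation, which may differ in detail from the variant used in the paper); the block lives below are invented:

```python
def remaining_life_ldr(n1, N1, N2):
    """Linear Damage Rule (Miner): damage fractions add linearly,
    so n2/N2 = 1 - n1/N1."""
    return N2 * (1.0 - n1 / N1)

def remaining_life_dca(n1, N1, N2, exponent=0.4):
    """Manson-Halford Damage Curve Approach:
    n2/N2 = 1 - (n1/N1) ** ((N1/N2) ** exponent)."""
    return N2 * (1.0 - (n1 / N1) ** ((N1 / N2) ** exponent))

# Illustrative two-block high-low test: N1 = 1e3 cycles, N2 = 1e5 cycles,
# with half of the first-block life consumed.
n1, N1, N2 = 500, 1.0e3, 1.0e5
print(f"LDR remaining life: {remaining_life_ldr(n1, N1, N2):,.0f} cycles")
print(f"DCA remaining life: {remaining_life_dca(n1, N1, N2):,.0f} cycles")
# For high-low sequences the DCA predicts a much shorter remaining life,
# consistent with its better agreement with experiments noted above.
```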
Development of the Contact Lens User Experience: CLUE Scales
Wirth, R. J.; Edwards, Michael C.; Henderson, Michael; Henderson, Terri; Olivares, Giovanna; Houts, Carrie R.
2016-01-01
Purpose: The field of optometry has become increasingly interested in patient-reported outcomes, reflecting a common trend occurring across the spectrum of healthcare. This article reviews the development of the Contact Lens User Experience (CLUE) system, designed to assess patient evaluations of contact lenses. CLUE was built using modern psychometric methods such as factor analysis and item response theory. Methods: The qualitative process through which relevant domains were identified is outlined, as well as the process of creating initial item banks. Psychometric analyses were conducted on the initial item banks and refinements were made to the domains and items. Following this data-driven refinement phase, a second round of data was collected to further refine the items and obtain final item response theory item parameter estimates. Results: Extensive qualitative work identified three key areas patients consider important when describing their experience with contact lenses. Based on item content and psychometric dimensionality assessments, the developing CLUE instruments were ultimately focused around four domains: comfort, vision, handling, and packaging. Item response theory parameters were estimated for the CLUE item banks (377 items), and the resulting scales were found to provide precise and reliable assignment of scores detailing users' subjective experiences with contact lenses. Conclusions: The CLUE family of instruments, as it currently exists, exhibits excellent psychometric properties. PMID:27383257
Kinetics of MDR Transport in Tumor-Initiating Cells
Koshkin, Vasilij; Yang, Burton B.; Krylov, Sergey N.
2013-01-01
Multidrug resistance (MDR) driven by ABC (ATP binding cassette) membrane transporters is one of the major causes of treatment failure in human malignancy. MDR capacity is thought to be unevenly distributed among tumor cells, with higher capacity residing in tumor-initiating cells (TIC), though opposite findings are occasionally reported. Functional evidence for enhanced MDR of TICs was previously provided using a "side population" assay. This assay estimates MDR capacity by a single parameter: the cell's ability to retain a fluorescent MDR substrate, so that cells with high MDR capacity (the "side population") demonstrate low substrate retention. In the present work, MDR in TICs was investigated in greater detail using a kinetic approach, which monitors MDR efflux from single cells. Analysis of the obtained kinetic traces allowed for the estimation of both the velocity (Vmax) and affinity (KM) of MDR transport in single cells. In this way it was shown that activation of MDR in TICs occurs in two ways: through an increase of Vmax in one fraction of cells, and through a decrease of KM in another fraction. In addition, the kinetic data showed that the heterogeneity of MDR parameters in TICs significantly exceeds that of bulk cells. Potential consequences of these findings for chemotherapy are discussed. PMID:24223908
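The velocity and affinity estimation can be illustrated with a Michaelis-Menten fit, the usual model for carrier-mediated efflux; the data and "true" parameters below are synthetic, not single-cell traces from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def efflux_rate(s, vmax, km):
    """Michaelis-Menten form commonly used for carrier-mediated transport."""
    return vmax * s / (km + s)

# Synthetic single-cell efflux data (substrate level vs. efflux velocity);
# the parameters used to generate it are placeholders.
rng = np.random.default_rng(1)
s = np.linspace(0.1, 10.0, 25)                      # intracellular substrate (a.u.)
v = efflux_rate(s, 2.0, 1.5) * (1 + 0.05 * rng.standard_normal(s.size))

(vmax_hat, km_hat), _ = curve_fit(efflux_rate, s, v, p0=(1.0, 1.0))
print(f"Vmax ~ {vmax_hat:.2f}, KM ~ {km_hat:.2f}")
# A raised Vmax vs. a lowered KM distinguish the two activation modes
# reported for TICs above.
```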
NASA Astrophysics Data System (ADS)
Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani
2014-05-01
Rapid debris flows initiated by rainfall-induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and run-out paths of debris flows depend on the volume, composition, and initiation zone of the released material, and knowing these is a prerequisite for accurate debris flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering' (CHLT) model, which computes the timing, location, and volume of landslides, with simple approaches to estimate debris flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape-scale estimates. In a next step, debris flow paths were computed for landslides predicted with the CHLT model over a certain range of soil properties to explore their effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris flow hazards. The additional information provided by the CHLT model concerning the location, shape, soil type, and water content of the released mass may also be incorporated into more advanced runout models to improve the prediction of the extent and impact of such abruptly released masses.
NASA Technical Reports Server (NTRS)
Chovit, A. R.; Lieberman, P.; Freeman, D. E.; Beggs, W. C.; Millavec, W. A.
1980-01-01
Carbon fiber sampling instruments were developed: passive collectors made of sticky bridal veil mesh, and active instruments using a light emitting diode (LED) source. These instruments measured the number or number rate of carbon fibers released from carbon/graphite composite material when the material was burned in a 10.7 m (35 ft) diameter JP-4 pool fire for approximately 20 minutes. The instruments were placed in an array suspended from a 305 m by 305 m (1000 ft by 1000 ft) Jacob's Ladder net held vertically aloft by balloons and oriented crosswind approximately 140 meters downwind of the pool fire. Three tests were conducted during which released carbon fiber data were acquired. These data were reduced and analyzed to obtain the characteristics of the released fibers including their spatial and size distributions and estimates of the number and total mass of fibers released. The results of the data analyses showed that 2.5×10^8 to 3.5×10^8 single carbon fibers were released during the 20 minute burn of 30 to 50 kg mass of initial, unburned carbon fiber material. The mass released as single carbon fibers was estimated to be between 0.1 and 0.2% of the initial, unburned fiber mass.
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
Gillespie, David C; Bowen, Audrey; Foster, Jonathan K
2012-01-01
Comparing current with estimated premorbid performance helps identify acquired cognitive deficits after brain injury. Tests of reading pronunciation, often used to measure premorbid ability, are inappropriate for stroke patients with motor speech problems. The Spot-the-Word Test (STWT), a measure of lexical decision, offers an alternative approach for estimating premorbid capacity in those with speech problems. However, little is known about the STWT's reliability. In the present study, a consecutive sample of right-hemisphere stroke (RHS) patients (n = 56) completed the STWT at 4 and 16 weeks poststroke. A control group, individually matched to the patients for age and initial STWT score, also completed the STWT on two occasions. More than 80% of patients had STWT scores at retest within 2 scaled score points of their initial score, suggesting that the STWT is a reliable measure for most individuals with RHS. However, RHS patients had significantly greater score change than controls. Limits of agreement analysis revealed that approximately 1 in 7 patients obtained abnormally large STWT score improvements at retest. It is concluded that although the STWT is a useful assessment tool for stroke clinicians, this instrument may significantly underestimate premorbid level of ability in approximately 14% of stroke patients.
CONSTRAINTS ON THE PHYSICAL PROPERTIES OF MAIN BELT COMET P/2013 R3 FROM ITS BREAKUP EVENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirabayashi, Masatoshi; Sánchez, Diego Paul; Gabriel, Travis
2014-07-01
Jewitt et al. recently reported that main belt comet P/2013 R3 experienced a breakup, probably due to rotational disruption, with its components separating on mutually hyperbolic orbits. We propose a technique for constraining physical properties of the proto-body, especially the initial spin period and cohesive strength, as a function of the body's estimated size and density. The breakup conditions are developed by combining the mutual orbit dynamics of the smaller components and the failure condition of the proto-body. Given a proto-body with a bulk density ranging from 1000 kg m^-3 to 1500 kg m^-3 (a typical range for the bulk density of C-type asteroids), we obtain possible values of the cohesive strength (40-210 Pa) and the initial spin period (0.48-1.9 hr). From this result, we conclude that although the proto-body could have been a rubble pile, it was likely spinning beyond its gravitational binding limit and would have needed cohesive strength to hold itself together. Additional observations of P/2013 R3 will enable stronger constraints on this event, and the present technique will be able to give more precise estimates of its internal structure.
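As a rough cross-check of the claim that the proto-body spun beyond its gravitational binding limit, the critical spin period of a cohesionless, self-gravitating sphere, P_crit = sqrt(3*pi/(G*rho)), can be evaluated at the quoted densities:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Shortest spin period a cohesionless self-gravitating sphere can sustain.
for rho in (1000.0, 1500.0):  # kg/m^3, the quoted C-type density range
    p_crit_hr = math.sqrt(3.0 * math.pi / (G * rho)) / 3600.0
    print(f"rho={rho:.0f} kg/m^3 -> P_crit ~ {p_crit_hr:.1f} hr")
# ~3.3 and ~2.7 hr; the inferred 0.48-1.9 hr spin periods are faster,
# consistent with the proto-body needing cohesion to hold together.
```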
Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis
Peng, Zhenyun; Zhang, Yaohui
2014-01-01
Hair is a salient feature of the human face region and is one of the important cues for face analysis. Accurate detection and presentation of the hair region is one of the key components for automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricature based on a single image is proposed. Firstly, hair regions in training images are labeled manually, and then the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is then optimized using the graph cuts technique, and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments proved that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182
Transfer of aged Pu to cattle grazing on a contaminated environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, R.O.; Engel, D.W.; Smith, D.D.
1988-03-01
Estimates are obtained of the fraction of ingested or inhaled 239+240Pu transferred to blood and tissues of a reproducing herd of beef cattle, individuals of which grazed within fenced enclosures for up to 1064 d under natural conditions with no supplemental feeding at an arid site contaminated 16 y previously with Pu oxide. The estimated (geometric mean (GM)) fraction of Pu transferred from the gastrointestinal tract to blood serum was about 5×10^-6 (geometric standard error (GSE) = 1.4) with an approximate upper bound of about 2×10^-5. These results are in reasonable agreement with the value of 1×10^-5 recommended for human radiation protection purposes by the International Commission on Radiological Protection (ICRP) for insoluble Pu oxides that are free of very small particles. Also, results from a laboratory study by Stanley (St75), in which large doses of 238Pu were orally administered daily to dairy cattle for 19 consecutive days, suggest that aged 239+240Pu at this arid grazing site may not be more biologically available to blood serum than fresh 239+240Pu oxide. The estimated fractions of 239+240Pu transferred from blood serum to tissues of adult grazing cattle were: femur (3.2×10^-2, 1.8; GM, GSE), vertebra (1.4×10^-1, 1.6), liver (2.3×10^-1, 2.0), muscle (1.3×10^-1, 1.9), female gonads (7.9×10^-5, 1.5), and kidney (1.4×10^-3, 1.7). The blood-to-tissue fractional transfers for cattle initially exposed in utero were greater than those exposed only as adults by a factor of about 4 for femur (statistically significant) and of about 2 for other tissues (not significant). The estimated (GM) fraction of inhaled Pu initially deposited in the pulmonary lung was 0.34 (GSE = 1.3) for adults and 0.15 (GSE = 1.3) for cattle initially exposed in utero (a statistically significant difference).
Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.
Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David
2008-04-01
A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
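The Lincoln-Petersen setup described here is easy to sketch; the bias-corrected Chapman form and its standard variance are shown with hypothetical counts (the actual study used program MARK and Huggins-Pledger closed-population models):

```python
def chapman_estimate(n1, n2, m2):
    """Bias-corrected Lincoln-Petersen (Chapman) estimator.

    n1: individuals identified in the hair-snag session
    n2: individuals identified at rub trees
    m2: individuals identified by both methods
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical counts, not data from the Glacier National Park project
n1, n2, m2 = 185, 120, 45
n_hat = chapman_estimate(n1, n2, m2)

# Standard Chapman variance for the same counts
var_hat = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
print(f"N_hat ~ {n_hat:.0f} bears (SE ~ {var_hat ** 0.5:.0f})")
```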
NASA Astrophysics Data System (ADS)
Jafari, Meysam; Garrison, Warren M.; Tsuzaki, Kaneaki
2014-02-01
A medium-carbon low-alloy steel was prepared with initial structures of either martensite or bainite. For both initial structures, warm caliber-rolling was conducted at 773 K (500 °C) to obtain ultrafine elongated grain (UFEG) structures with strong <110>//rolling direction (RD) fiber deformation textures. The UFEG structures consisted of spheroidal cementite particles distributed uniformly in a ferrite matrix with a transverse grain size of about 331 and 311 nm in the samples with initial martensite and bainite structures, respectively. For both initial structures, the UFEG materials had similar tensile properties, upper shelf energy (145 J), and ductile-to-brittle transition temperatures of about 98 K (-175 °C). Obtaining the martensitic structure requires more rapid cooling than is needed to obtain the bainitic structure, and this more rapid cooling promotes cracking. Since the UFEG structures obtained from initial martensitic and bainitic structures have almost identical properties, while obtaining the bainitic structure does not require the rapid cooling that promotes cracking, the use of a bainitic structure for obtaining UFEG structures should be examined further.
Estimating the global incidence of traumatic spinal cord injury.
Fitzharris, M; Cripps, R A; Lee, B B
2014-02-01
Population modelling--forecasting. To estimate the global incidence of traumatic spinal cord injury (TSCI). An initiative of the International Spinal Cord Society (ISCoS) Prevention Committee. Regression techniques were used to derive regional and global estimates of TSCI incidence. Using the findings of 31 published studies, a regression model was fitted using the known number of TSCI cases as the dependent variable and the population at risk as the single independent variable. In the process of deriving TSCI incidence, an alternative TSCI model was specified in an attempt to arrive at an optimal way of estimating the global incidence of TSCI. The global incidence of TSCI was estimated to be 23 cases per 1,000,000 persons in 2007 (179,312 cases per annum). Results for the World Health Organization's regions are provided. Understanding the incidence of TSCI is important for health service planning and for the determination of injury prevention priorities. In the absence of high-quality epidemiological studies of TSCI in each country, estimates of TSCI incidence obtained through population modelling can be used to overcome known deficits in global spinal cord injury (SCI) data. The incidence of TSCI is context specific, and an alternative regression model demonstrated how TSCI incidence estimates could be improved with additional data. The results highlight the need for data standardisation and comprehensive reporting of national-level TSCI data. A step-wise approach from the collation of conventional epidemiological data through to population modelling is suggested.
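The single-predictor regression underlying the estimate can be sketched as follows; a through-the-origin fit is one natural reading of the model, and the regional counts below are invented for illustration:

```python
import numpy as np

# (population at risk, observed TSCI cases per year) for hypothetical regions
pop = np.array([5.2e6, 1.8e7, 4.4e7, 9.0e6, 2.6e7])
cases = np.array([130, 410, 980, 200, 620])

# Least-squares slope for a through-the-origin model: cases = beta * pop
beta = (pop * cases).sum() / (pop * pop).sum()
print(f"incidence ~ {beta * 1e6:.1f} cases per million person-years")

# Scale the fitted rate up to an approximate 2007 world population
world_pop_2007 = 6.6e9
print(f"implied global burden ~ {beta * world_pop_2007:,.0f} cases per annum")
```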
Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras
NASA Astrophysics Data System (ADS)
El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid
2015-03-01
In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system, consisting of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of matching interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. At first, all projection matrices are estimated, the matches between consecutive images are detected, and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, which is more suitable for this kind of camera movement, was applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
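The triangulation of matched interest points can be illustrated with the standard linear (DLT) method; the projection matrices below are synthetic stand-ins for the ones estimated from the 3D pattern:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices
    x1, x2: matched image points (x, y) in pixels
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Toy example: two axis-aligned cameras observing the point (0, 0, 5)
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # 1 m baseline

X_true = np.array([0.0, 0.0, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))  # recovers ~[0, 0, 5]
```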
NASA Astrophysics Data System (ADS)
Bobojć, Andrzej; Drożyner, Andrzej; Rzepecka, Zofia
2017-04-01
The work compares the performance of selected geopotential models in the dynamic orbit estimation of the satellite of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission. This was realized by fitting estimated orbital arcs to the official centimeter-accuracy GOCE kinematic orbit, which is provided by the European Space Agency. The Cartesian coordinates of the kinematic orbit were treated as observations in the orbit estimation. The initial satellite state vector components were corrected in an iterative process with respect to the J2000.0 inertial reference frame, using the given geopotential model, the models describing the remaining gravitational perturbations, and the solar radiation pressure. For the obtained solutions, the RMS values of the orbital residuals were computed. These residuals result from the difference between the determined orbit and the reference one, the GOCE kinematic orbit. The performance of the selected gravity models was also determined using various orbital arc lengths. Additionally, the RMS fit values were obtained for some gravity models truncated at a given degree and order of spherical harmonic coefficients. The advantage of using the kinematic orbit is its independence from any a priori dynamical models. For the research, such GOCE-independent gravity models as HUST-Grace2016s, ITU_GRACE16, ITSG-Grace2014s, ITSG-Grace2014k, GGM05S, Tongji-GRACE01, ULUX_CHAMP2013S, ITG-GRACE2010S, EIGEN-51C, EIGEN5S, EGM2008 and EGM96 were adopted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, S.K.; Dixon, T.H.; Freymueller, J.T.
1990-04-01
Geodetic monitoring of subduction of the Nazca and Cocos plates is a goal of the CASA (Central and South America) Global Positioning System (GPS) experiments, and requires measurement of intersite distances (baselines) in excess of 500 km. The major error source in these measurements is the uncertainty in the position of the GPS satellites at the time of observation. A key aspect of the first CASA experiment, CASA Uno, was the initiation of a global network of tracking stations to minimize these errors. The authors studied the effect of using various subsets of this global tracking network on long (>100 km) baseline estimates in the CASA region. Best results were obtained with a global tracking network consisting of three U.S. fiducial stations, two sites in the southwest Pacific, and two sites in Europe. Relative to smaller subsets, this global network improved baseline repeatability, resolution of carrier phase cycle ambiguities, and formal errors of the orbit estimates. Describing baseline repeatability for horizontal components as σ = (a^2 + b^2·L^2)^(1/2), where L is baseline length, the authors obtained a = 4 and 9 mm and b = 2.8×10^-8 and 2.3×10^-8 for the north and east components, respectively, on CASA baselines up to 1,000 km in length with this global network.
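Evaluating the quoted repeatability model at the longest baselines is a one-liner; this reproduces roughly 28 mm (north) and 25 mm (east) scatter at 1,000 km:

```python
import math

def repeatability_mm(a_mm, b, length_km):
    """sigma = sqrt(a^2 + (b * L)^2), with a in mm and L converted to mm."""
    length_mm = length_km * 1.0e6
    return math.sqrt(a_mm**2 + (b * length_mm) ** 2)

# Quoted coefficients: north (a = 4 mm, b = 2.8e-8), east (a = 9 mm, b = 2.3e-8)
for name, a_mm, b in (("north", 4.0, 2.8e-8), ("east", 9.0, 2.3e-8)):
    print(f"{name}: sigma ~ {repeatability_mm(a_mm, b, 1000.0):.0f} mm at 1000 km")
```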
Pueyo Bellafont, Noèlia; Bagus, Paul S; Illas, Francesc
2015-06-07
A systematic study of the N(1s) core level binding energies (BEs) in a broad series of molecules is presented, employing Hartree-Fock (HF) and the B3LYP, PBE0, and LC-BPBE density functional theory (DFT) based methods with a near-HF basis set. The results show that all these methods give reasonably accurate BEs, with B3LYP being slightly better than HF but with both PBE0 and LC-BPBE being poorer than HF. A rigorous and general decomposition of core level binding energy values into initial and final state contributions to the BEs is proposed that can be used within either HF or DFT methods. The results show that Koopmans' theorem does not hold for the Kohn-Sham eigenvalues. Consequently, Kohn-Sham orbital energies of core orbitals do not provide estimates of the initial state contribution to core level BEs; hence, they cannot be used to decompose initial and final state contributions to BEs. However, when the initial state contribution to DFT BEs is properly defined, the decompositions of initial and final state contributions given by DFT, with several different functionals, are very similar to those obtained with HF. Furthermore, it is shown that the differences of Kohn-Sham orbital energies taken with respect to a common reference do follow the trend of the properly calculated initial state contributions. These conclusions are especially important for condensed phase systems, where our results validate the use of band structure calculations to determine initial state contributions to BE shifts.
Human exposure assessment and the National Toxicology Program.
Lucier, G W; Schecter, A
1998-01-01
The National Institute of Environmental Health Sciences/National Toxicology Program (NIEHS/NTP) is developing a new interagency initiative in exposure assessment. This initiative involves the NIEHS, the Centers for Disease Control and Prevention through its National Center for Environmental Health, the National Institute for Occupational Safety and Health, the EPA, and other participating institutes and agencies of the NTP. This initiative will benefit public health and priority setting in a number of ways. First, as discussed above, it will strengthen the scientific foundation for risk assessments by the development of more credible exposure/response relationships in people by improving cross-species extrapolation, the development of biologically based dose-response models, and the identification of sensitive subpopulations and for "margin of exposure" based estimates of risk. Second, it will provide the kind of information necessary for deciding which chemicals should be studied with the limited resources available for toxicological testing. For example, there are 85,000 chemicals in commerce today, and the NTP can only provide toxicological evaluations on 10-20 per year. Third, we would use the information obtained from the exposure initiative to focus our research on mixtures that are actually present in people's bodies. Fourth, we would obtain information on the kinds and amount of chemicals in children and other potentially sensitive subpopulations. Determinations of whether additional safety factors need to be applied to children must rest, in part, upon comparative exposure analyses between children and adults. Fifth, this initiative, taken together with the environmental genome initiative, will provide the science base essential for meaningful studies on gene/environment interactions, particularly for strengthening the evaluation of epidemiology studies. Sixth, efficacy of public health policies aimed at reducing human exposure to chemical agents could be evaluated in a more meaningful way if body burden data were available over time, including remediation around Superfund sites and efforts to achieve environmental justice. The exposure assessment initiative is needed to address public health needs. It is feasible because of recent advances in analytical technology and molecular biology, and it is an example of how different agencies can work together to better fulfill their respective missions. PMID:9755136
Estimating the costs of human space exploration
NASA Technical Reports Server (NTRS)
Mandell, Humboldt C., Jr.
1994-01-01
The plan for NASA's new exploration initiative has the following strategic themes: (1) incremental, logical evolutionary development; (2) economic viability; and (3) excellence in management. The cost estimation process is involved with all of these themes and they are completely dependent upon the engineering cost estimator for success. The purpose is to articulate the issues associated with beginning this major new government initiative, to show how NASA intends to resolve them, and finally to demonstrate the vital importance of a leadership role by the cost estimation community.
Relationship between wave energy and free energy from pickup ions in the Comet Halley environment
NASA Technical Reports Server (NTRS)
Huddleston, D. E.; Johnstone, A. D.
1992-01-01
The free energy available from the implanted heavy ion population at Comet Halley is calculated by assuming that the initial unstable velocity space ring distribution of the ions evolves toward a bispherical shell. Ultimately this free energy adds to the turbulence in the solar wind. Upstream and downstream free energies are obtained separately for the conditions observed along the Giotto spacecraft trajectory. The results indicate that the waves are mostly upstream propagating in the solar wind frame. The total free energy density always exceeds the measured wave energy density because, as expected in the nonlinear process of ion scattering, the available energy is not all immediately released. An estimate of the amount which has been released can be obtained from the measured oxygen ion distributions and again it exceeds that observed. The theoretical analysis is extended to calculate the k spectrum of the cometary-ion-generated turbulence.
Seasat microwave wind and rain observations in severe tropical and midlatitude marine storms
NASA Technical Reports Server (NTRS)
Black, P. G.; Hawkins, J. D.; Gentry, R. C.; Cardone, V. J.
1985-01-01
Initial results of studies concerning Seasat measurements in and around tropical and severe midlatitude cyclones over the open ocean are presented, together with an assessment of their accuracy and usefulness. Complementary measurements of surface wind speed and direction, rainfall rate, and sea surface temperature obtained with the Seasat-A Satellite Scatterometer (SASS), the Scanning Multichannel Microwave Radiometer (SMMR), and the Seasat SAR are analyzed. The Seasat data for Hurricanes Fico, Ella, and Greta and the QE II storm are compared with data obtained from aircraft, buoys, and ships. It is shown that the SASS-derived wind speeds are accurate to within 10 percent, and the directions to within 20 percent. In general, the SASS tends to overestimate light winds and underestimate intense winds. The errors of the SMMR-derived measurements of winds in hurricanes tend to be higher than those of the SASS-derived measurements.
Application of inertial instruments for DSN antenna pointing and tracking
NASA Technical Reports Server (NTRS)
Eldred, D. B.; Nerheim, N. M.; Holmes, K. G.
1990-01-01
The feasibility of using inertial instruments to determine the pointing attitude of the NASA Deep Space Network antennas is examined. The objective is to obtain 1 mdeg pointing knowledge in both blind pointing and tracking modes to facilitate operation of the Deep Space Network 70 m antennas at 32 GHz. A measurement system employing accelerometers, an inclinometer, and optical gyroscopes is proposed. The initial pointing attitude is established by determining the direction of the local gravity vector using the accelerometers and the inclinometer, and the Earth's spin axis using the gyroscopes. Pointing during long-term tracking is maintained by integrating the gyroscope rates and augmenting these measurements with knowledge of the local gravity vector. A minimum-variance estimator is used to combine measurements to obtain the antenna pointing attitude. A key feature of the algorithm is its ability to recalibrate accelerometer parameters during operation. A survey of available inertial instrument technologies is also given.
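The minimum-variance combination step can be sketched generically as inverse-variance weighting of independent measurements of the same quantity; the angle values and variances below are illustrative, not instrument specifications:

```python
import numpy as np

def min_variance_combine(values, variances):
    """Minimum-variance (inverse-variance weighted) fusion of
    independent measurements of the same quantity."""
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * np.asarray(values)) / np.sum(w)
    var = 1.0 / np.sum(w)
    return est, var

# Hypothetical elevation-angle estimates (mdeg) from gyro integration
# and from the gravity-vector (accelerometer/inclinometer) solution
est, var = min_variance_combine([41.2, 39.5], [4.0, 1.0])
print(f"fused estimate: {est:.2f} mdeg, sigma = {var ** 0.5:.2f} mdeg")
```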
Localization of Mobile Robots Using Odometry and an External Vision Sensor
Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina
2010-01-01
This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. PMID:22319318
An Approach to the Constrained Design of Natural Laminar Flow Airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford E.
1997-01-01
A design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. After obtaining the initial airfoil's pressure distribution at the design lift coefficient using an Euler solver coupled with an integral turbulent boundary layer method, the calculations from a laminar boundary layer solver are used by a stability analysis code to obtain estimates of the transition location (using N-Factors) for the starting airfoil. A new design method then calculates a target pressure distribution that will increase the laminar flow toward the desired amount. An airfoil design method is then iteratively used to design an airfoil that possesses that target pressure distribution. The new airfoil's boundary layer stability characteristics are determined, and this iterative process continues until an airfoil is designed that meets the laminar flow requirement and as many of the other constraints as possible.
An approach to the constrained design of natural laminar flow airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford Earl
1995-01-01
A design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. After obtaining the initial airfoil's pressure distribution at the design lift coefficient using an Euler solver coupled with an integral turbulent boundary layer method, the calculations from a laminar boundary layer solver are used by a stability analysis code to obtain estimates of the transition location (using N-Factors) for the starting airfoil. A new design method then calculates a target pressure distribution that will increase the laminar flow toward the desired amount. An airfoil design method is then iteratively used to design an airfoil that possesses that target pressure distribution. The new airfoil's boundary layer stability characteristics are determined, and this iterative process continues until an airfoil is designed that meets the laminar flow requirement and as many of the other constraints as possible.
Lång, K.; Eriksson Stenström, K.; Rosso, A.; Bech, M.; Zackrisson, S.; Graubau, D.; Mattsson, S.
2016-01-01
The purpose of this study was to perform an initial investigation of the possibility to determine breast cancer growth rate with 14C bomb-pulse dating. Tissues from 11 breast cancers, diagnosed in 1983, were retrieved from a regional biobank. The estimated average age of the majority of the samples overlapped the year of collection (1983) within 3σ. Thus, this first study of tumour tissue has not yet demonstrated that 14C bomb-pulse dating can obtain information on the growth of breast cancer. However, with further refinement, involving extraction of cell types and components, there is a possibility that fundamental knowledge of tumour biology might still be gained by the bomb-pulse technique. Additionally, δ13C and δ15N analyses were performed to obtain dietary and metabolic information, and to serve as a base for improvement of the age determination. PMID:27179119
Camera calibration based on the back projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui
2015-12-01
Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
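The back-projection step can be illustrated by intersecting a pixel's viewing ray with the calibration plane Z = 0, where the result can be compared against the ideal checkerboard corner; the camera parameters below are synthetic, not the paper's calibration data:

```python
import numpy as np

def back_project_to_plane(K, R, t, uv):
    """Intersect the viewing ray of pixel uv with the world plane Z = 0.

    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation.
    """
    cam_center = -R.T @ t                       # camera center in world frame
    ray = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    lam = -cam_center[2] / ray[2]               # solve (C + lam*ray).z = 0
    return cam_center + lam * ray

# Synthetic camera looking straight down at the plane from a height of 2 m
K = np.array([[900.0, 0.0, 320.0], [0.0, 900.0, 240.0], [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]])
t = -R @ np.array([0.0, 0.0, 2.0])              # camera center at (0, 0, 2)

uv = np.array([500.0, 100.0])
print(back_project_to_plane(K, R, t, uv))       # 3D point with Z ~ 0
```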
An entangling-probe attack on Shor's algorithm for factorization
NASA Astrophysics Data System (ADS)
Azuma, Hiroo
2018-02-01
We investigate how to attack Shor's quantum algorithm for factorization with an entangling probe. We show that an attacker can steal an exact solution of Shor's algorithm outside an institute where the quantum computer is installed if he replaces its initialized quantum register with entangled qubits, namely the entangling probe. He can apply arbitrary local operations to his own probe. Moreover, we assume that there is an unauthorized person who helps the attacker to commit a crime inside the institute. He tells garbage data obtained from measurements of the quantum register to the attacker secretly behind a legitimate user's back. If the attacker succeeds in cracking Shor's algorithm, the legitimate user obtains a random answer and does not notice the attacker's illegal acts. We discuss how to detect the attacker. Finally, we estimate a probability that the quantum algorithm inevitably makes an error, of which the attacker can take advantage.
Fiber sensor for non-contact estimation of vital bio-signs
NASA Astrophysics Data System (ADS)
Sirkis, Talia; Beiderman, Yevgeny; Agdarov, Sergey; Beiderman, Yafim; Zalevsky, Zeev
2017-05-01
Continuous noninvasive measurement of vital bio-signs, such as cardiopulmonary parameters, is an important tool in evaluating a patient's physiological condition and in health monitoring. Driven by the demand for new enabling technologies, work has been done on arterial pulse monitoring using optical fiber sensors. In this paper, we introduce a novel device based on a single-mode in-fiber Mach-Zehnder interferometer (MZI) to detect heartbeat, respiration, and pulse wave velocity (PWV). The introduced interferometer is based on a new implanted scheme: it replaces the conventional MZI, which is realized by inserting discontinuities in the fiber to break total internal reflection and scatter/collect light. The proposed fiber sensor was successfully incorporated into a shirt to produce smart clothing. Measurements could be obtained from the smart clothing in a comfortable manner, with no need for an initial calibration or direct contact between the sensor and the skin of the tested individual.
Simulating the Surface Relief of Nanoaerosols Obtained via the Rapid Cooling of Droplets
NASA Astrophysics Data System (ADS)
Tovbin, Yu. K.; Zaitseva, E. S.; Rabinovich, A. B.
2018-03-01
An approach is formulated that theoretically describes the structure of a rough surface of small aerosol particles obtained from a liquid droplet upon its rapid cooling. The problem consists of two stages. In the first stage, a concentration profile of the droplet-vapor transition region is calculated. In the second stage, local fractions of vacant sites and their pairs are found on the basis of this profile, and the rough structure of a frozen droplet surface transitioning to the solid state is calculated. Model parameters are the temperature of the initial droplet and those of the lateral interaction between droplet atoms. Information on vacant sites inside the region of transition allows us to identify adsorption centers and estimate the monolayer capacity, compared to that of the total space of the region of transition. The approach is oriented toward calculating adsorption isotherms on real surfaces.
NASA Astrophysics Data System (ADS)
Peres, David Johnny; Cancelliere, Antonino
2016-04-01
Assessment of shallow landslide hazard is important for appropriate planning of mitigation measures. Generally, the return period of slope instability is assumed as a quantitative metric to map landslide triggering hazard in a catchment. The most commonly applied approach to estimate such a return period consists in coupling a physically based landslide triggering model (hydrological and slope stability) with rainfall intensity-duration-frequency (IDF) curves. Among the drawbacks of such an approach, the following assumptions may be mentioned: (1) prefixed initial conditions, with no regard to their probability of occurrence, and (2) constant-intensity hyetographs. In our work we propose the use of a Monte Carlo simulation approach in order to investigate the effects of the two above-mentioned assumptions. The approach is based on coupling a physically based hydrological and slope stability model with a stochastic rainfall time series generator. By this methodology a long series of synthetic rainfall data can be generated and given as input to a physically based landslide triggering model, in order to compute the return period of landslide triggering as the mean inter-arrival time of a factor of safety less than one. In particular, we couple the Neyman-Scott rectangular pulses model for hourly rainfall generation and the TRIGRS v.2 unsaturated model for the computation of the transient response to individual rainfall events. Initial conditions are computed by a water table recession model that links the initial conditions at a given event to the final response at the preceding event, thus taking into account the variable inter-arrival time between storms. One thousand years of synthetic hourly rainfall are generated to estimate return periods up to 100 years. Applications are first carried out to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. Then a set of additional simulations is performed in order to compare the results obtained by the traditional IDF-based method with the Monte Carlo ones. Results indicate that both the variability of initial conditions and of intra-event rainfall intensity significantly affect return period estimation. In particular, the common assumption of an initial water table depth at the base of the pervious strata may in practice lead to an overestimation of the return period by up to one order of magnitude, while the assumption of constant-intensity hyetographs may yield an overestimation by a factor of two or three. Hence, it may be concluded that the analysed simplifications involved in the traditional IDF-based approach generally imply a non-conservative assessment of landslide triggering hazard.
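Once the synthetic series exists, the return-period computation itself is simple: the mean inter-arrival time of factor-of-safety failures. A schematic sketch, with a random stand-in for the TRIGRS output:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for the triggering model output: yearly minimum factor of safety
# (FS) over 1000 synthetic years; the lognormal parameters are illustrative.
years = 1000
fs_min = rng.lognormal(mean=0.25, sigma=0.18, size=years)

trigger_years = np.flatnonzero(fs_min < 1.0)
if trigger_years.size > 1:
    # Return period = mean inter-arrival time of FS < 1 events
    T = np.diff(trigger_years).mean()
    print(f"{trigger_years.size} triggering years -> T ~ {T:.0f} years")
```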
NASA Technical Reports Server (NTRS)
Johnson, Andrew E.; Ivanov, Tonislav I.
2011-01-01
To increase safety and land near pre-deployed resources, future NASA missions to the moon will require precision landing. A LIDAR-based terrain relative navigation (TRN) approach can achieve precision landing under any lighting conditions. This paper presents results from processing flash lidar and laser altimeter field test data that show LIDAR TRN can obtain position estimates with errors of less than 90 m while automatically detecting and eliminating incorrect measurements using internal metrics on terrain relief and data correlation. Sensitivity studies show that the algorithm suffers no degradation in matching performance with initial position uncertainties of up to 1.6 km.
Universality in the nonlinear leveling of capillary films
NASA Astrophysics Data System (ADS)
Zheng, Zhong; Fontelos, Marco A.; Shin, Sangwoo; Stone, Howard A.
2018-03-01
Many material science, coating, and manufacturing problems involve liquid films where defects that span the film thickness must be removed. Here, we study the surface-tension-driven leveling dynamics of a thin viscous film following closure of an initial hole. The dynamics of the film shape is described by a nonlinear evolution equation, for which we obtain a self-similar solution. The analytical results are verified using time-dependent numerical and experimental results for the profile shapes and the minimum film thickness at the center. The universal behavior we identify can be useful for characterizing the time evolution of the leveling process and estimating material properties from experiments.
Two approaches to the rapid screening of crystallization conditions
NASA Technical Reports Server (NTRS)
Mcpherson, Alexander
1992-01-01
A screening procedure is described for estimating conditions under which crystallization will proceed, thus providing a starting point for more careful experiments. The initial procedure uses the experimental setup of McPherson (1982) which supports 24 individual hanging drop experiments for screening variables such as the precipitant type, the pH, the temperature, and the effects of certain additives and which uses about 1 mg of protein. A second approach is proposed (which is rather hypothetical at this stage and needs a larger sample), based on the isoelectric focusing of protein samples on concentration gradients of common precipitating agents. Using this approach, crystals of concanavalin B and canavalin were obtained.
Modelling of interaction of the large disrupted meteoroid with the Earth atmosphere
NASA Astrophysics Data System (ADS)
Brykina, Irina G.
2018-05-01
A model of the atmospheric fragmentation of large meteoroids into a cloud of fragments is proposed, and a comparison with similar models used in the literature is made. An approximate analytical solution of the meteor physics equations is obtained for the mass loss of the disrupted meteoroid, the energy deposition, and the light curve normalized to the maximum brightness. This solution is applied to modelling the interaction of the Chelyabinsk meteoroid with the atmosphere. The influence of uncertainty in the initial parameters of the meteoroid on the characteristics of its interaction with the atmosphere is estimated. A comparison of the analytical solution with the observational data is made.
Global and Local Existence for the Dissipative Critical SQG Equation with Small Oscillations
NASA Astrophysics Data System (ADS)
Lazar, Omar
2015-09-01
This article is devoted to the study of the critical dissipative surface quasi-geostrophic (SQG) equation. For any initial data belonging to the space considered, we show that the critical SQG equation has at least one global weak solution in time for all 1/4 ≤ s ≤ 1/2 and at least one local weak solution in time for all 0 < s < 1/4. The proof of global existence is based on a new energy inequality which improves the one obtained in Lazar (Commun Math Phys 322:73-93, 2013), whereas the local existence uses more refined energy estimates based on Besov space techniques.