Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications
Viola, Michael B [Macomb Township, MI]; Schmieg, Steven J [Troy, MI]; Sloane, Thompson M [Oxford, MI]; Hilden, David L [Shelby Township, MI]; Mulawa, Patricia A [Clinton Township, MI]; Lee, Jong H [Rochester Hills, MI]; Cheng, Shi-Wai S [Troy, MI]
2012-05-29
A method for initiating a regeneration mode in a selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System
Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei
2018-01-01
The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, and then estimate the accelerometer bias separately, which is difficult to distinguish from gravity under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
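As a rough illustration of the gravity-refinement idea, here is a minimal Python sketch (assuming NumPy/SciPy; the function names and the toy residual are hypothetical, not the authors' code) that optimizes a 2D perturbation on the tangent space of an initial gravity estimate while keeping the known magnitude fixed:

```python
import numpy as np
from scipy.optimize import least_squares

G_MAG = 9.81  # known gravity magnitude (m/s^2)

def tangent_basis(g):
    """Two unit vectors spanning the tangent plane orthogonal to g."""
    g = g / np.linalg.norm(g)
    helper = np.array([1.0, 0.0, 0.0]) if abs(g[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    b1 = np.cross(g, helper)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(g, b1)
    return b1, b2

def refine_gravity(g0, gravity_obs):
    """Refine g0 via a 2D error state [dx, dy] on its tangent space,
    so the magnitude constraint |g| = G_MAG is enforced by construction."""
    b1, b2 = tangent_basis(g0)

    def residuals(delta):
        g = g0 + delta[0] * b1 + delta[1] * b2
        g = G_MAG * g / np.linalg.norm(g)  # re-project onto the sphere
        return (gravity_obs - g[None, :]).ravel()

    sol = least_squares(residuals, x0=np.zeros(2))
    g = g0 + sol.x[0] * b1 + sol.x[1] * b2
    return G_MAG * g / np.linalg.norm(g)

# Toy usage: noisy observations of the true gravity vector.
rng = np.random.default_rng(0)
g_true = G_MAG * np.array([0.05, -0.02, -1.0]) / np.linalg.norm([0.05, -0.02, -1.0])
obs = g_true + 0.05 * rng.standard_normal((50, 3))
print(refine_gravity(np.array([0.0, 0.0, -G_MAG]), obs))
```

Optimizing only two degrees of freedom instead of three is what keeps the gravity magnitude consistent with its known value.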
An Integrated Approach to Indoor and Outdoor Localization
2017-04-17
…source. A two-step process is proposed that performs an initial localization estimate, followed by particle filter-based tracking. Initial localization is performed using WiFi and image observations. For tracking we… …mapped, it is possible to use them for localization [20, 21, 22]. Haverinen et al. show that these fields could be used with a particle filter to…
NASA Technical Reports Server (NTRS)
Engelland, Shawn A.; Capps, Alan
2011-01-01
Current aircraft departure release times are based on manual estimates of aircraft takeoff times. Uncertainty in takeoff time estimates may result in missed opportunities to merge into constrained en route streams and lead to lost throughput. However, technology exists to improve takeoff time estimates by using the aircraft surface trajectory predictions that enable air traffic control tower (ATCT) decision support tools. NASA's Precision Departure Release Capability (PDRC) is designed to use automated surface trajectory-based takeoff time estimates to improve en route tactical departure scheduling. This is accomplished by integrating an ATCT decision support tool with an en route tactical departure scheduling decision support tool. The PDRC concept and prototype software have been developed, and an initial test was completed at air traffic control facilities in Dallas/Fort Worth. This paper describes the PDRC operational concept, system design, and initial observations.
NASA Astrophysics Data System (ADS)
Kompany-Zareh, Mohsen; Khoshkam, Maryam
2013-02-01
This paper describes estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA was not absorbing in the visible region of interest and thus a closure rank deficiency problem did not exist. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were applied to estimate the rate constants of the kinetic system, according to the Newton-Gauss-Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters when applying appropriate constraints and adjustable initial concentrations of reagents.
ERIC Educational Resources Information Center
Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.
2010-01-01
In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…
NASA Astrophysics Data System (ADS)
Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin
2011-12-01
Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction mapping condition. It is found that the values in phase space do not always converge to their initial values under sufficient backward iteration of the symbolic vectors, in terms of global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the existing CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performance of the initial vector estimation at different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems.
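To make the backward-iteration idea concrete, here is a deliberately simplified Python sketch for a single uncoupled logistic site (the coupling of the CML is ignored, so this is only an assumption-laden toy, not the paper's algorithm): the recorded symbolic sequence selects the preimage branch at each backward step, and the contracting inverse map recovers the initial value.

```python
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

def logistic_preimage(y, symbol):
    """Inverse of the fully chaotic logistic map; the symbol (0 = left of
    0.5, 1 = right of 0.5) selects which of the two preimages to take."""
    root = 0.5 * np.sqrt(1.0 - y)
    return 0.5 - root if symbol == 0 else 0.5 + root

def estimate_initial_value(x_final, symbols):
    """Backward-iterate from the final state using the symbolic sequence;
    contraction of the inverse map drives convergence to the true x0."""
    x = x_final
    for s in reversed(symbols):
        x = logistic_preimage(x, s)
    return x

# Forward simulation to generate data, then recovery of x0.
x0 = 0.3141
xs = [x0]
for _ in range(30):
    xs.append(logistic(xs[-1]))
symbols = [0 if x < 0.5 else 1 for x in xs[:-1]]  # symbol of each preimage
print(estimate_initial_value(xs[-1], symbols))    # ~0.3141
```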
An Optimization-Based State Estimation Framework for Large-Scale Natural Gas Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jalving, Jordan; Zavala, Victor M.
We propose an optimization-based state estimation framework to track internal space-time flow and pressure profiles of natural gas networks during dynamic transients. We find that the estimation problem is ill-posed (because of the infinite-dimensional nature of the states) and that this leads to instability of the estimator when short estimation horizons are used. To circumvent this issue, we propose moving horizon strategies that incorporate prior information. In particular, we propose a strategy that initializes the prior using steady-state information and compare its performance against a strategy that does not initialize the prior. We find that both strategies are capable of tracking the state profiles but we also find that superior performance is obtained with steady-state prior initialization. We also find that, under the proposed framework, pressure sensor information at junctions is sufficient to track the state profiles. We also derive approximate transport models and show that some of these can be used to achieve significant computational speed-ups without sacrificing estimation performance. We show that the estimator can be easily implemented in the graph-based modeling framework Plasmo.jl and use a multipipeline network study to demonstrate the developments.
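A minimal Python sketch of a moving-horizon estimate with a prior-regularization term, in the spirit of the strategy described above (scalar toy model; all names and weights are hypothetical, and the real framework is formulated in Plasmo.jl over a gas network):

```python
import numpy as np
from scipy.optimize import least_squares

def moving_horizon_estimate(y_window, x_prior, f, h, w_prior=1.0):
    """Toy moving-horizon estimator: choose the state at the start of the
    window to fit the window's measurements, with a regularization term
    pulling toward a prior (e.g. a steady-state profile)."""
    def residuals(x0):
        x, res = x0.copy(), []
        for y_k in y_window:
            res.append(h(x) - y_k)                # measurement residuals
            x = f(x)                              # propagate model one step
        res.append(w_prior * (x0 - x_prior))      # prior regularization
        return np.concatenate([np.atleast_1d(r) for r in res])
    return least_squares(residuals, x0=x_prior).x

# Toy scalar example: first-order relaxation toward 1.0, noisy readings.
f = lambda x: x + 0.1 * (1.0 - x)
h = lambda x: x
rng = np.random.default_rng(1)
truth, ys = 0.2, []
for _ in range(20):
    ys.append(truth + 0.01 * rng.standard_normal())
    truth = f(truth)
print(moving_horizon_estimate(np.array(ys), np.array([0.5]), f, h))
```

The prior term is what stabilizes short horizons: without it, the first residual block alone may not pin down an ill-posed initial state.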
Comparative assessment of techniques for initial pose estimation using monocular vision
NASA Astrophysics Data System (ADS)
Sharma, Sumant; D'Amico, Simone
2016-06-01
This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrast. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
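For readers unfamiliar with the class of algorithms being compared, a minimal sketch of coarse pose estimation from 2D-3D feature correspondences, assuming OpenCV is available (the model points, intrinsics, and use of EPnP here are illustrative stand-ins, not the specific algorithms evaluated in the paper):

```python
import numpy as np
import cv2

# Synthetic 3D model features (meters) and assumed camera intrinsics.
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Ground-truth pose used only to synthesize image observations.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.2, -0.1, 5.0])
img_pts, _ = cv2.projectPoints(model, rvec_true, tvec_true, K, None)

# Coarse initial pose from the correspondences via EPnP (no fiducials,
# no range measurements, no prior relative motion).
ok, rvec, tvec = cv2.solvePnP(model, img_pts, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
print(ok, rvec.ravel(), tvec.ravel())  # should recover the true pose
```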
Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.
Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki
2017-12-09
Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
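A minimal Python sketch of the disparity-to-transmission step and the final reconstruction, under the standard haze imaging model (the scattering coefficient, camera parameters, and random stand-in data are assumptions for illustration, not values from the paper):

```python
import numpy as np

def transmission_from_disparity(disparity, focal_px, baseline_m,
                                beta=0.8, eps=1e-6):
    """Initial transmission map from a stereo disparity map, using
    depth = f*B/disparity and the haze model t = exp(-beta * depth)."""
    depth = focal_px * baseline_m / np.maximum(disparity, eps)
    return np.exp(-beta * depth)

def defog(image, transmission, airlight, t_min=0.1):
    """Invert the haze imaging model I = J*t + A*(1 - t)."""
    t = np.maximum(transmission, t_min)[..., None]
    return (image - airlight) / t + airlight

# Toy usage with random arrays standing in for a real stereo pair.
rng = np.random.default_rng(0)
disp = rng.uniform(1.0, 32.0, (48, 64))
foggy = rng.uniform(0.3, 0.9, (48, 64, 3))
A = np.array([0.9, 0.9, 0.92])  # airlight, e.g. from color-line estimation
t = transmission_from_disparity(disp, focal_px=700.0, baseline_m=0.12)
clear = defog(foggy, t, A)
```

The clamp `t_min` prevents division blow-up at far range, which is also why iterative refinement of the transmission map matters in practice.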
Hsu, HE; Rydzak, CE; Cotich, KL; Wang, B; Sax, PE; Losina, E; Freedberg, KA; Goldie, SJ; Lu, Z; Walensky, RP
2010-01-01
Objectives We quantified the benefits (life expectancy gains) and harms (efavirenz-related teratogenicity) associated with using efavirenz in HIV-infected women of childbearing age in the United States. Methods We used data from the Women’s Interagency HIV Study in an HIV disease simulation model to estimate life expectancy in women who receive an efavirenz-based initial antiretroviral regimen compared with those who delay efavirenz use and receive a boosted protease inhibitor-based initial regimen. To estimate excess risk of teratogenic events with and without efavirenz exposure per 100,000 women, we incorporated literature-based rates of pregnancy, live births, and teratogenic events into a decision analytic model. We assumed a teratogenicity risk of 2.90 events/100 live births in women exposed to efavirenz during pregnancy and 2.68/100 live births in unexposed women. Results Survival for HIV-infected women who received an efavirenz-based initial antiretroviral therapy regimen was 0.89 years greater than for women receiving non-efavirenz-based initial therapy (28.91 vs. 28.02 years). The rate of teratogenic events was 77.26/100,000 exposed women, compared with 72.46/100,000 unexposed women. Survival estimates were sensitive to variations in treatment efficacy and AIDS-related mortality. Estimates of excess teratogenic events were most sensitive to pregnancy rates and number of teratogenic events/100 live births in efavirenz-exposed women. Conclusions Use of non-efavirenz-based initial antiretroviral therapy in HIV-infected women of childbearing age may reduce life expectancy gains from antiretroviral treatment, but may also prevent teratogenic events. Decision-making regarding efavirenz use presents a tradeoff between these two risks; this study can inform discussions between patients and health care providers. PMID:20561082
Battery state-of-charge estimation using approximate least squares
NASA Astrophysics Data System (ADS)
Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.
2015-03-01
In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
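The re-initialization idea can be illustrated with a deliberately simplified Python sketch (assuming a v = EMF − i·R internal-resistance model and a made-up OCV curve; the paper's actual approximate-least-squares EMF predictor is more elaborate):

```python
import numpy as np

def estimate_emf(voltage, current):
    """Least-squares fit of v = emf - i * R_int over a measurement window;
    a simplified stand-in for the paper's EMF prediction."""
    A = np.column_stack([np.ones_like(current), -current])
    emf, r_int = np.linalg.lstsq(A, voltage, rcond=None)[0]
    return emf, r_int

def soc_from_emf(emf, ocv_soc_pts, ocv_v_pts):
    """Invert an (assumed) open-circuit-voltage curve by interpolation."""
    return np.interp(emf, ocv_v_pts, ocv_soc_pts)

def coulomb_count(soc0, current_a, dt_s, capacity_ah):
    """Coulomb counting re-initialized at soc0 (discharge positive)."""
    return soc0 - np.cumsum(current_a) * dt_s / (capacity_ah * 3600.0)

# Toy OCV curve (voltage must be monotone for np.interp).
ocv_v = np.array([3.0, 3.4, 3.6, 3.8, 4.2])
ocv_soc = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
emf, _ = estimate_emf(np.array([3.55, 3.54, 3.56]), np.array([0.5, 1.0, 0.2]))
soc = coulomb_count(soc_from_emf(emf, ocv_soc, ocv_v),
                    np.full(10, 0.3), dt_s=1.0, capacity_ah=2.5)
print(soc[:3])
```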
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC-based stochastic method with an iterative Gauss-Newton-based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton-based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values, and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC-based inversion method provides extensive global information on the unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC-based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
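A minimal random-walk Metropolis sketch in Python for the Cole-Cole model (the proposal scales, priors, noise level, and synthetic data are assumptions; the paper's MCMC implementation is not reproduced here):

```python
import numpy as np

def cole_cole(omega, rho0, m, tau, c):
    """Cole-Cole complex resistivity model."""
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

def log_like(theta, omega, d_obs, sigma):
    rho0, m, tau, c = theta
    if rho0 <= 0 or not (0.0 < m < 1.0) or tau <= 0 or not (0.0 < c < 1.0):
        return -np.inf  # flat prior with physical bounds
    d = cole_cole(omega, rho0, m, tau, c)
    r = np.concatenate([d.real - d_obs.real, d.imag - d_obs.imag])
    return -0.5 * np.sum((r / sigma) ** 2)

def metropolis(theta0, omega, d_obs, sigma, steps=20000, rel_step=0.02):
    """Random-walk Metropolis; the chain is insensitive to theta0."""
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, float)
    step = rel_step * np.abs(theta)          # fixed per-parameter steps
    ll = log_like(theta, omega, d_obs, sigma)
    chain = np.empty((steps, 4))
    for k in range(steps):
        prop = theta + step * rng.standard_normal(4)
        llp = log_like(prop, omega, d_obs, sigma)
        if np.log(rng.random()) < llp - ll:  # symmetric proposal
            theta, ll = prop, llp
        chain[k] = theta
    return chain

# Synthetic SIP data and sampling.
omega = 2 * np.pi * np.logspace(-2, 3, 30)
d_true = cole_cole(omega, rho0=100.0, m=0.3, tau=0.01, c=0.5)
rng = np.random.default_rng(1)
d_obs = d_true + 0.5 * (rng.standard_normal(30) + 1j * rng.standard_normal(30))
chain = metropolis([80.0, 0.2, 0.02, 0.4], omega, d_obs, sigma=0.5)
print(chain[len(chain) // 2:].mean(axis=0))  # posterior means after burn-in
```

Marginal histograms of the second half of the chain give the distribution functions the abstract refers to.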
Prioritizing Scientific Initiatives.
ERIC Educational Resources Information Center
Bahcall, John N.
1991-01-01
Discussed is the way in which a limited number of astronomy research initiatives were chosen and prioritized based on a consensus of members from the Astronomy and Astrophysics Survey Committee. A list of recommended equipment initiatives and estimated costs is provided. (KR)
NASA Astrophysics Data System (ADS)
Fan, Jishan; Li, Fucai; Nakamura, Gen
2018-06-01
In this paper we continue our study on the establishment of uniform estimates of strong solutions with respect to the Mach number and the dielectric constant to the full compressible Navier-Stokes-Maxwell system in a bounded domain Ω ⊂ ℝ³. In Fan et al. (Kinet Relat Models 9:443-453, 2016), the uniform estimates were obtained for large initial data in a short time interval. Here we show that the uniform estimates hold globally if the initial data are small. Based on these uniform estimates, we obtain the convergence of the full compressible Navier-Stokes-Maxwell system to the incompressible magnetohydrodynamic equations for well-prepared initial data.
ERIC Educational Resources Information Center
Cheadle, Allen; Schwartz, Pamela M.; Rauzon, Suzanne; Bourcier, Emily; Senter, Sandra; Spring, Rebecca; Beery, William L.
2013-01-01
When planning and evaluating community-level initiatives focused on policy and environment change, it is useful to have estimates of the impact on behavioral outcomes of particular strategies (e.g., building a new walking trail to promote physical activity). We have created a measure of estimated strategy-level impact--"population dose"--based on…
Inertial sensor-based smoother for gait analysis.
Suh, Young Soo
2014-12-17
An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimation than filter-based algorithms by using all of the sensor data instead of only the data available up to the current time. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain an initial foot motion estimate. In the second part, the error in the initial estimate is compensated using a smoother, where the problem is formulated as a quadratic optimization problem. An efficient solution of the quadratic optimization problem is given by exploiting its sparse structure. Through experiments, it is shown that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is moving off the floor: the z-axis position squared error sum (total time: 3.47 s) while the foot is in the air is 0.0807 m² for the Kalman filter versus 0.0020 m² for the proposed smoother.
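The "quadratic optimization with sparse structure" step can be illustrated with a toy Python smoother (the cost function below, a second-difference penalty around the filter output, is an assumed stand-in for the paper's formulation):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def quadratic_smoother(x_filtered, lam=100.0):
    """Off-line correction as a quadratic program: stay near the filter
    estimate while penalizing the second difference (acceleration) of the
    corrected trajectory. The normal equations are sparse and banded, so
    the solve stays fast even for long recordings."""
    n = x_filtered.size
    D2 = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    A = sp.eye(n) + lam * (D2.T @ D2)
    return spla.spsolve(A.tocsc(), x_filtered)

# Toy usage: smooth a noisy stand-in for the Kalman filter output.
t = np.linspace(0.0, 3.47, 348)
x_kf = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
x_smooth = quadratic_smoother(x_kf)
```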
New learning based super-resolution: use of DWT and IGMRF prior.
Gajjar, Prakash P; Joshi, Manjunath V
2010-05-01
In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low spatial resolution test image and a database consisting of low and high spatial resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT) based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since the super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an Inhomogeneous Gaussian Markov random field (IGMRF) and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting the experiments on gray scale as well as on color images. The method is compared with the standard interpolation technique and also with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks, remote surveillance where the memory, the transmission bandwidth, and the camera cost are the main constraints.
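A minimal sketch of the MAP-by-gradient-descent step in Python, with two stated simplifications: the decimation operator is omitted (deblurring only) and a smooth quadratic (Laplacian) prior stands in for the IGMRF prior, so this is only an assumption-laden illustration of the optimization, not the paper's method:

```python
import numpy as np

def map_restore(y, psf_fft, lam=0.01, eta=1.0, iters=300):
    """Gradient descent on the MAP cost ||y - h*x||^2 + lam*||grad x||^2,
    with circular convolution handled in the Fourier domain."""
    X = np.fft.fft2(y)  # initial estimate: the degraded image itself
    Y = np.fft.fft2(y)
    n1, n2 = y.shape
    # Frequency response of the discrete Laplacian penalty.
    w1 = 2 * np.pi * np.fft.fftfreq(n1)[:, None]
    w2 = 2 * np.pi * np.fft.fftfreq(n2)[None, :]
    L = (2 - 2 * np.cos(w1)) + (2 - 2 * np.cos(w2))
    for _ in range(iters):
        grad = np.conj(psf_fft) * (psf_fft * X - Y) + lam * L * X
        X = X - eta * grad
    return np.real(np.fft.ifft2(X))

# Toy usage: blur with a centered 5x5 box kernel, then restore.
img = np.random.default_rng(0).uniform(size=(64, 64))
psf = np.zeros((64, 64)); psf[:5, :5] = 1.0 / 25.0
H = np.fft.fft2(np.roll(psf, (-2, -2), axis=(0, 1)))
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(img)))
restored = map_restore(blurred, H)
```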
Multiple scene attitude estimator performance for LANDSAT-1
NASA Technical Reports Server (NTRS)
Rifman, S. S.; Monuki, A. T.; Shortwell, C. P.
1979-01-01
Initial results are presented to demonstrate the performance of a linear sequential estimator (Kalman filter) used to estimate a LANDSAT 1 spacecraft attitude time series defined for four scenes. With the revised estimator, a GCP-poor scene (a scene with no usable geodetic control points (GCPs)) can be rectified to higher accuracy than otherwise possible by using GCPs in adjacent scenes. Attitude estimation errors were determined by the use of GCPs located in the GCP-poor test scene but not used to update the Kalman filter. Initial results indicate that errors of 500 m (rms) can be attained for the GCP-poor scenes. Operational factors for various scenarios are also discussed.
NASA Technical Reports Server (NTRS)
Wasilewski, P. J.; Obryan, M. V.
1994-01-01
The topics discussed include the following: chondrule magnetic properties; chondrules from the same meteorite; and REM values (the ratio of the initially measured remanence to the saturation remanence acquired in a 1 Tesla field). The preliminary field estimates for the chondrules' magnetizing environments range from minimal to at least several mT. These estimates are based on REM values and on comparing the thermal demagnetization characteristics of the initially measured (natural) remanence with demagnetization of the saturation remanence acquired in a 1 Tesla field.
Estimating mangrove in Florida: trials monitoring rare ecosystems
Mark J. Brown
2015-01-01
Mangrove species are keystone components in coastal ecosystems and are the interface between forest land and sea. Yet, estimates of their area have varied widely. Forest Inventory and Analysis (FIA) data from ground-based sample plots provide one estimate of the resource. Initial FIA estimates of the mangrove resource in Florida varied dramatically from those compiled...
Setting the scene for SWOT: global maps of river reach hydrodynamic variables
NASA Astrophysics Data System (ADS)
Schumann, Guy J.-P.; Durand, Michael; Pavelsky, Tamlin; Lion, Christine; Allen, George
2017-04-01
Credible and reliable characterization of discharge from the Surface Water and Ocean Topography (SWOT) mission using the Manning-based algorithms needs a prior estimate constraining reach-scale channel roughness, base flow and river bathymetry. For some places, any one of those variables may exist locally or even regionally as a measurement, which is often only at a station, or sometimes as a basin-wide model estimate. However, to date none of those exist at the scale required for SWOT and thus need to be mapped at a continental scale. The prior estimates will be employed for producing initial discharge estimates, which will be used as starting-guesses for the various Manning-based algorithms, to be refined using the SWOT measurements themselves. A multitude of reach-scale variables were derived, including Landsat-based width, SRTM slope and accumulation area. As a possible starting point for building the prior database of low flow, river bathymetry and channel roughness estimates, we employed a variety of sources, including data from all GRDC records, simulations from the long-time runs of the global water balance model (WBM), and reach-based calculations from hydraulic geometry relationships as well as Manning's equation. Here, we present the first global maps of this prior database with some initial validation, caveats and prospective uses.
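For reference, the Manning-based initial discharge guess mentioned above amounts to a one-line formula; a small Python example with hypothetical reach values (not values from the prior database):

```python
def manning_discharge(n, area_m2, hyd_radius_m, slope):
    """Manning's equation (SI units): Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * area_m2 * hyd_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Example prior discharge for a hypothetical reach:
# roughness 0.03, flow area 450 m^2, hydraulic radius 3.2 m, slope 1e-4.
q0 = manning_discharge(n=0.03, area_m2=450.0, hyd_radius_m=3.2, slope=1e-4)
print(f"initial discharge guess: {q0:.0f} m^3/s")  # ~326 m^3/s
```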
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT)-specific parameters at sites with measurement data such as NEE, followed by application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters significantly improve land surface model predictions at independent verification sites and for independent verification periods, so their potential for upscaling is demonstrated. However, simulation results also indicate that the estimated parameters may mask other model errors, which would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
A Track Initiation Method for the Underwater Target Tracking Environment
NASA Astrophysics Data System (ADS)
Li, Dong-dong; Lin, Yang; Zhang, Yao
2018-04-01
A novel efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. Current track initiation methods have three primary shortcomings: (a) they cannot effectively eliminate the disturbances caused by clutter; (b) they may yield a high false alarm probability and a low track detection probability; (c) they cannot correctly estimate the initial state of a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks, including the true track originated from the target, in order to increase the track detection probability. To decrease the false alarm probability, track pruning and track merging, built on an evaluation mechanism, are proposed to reduce the false tracks. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determining the target's existence and estimating its initial state with the least squares method. What's more, our method is fully automatic and does not require any manual input for initializing or tuning any parameter. Simulation results indicate that our new method significantly improves the performance of track initiation in the harsh underwater target tracking environment.
Gao, Wei; Liu, Yalong; Xu, Bo
2014-12-19
A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which some disadvantages such as weak observability, large initial error and measurements contaminated with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method not only achieves more accurate estimation and faster convergence than standard divided difference filtering (DDF) under conditions of weak observability and large initial error, but also exhibits robustness with respect to outlier measurements, for which standard IDDF would exhibit severe degradation in estimation accuracy. The correctness and validity of the algorithm are demonstrated through experimental results.
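The Huber M-estimation ingredient can be shown in a few lines of Python; the weight function and the iteratively reweighted toy example below are generic illustrations (the tuning constant 1.345 is the common textbook choice, not necessarily the paper's):

```python
import numpy as np

def huber_weights(residuals, sigma, k=1.345):
    """Huber M-estimation weights: weight 1 inside k*sigma (quadratic
    region), down-weighted outside (linear influence), taming outliers."""
    t = np.abs(residuals) / sigma
    return np.where(t <= k, 1.0, k / np.maximum(t, 1e-12))

# Iteratively reweighted estimate of a range bias with outliers present.
rng = np.random.default_rng(0)
ranges = 100.0 + rng.standard_normal(50)
ranges[:5] += 30.0                      # contaminating outliers
est = np.mean(ranges)
for _ in range(20):
    w = huber_weights(ranges - est, sigma=1.0)
    est = np.sum(w * ranges) / np.sum(w)
print(est)  # close to 100 despite the outliers
```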
The algorithm of motion blur image restoration based on PSF half-blind estimation
NASA Astrophysics Data System (ADS)
Chen, Da-Ke; Lin, Zhe
2011-08-01
A novel algorithm for motion-blurred image restoration based on PSF half-blind estimation with the Hough transform was introduced, on the basis of a full analysis of the principle of the TDICCD camera, to address the problem that using vertical uniform linear motion as the initial PSF estimate in the IBD algorithm leads to distortion in the restored image. Firstly, the mathematical model of image degradation was established using the a priori information of multi-frame images, and two parameters that crucially influence PSF estimation (motion blur length and angle) were set accordingly. Finally, the restored image was obtained through multiple iterations in the Fourier domain starting from the initial PSF estimate gained by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detail characteristics of the original image.
Estimation of chaotic coupled map lattices using symbolic vector dynamics
NASA Astrophysics Data System (ADS)
Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya
2010-01-01
In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic vector dynamics based method was proposed for initial condition estimation in an additive white Gaussian noise environment. The estimation precision of this method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors using the backward vector and the values estimated with different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of the coupled map lattice exactly in both noisy and noise-free cases. Therefore, we provide novel analytical techniques for understanding turbulences in coupled map lattices.
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from the forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations when the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require explicitly setting up initial and boundary conditions, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. The results show that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
NASA Astrophysics Data System (ADS)
Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa
2017-05-01
This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear in parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is deduced by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step length gains and initial conditions. The estimated parameters are updated adaptively according to the error between estimated and true state values. Since the errors of the estimated states and parameters as well as the convergence rates depend significantly on the value of step length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; therefore, it is reliable to be implemented on a real robot.
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-12-18
In this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors, and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization.
A Proficiency-Based Cost Estimate of Surface Warfare Officer On-the-Job Training
2011-12-01
established later in the chapter. b. Proficiency Gained at Initial Training: Formal training learning outcomes contribute the most to the initial… different billets call for different levels of training. Additionally, BST learning outcomes are not necessarily based on SWO PQS, and therefore… process. Without knowing BDOC learning outcomes, it is difficult to quantify proficiency-based OJT cost reductions. However, it is certain that
1982-02-01
For these data elements, Initial Milestone II values were established as the Planning Estimate (PE), with the Development Estimate (DE) to be based… development of improved forensic collection techniques for Naval Investigative Agents on ships and overseas bases. As this is a continuing program, the above… overseas bases), and continue development of improved forensic collection techniques for Naval Investigative Agents on ships and overseas bases. 4. (U) FY
Estimating the Benefits of the Air Force Purchasing and Supply Chain Management Initiative
2008-01-01
sector, known as strategic sourcing. The Customer Relationship Management initiative (CRM) provides a single customer point of contact for all… Customer Relationship Management initiative. commodity council: A term used to describe a cross-functional sourcing group charged with formulating a… initiative has four major components, all based on commercial best practices (Gabreski, 2004): commodity councils, customer relationship management
Guided filter-based fusion method for multiexposure images
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei
2016-11-01
It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range camera. A weighted-sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradient and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process reduces the noise in the initial weight maps and preserves more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of the source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of guided filter-based weight map refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
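A compact Python sketch of the refinement-and-fusion pipeline for grayscale images (the box-filter guided filter follows He et al.'s standard formulation; the radius, epsilon, and single-channel simplification are assumptions, not the paper's exact settings):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Grayscale guided filter: the output is locally a linear transform
    of the guidance image I, so refined weight maps follow its edges."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mI, mp = mean(I), mean(p)
    cov_Ip = mean(I * p) - mI * mp
    var_I = mean(I * I) - mI * mI
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return mean(a) * I + mean(b)

def fuse(images, weights, r=8, eps=1e-3):
    """Weighted-sum fusion with guided-filter-refined weight maps; each
    source image serves as the guidance for its own weight map."""
    refined = [guided_filter(img, w, r, eps) for img, w in zip(images, weights)]
    W = np.clip(np.stack(refined), 0.0, None)
    W /= W.sum(axis=0) + 1e-12           # per-pixel normalization
    return np.sum(W * np.stack(images), axis=0)
```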
Kim, Hyun Jung; Griffiths, Mansel W; Fazil, Aamir M; Lammerding, Anna M
2009-09-01
Foodborne illness contracted at food service operations is an important public health issue in Korea. In this study, the probabilities for growth of, and enterotoxin production by, Staphylococcus aureus in pork meat-based foods prepared in food service operations were estimated by Monte Carlo simulation. Data on the prevalence and concentration of S. aureus, as well as compliance with guidelines for time and temperature controls during food service operations, were collected. The growth of S. aureus was initially estimated using the U.S. Department of Agriculture's Pathogen Modeling Program. A second model based on raw pork meat was derived to compare cell number predictions. The correlation between toxin level and cell number, as well as the minimum toxin dose obtained from published data, was adopted to quantify the probability of staphylococcal intoxication. Where data gaps were found, assumptions were made based on guidelines for food service practices. Baseline risk modeling and scenario analyses were performed to indicate possible outcomes of staphylococcal intoxication under the scenarios generated from these data gaps. Staphylococcal growth was predicted during holding before and after cooking, and the highest estimated concentration (4.59 log CFU/g for the 99.9th percentile value) of S. aureus was observed in raw pork initially contaminated with S. aureus and held before cooking. The estimated probability of staphylococcal intoxication was very low using currently available data. However, scenario analyses revealed an increased possibility of staphylococcal intoxication when initial contamination levels in the raw meat were higher and holding times both before and after cooking were longer.
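The Monte Carlo structure of such an assessment can be sketched in a few lines of Python; every distribution and coefficient below is a made-up placeholder (the study's inputs come from survey data and the Pathogen Modeling Program):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Assumed input distributions standing in for the surveyed data.
log_n0 = rng.normal(1.0, 0.5, N)            # initial contamination, log CFU/g
temp_c = rng.uniform(10.0, 30.0, N)         # holding temperature, deg C
hold_h = rng.triangular(0.5, 1.0, 4.0, N)   # holding time before cooking, h

# Square-root-type growth model with hypothetical coefficients:
# sqrt(growth rate) = b * (T - Tmin), clipped to zero below Tmin.
b, t_min = 0.013, 7.0
rate = np.maximum(0.0, b * (temp_c - t_min)) ** 2   # log10 CFU/g per hour
log_n = log_n0 + rate * hold_h

print("99.9th percentile, log CFU/g:", np.percentile(log_n, 99.9))
```

Scenario analysis then amounts to rerunning the simulation with shifted input distributions (higher initial contamination, longer holding times) and comparing the tail percentiles.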
Estimation of delays and other parameters in nonlinear functional differential equations
NASA Technical Reports Server (NTRS)
Banks, H. T.; Lamm, P. K. D.
1983-01-01
A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.
ERIC Educational Resources Information Center
Squillace, Marie R.; Remsburg, Robin E.; Harris-Kojetin, Lauren D.; Bercovitz, Anita; Rosenoff, Emily; Han, Beth
2009-01-01
Purpose: This study introduces the first National Nursing Assistant Survey (NNAS), a major advance in the data available about certified nursing assistants (CNAs) and a rich resource for evidence-based policy, practice, and applied research initiatives. We highlight potential uses of this new survey using select population estimates as examples of…
The Effects of Hot Corrosion Pits on the Fatigue Resistance of a Disk Superalloy
NASA Technical Reports Server (NTRS)
Gabb, Timothy P.; Telesman, Jack; Hazel, Brian; Mourer, David P.
2009-01-01
The effects of hot corrosion pits on the low cycle fatigue life and failure modes of the disk superalloy ME3 were investigated. Low cycle fatigue specimens were subjected to hot corrosion exposures producing pits, then tested at low and high temperatures. Fatigue lives and failure initiation points were compared to those of specimens without corrosion pits. Several tests were interrupted to estimate the fraction of fatigue life at which fatigue cracks initiated at pits. Corrosion pits significantly reduced fatigue life, by 60 to 98 percent. Fatigue cracks initiated at a very small fraction of life in high temperature tests, but at higher fractions in tests at low temperature. Critical pit sizes required to promote fatigue cracking were estimated based on measurements of pits initiating cracks on fracture surfaces.
Extended Kalman Doppler tracking and model determination for multi-sensor short-range radar
NASA Astrophysics Data System (ADS)
Mittermaier, Thomas J.; Siart, Uwe; Eibert, Thomas F.; Bonerz, Stefan
2016-09-01
A tracking solution for collision avoidance in industrial machine tools based on short-range millimeter-wave radar Doppler observations is presented. At the core of the tracking algorithm there is an Extended Kalman Filter (EKF) that provides dynamic estimation and localization in real-time. The underlying sensor platform consists of several homodyne continuous wave (CW) radar modules. Based on In-phase-Quadrature (IQ) processing and down-conversion, they provide only Doppler shift information about the observed target. Localization with Doppler shift estimates is a nonlinear problem that needs to be linearized before the linear KF can be applied. The accuracy of state estimation depends highly on the introduced linearization errors, the initialization and the models that represent the true physics as well as the stochastic properties. The important issue of filter consistency is addressed and an initialization procedure based on data fitting and maximum likelihood estimation is suggested. Models for both, measurement and process noise are developed. Tracking results from typical three-dimensional courses of movement at short distances in front of a multi-sensor radar platform are presented.
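A toy Python sketch of one EKF cycle with a scalar Doppler (radial velocity) measurement, using a 2D constant-velocity model for brevity (the state dimension, noise levels, and time step are illustrative assumptions; the platform described above works in three dimensions with several sensors):

```python
import numpy as np

def h_doppler(x, sensor):
    """Radial velocity of target x = [px, py, vx, vy] seen from a
    stationary sensor position (the quantity a CW module measures)."""
    d = x[:2] - sensor
    r = np.linalg.norm(d)
    return x[2:] @ d / r

def H_jacobian(x, sensor):
    """Linearization of h_doppler about the current state estimate."""
    d, v = x[:2] - sensor, x[2:]
    r = np.linalg.norm(d)
    dh_dp = v / r - (v @ d) * d / r**3
    return np.hstack([dh_dp, d / r])[None, :]

def ekf_step(x, P, z, sensor, q=0.1, r_var=0.05, dt=0.02):
    """One predict/update cycle: constant-velocity prediction, then an
    update with a single sensor's Doppler measurement z."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt
    x, P = F @ x, F @ P @ F.T + q * np.eye(4)      # predict
    H = H_jacobian(x, sensor)
    S = float(H @ P @ H.T) + r_var                 # innovation variance
    K = (P @ H.T) / S                              # Kalman gain (4x1)
    x = x + (K * (z - h_doppler(x, sensor))).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Because each module contributes only radial velocity, localization emerges from fusing updates from several sensor positions, which is also why careful initialization matters for linearization error.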
Andrew D. Richardson; Mathew Williams; David Y. Hollinger; David J.P. Moore; D. Bryan Dail; Eric A. Davidson; Neal A. Scott; Robert S. Evans; Holly. Hughes
2010-01-01
We conducted an inverse modeling analysis, using a variety of data streams (tower-based eddy covariance measurements of net ecosystem exchange, NEE, of CO2, chamber-based measurements of soil respiration, and ancillary ecological measurements of leaf area index, litterfall, and woody biomass increment) to estimate parameters and initial carbon (C...
Initial retrieval sequence and blending strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pemwell, D.L.; Grenard, C.E.
1996-09-01
This report documents the initial retrieval sequence and the methodology used to select it. Waste retrieval, storage, pretreatment and vitrification were modeled for candidate single-shell tank retrieval sequences. Performance of the sequences was measured by a set of metrics (for example, high-level waste glass volume, relative risk and schedule). Computer models were used to evaluate estimated glass volumes, process rates, retrieval dates, and blending strategy effects. The models were based on estimates of component inventories and concentrations, sludge wash factors and timing, retrieval annex limitations, etc.
Estimate of Shock-Hugoniot Adiabat of Liquids from Hydrodynamics
NASA Astrophysics Data System (ADS)
Bouton, E.; Vidal, P.
2007-12-01
Shock states are generally obtained from shock velocity (D) and material velocity (u) measurements. In this paper, we propose a hydrodynamical method for estimating the (D-u) relation of Nitromethane from easily measured properties of the initial state. The method is based upon the differentiation of the Rankine-Hugoniot jump relations with the initial temperature considered as a variable and under the constraint of a unique nondimensional shock-Hugoniot. We then obtain an ordinary differential equation for the shock velocity D in the variable u. Upon integration, this method predicts the shock Hugoniot of liquid Nitromethane with a 5% accuracy for initial temperatures ranging from 250 K to 360 K.
NASA Technical Reports Server (NTRS)
Peters, C.; Kampe, F. (Principal Investigator)
1980-01-01
The mathematical description and implementation of the statistical estimation procedure known as the Houston integrated spatial/spectral estimator (HISSE) are discussed. HISSE is based on a normal mixture model and is designed to take advantage of the spectral and spatial information of LANDSAT data pixels, utilizing the initial classification and clustering information provided by the AMOEBA algorithm. HISSE calculates parametric estimates of class proportions which reduce the error inherent in estimates derived from the typical classify-and-count procedures common to nonparametric clustering algorithms. It also singles out spatial groupings of pixels which are most suitable for labeling classes. These calculations are designed to aid the analyst/interpreter in labeling patches with a crop class label. Finally, HISSE's initial performance on an actual LANDSAT agricultural ground truth data set is reported.
NASA Astrophysics Data System (ADS)
Jakacki, Jaromir; Golenko, Mariya
2014-05-01
Two hydrodynamical models (the Princeton Ocean Model (POM) and the Parallel Ocean Program (POP)) have been implemented for the Baltic Sea area that contains the locations of chemical munitions dumped during World War II. The models were configured from the same data sources: bathymetry, initial conditions and external forcing were based on identical data, and the horizontal resolutions of the two models are also very similar. Several simulations with different initial conditions were performed. The bottom currents from both models were compared and analyzed, and based on these results the dangerous area and critical time were estimated. Lagrangian particle tracking and a passive tracer were also implemented, and from these results the probability of dangerous doses appearing, and its time evolution, are presented. This work was performed in the frame of the MODUM project, financially supported by NATO.
Gallagher, Glenn; Zhan, Tao; Hsu, Ying-Kuang; Gupta, Pamela; Pederson, James; Croes, Bart; Blake, Donald R; Barletta, Barbara; Meinardi, Simone; Ashford, Paul; Vetter, Arnie; Saba, Sabine; Slim, Rayan; Palandre, Lionel; Clodic, Denis; Mathis, Pamela; Wagner, Mark; Forgie, Julia; Dwyer, Harry; Wolf, Katy
2014-01-21
To provide information for greenhouse gas reduction policies, the California Air Resources Board (CARB) inventories annual emissions of high-global-warming potential (GWP) fluorinated gases, the fastest growing sector of greenhouse gas (GHG) emissions globally. Baseline 2008 F-gas emissions estimates for selected chlorofluorocarbons (CFC-12), hydrochlorofluorocarbons (HCFC-22), and hydrofluorocarbons (HFC-134a) made with an inventory-based methodology were compared to emissions estimates made by ambient-based measurements. Significant discrepancies were found, with the inventory-based emissions methodology resulting in a systematic 42% under-estimation of CFC-12 emissions from older refrigeration equipment and older vehicles, and a systematic 114% overestimation of emissions for HFC-134a, a refrigerant substitute for phased-out CFCs. Initial, inventory-based estimates for all F-gas emissions had assumed that equipment is no longer in service once it reaches its average lifetime of use. Revised emission estimates using improved models for equipment age at end-of-life, inventories, and leak rates specific to California resulted in F-gas emissions estimates in closer agreement to ambient-based measurements. The discrepancies between inventory-based estimates and ambient-based measurements were reduced from -42% to -6% for CFC-12, and from +114% to +9% for HFC-134a.
Bernard R. Parresol; Charles E. Thomas
1996-01-01
In the wood utilization industry, both stem profile and biomass are important quantities. The two have traditionally been estimated separately. The introduction of a density-integral method allows for coincident estimation of stem profile and biomass, based on the calculus of mass theory, and provides an alternative to weight-ratio methodology. In the initial...
Strain measurement based battery testing
Xu, Jeff Qiang; Steiber, Joe; Wall, Craig M.; Smith, Robert; Ng, Cheuk
2017-05-23
A method and system for strain-based estimation of the state of health of a battery, from an initial state to an aged state, is provided. A strain gauge is applied to the battery. A first strain measurement is performed on the battery, using the strain gauge, at a selected charge capacity of the battery and at the initial state of the battery. A second strain measurement is performed on the battery, using the strain gauge, at the selected charge capacity of the battery and at the aged state of the battery. The capacity degradation of the battery is estimated as the difference between the first and second strain measurements divided by the first strain measurement.
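The capacity-degradation formula stated in this abstract is simple enough to express directly; a small Python example with made-up strain readings (the units and values are illustrative only):

```python
def capacity_degradation(strain_initial, strain_aged):
    """Fractional capacity degradation per the claimed formula:
    (aged strain - initial strain) / initial strain, with both
    measurements taken at the same selected charge capacity."""
    return (strain_aged - strain_initial) / strain_initial

# Example with hypothetical microstrain readings at 50% charge capacity.
print(capacity_degradation(120.0, 138.0))  # -> 0.15, i.e. 15%
```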
Kaye, Elena A; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts
2012-10-01
To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. The five phase aberration data sets, analyzed here, were calculated based on preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., "MR-guided adaptive focusing of ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734-1747 (2010)] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients' phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing initial phase correction estimate based on previous patient's data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Covariance of the pairs of the phase aberrations data sets showed high correlation between aberration data of several patients and suggested that subgroups can be based on level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of nonaberrated intensity using fewer than 170 modes of ZPs. The initial estimates based on using the average of the phase aberration data from the individual subgroups of subjects was shown to increase the intensity at the focal spot for the five subjects. The application of ZPs to phase aberration correction was shown to be beneficial for adaptive focusing of transcranial ultrasound. The skull-based phase aberrations were found to be well approximated by the number of ZP modes representing only a fraction of the number of elements in the hemispherical transducer. Implementing the initial phase aberration estimate together with Zernike-based algorithm can be used to improve the robustness and can potentially greatly increase the viability of MR-ARFI-based focusing for a clinical transcranial MRgFUS therapy.
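A minimal Python sketch of representing a phase map with a small number of Zernike modes by least squares (a generic illustration on the unit disk; the paper's encoding operates on a hemispherical transducer and is not reproduced here):

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, r):
    """Radial polynomial R_n^m (requires m >= 0 and n - m even)."""
    out = np.zeros_like(r)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k) /
             (factorial(k) * factorial((n + m) // 2 - k)
              * factorial((n - m) // 2 - k)))
        out += c * r ** (n - 2 * k)
    return out

def fit_zernike(phase, modes):
    """Least-squares Zernike coefficients of a phase map sampled on the
    unit disk; 'modes' is a list of (n, m) pairs with n - |m| even."""
    ny, nx = phase.shape
    y, x = np.mgrid[-1:1:1j * ny, -1:1:1j * nx]
    r, th = np.hypot(x, y), np.arctan2(y, x)
    mask = r <= 1.0
    cols = []
    for n, m in modes:
        R = zernike_radial(n, abs(m), r)
        Z = R * (np.cos(m * th) if m >= 0 else np.sin(-m * th))
        cols.append(Z[mask])
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, phase[mask], rcond=None)
    return coeffs

# Example: fit low-order modes to a synthetic aberration map.
rng = np.random.default_rng(0)
phase = rng.standard_normal((64, 64))
modes = [(0, 0), (1, -1), (1, 1), (2, -2), (2, 0), (2, 2)]
print(fit_zernike(phase, modes))
```

The point mirrored in the abstract is dimensionality: a few hundred coefficients can stand in for per-element phases on a transducer with many more elements.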
Kaye, Elena A.; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts
2012-01-01
Purpose: To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. Methods: The five phase aberration data sets analyzed here were calculated based on preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., “MR-guided adaptive focusing of ultrasound,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734–1747 (2010); doi:10.1109/TUFFC.2010.1612] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients’ phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing an initial phase correction estimate based on previous patients’ data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Results: Covariance of the pairs of the phase aberration data sets showed high correlation between the aberration data of several patients and suggested that subgroups can be formed based on the level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of the nonaberrated intensity using fewer than 170 modes of ZPs. The initial estimates based on the average of the phase aberration data from the individual subgroups of subjects were shown to increase the intensity at the focal spot for the five subjects. Conclusions: The application of ZPs to phase aberration correction was shown to be beneficial for adaptive focusing of transcranial ultrasound. The skull-based phase aberrations were found to be well approximated by a number of ZP modes representing only a fraction of the number of elements in the hemispherical transducer. Implementing the initial phase aberration estimate together with the Zernike-based algorithm can improve the robustness and can potentially greatly increase the viability of MR-ARFI-based focusing for clinical transcranial MRgFUS therapy. PMID:23039661
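The core idea of representing an aberration map with a truncated ZP expansion can be illustrated with a least-squares fit of a few low-order modes. This is a toy sketch on synthetic data, not the paper's Zernike-encoded MR-ARFI algorithm:

```python
import numpy as np

# Illustrative sketch: approximate a phase-aberration map over a unit disk
# with a few low-order Zernike modes by least squares. Mode choice, data,
# and noise level are all assumptions made for this example.

def zernike_low_order(r, t):
    """Columns: tip, tilt, defocus, oblique/vertical astigmatism."""
    return np.column_stack([
        r * np.cos(t),          # Z(1,1)  tip
        r * np.sin(t),          # Z(1,-1) tilt
        2 * r**2 - 1,           # Z(2,0)  defocus
        r**2 * np.cos(2 * t),   # Z(2,2)  astigmatism
        r**2 * np.sin(2 * t),   # Z(2,-2) astigmatism
    ])

rng = np.random.default_rng(0)
r = np.sqrt(rng.uniform(0, 1, 2000))      # element positions on the unit disk
t = rng.uniform(0, 2 * np.pi, 2000)
phase = 1.5 * (2 * r**2 - 1) + 0.4 * r * np.cos(t) + 0.05 * rng.normal(size=2000)

A = zernike_low_order(r, t)
coeffs, *_ = np.linalg.lstsq(A, phase, rcond=None)
print("fitted coefficients:", np.round(coeffs, 3))
print("rms residual:", (phase - A @ coeffs).std())
```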
Vehicle speed affects both pre-skid braking kinematics and average tire/roadway friction.
Heinrichs, Bradley E; Allin, Boyd D; Bowler, James J; Siegmund, Gunter P
2004-09-01
Vehicles decelerate between brake application and skid onset. To better estimate a vehicle's speed and position at brake application, we investigated how vehicle deceleration varied with initial speed during both the pre-skid and skidding intervals on dry asphalt. Skid-to-stop tests were performed from four initial speeds (20, 40, 60, and 80 km/h) using three different grades of tire (economy, touring, and performance) on a single vehicle and a single road surface. Average skidding friction was found to vary with initial speed and tire type. The post-brake/pre-skid speed loss, elapsed time, distance travelled, and effective friction were found to vary with initial speed. Based on these data, a method using skid mark length to predict vehicle speed and position at brake application rather than skid onset was shown to improve estimates of initial vehicle speed by up to 10 km/h and estimates of vehicle position at brake application by up to 8 m compared to conventional methods that ignore the post-brake/pre-skid interval. Copyright 2003 Elsevier Ltd.
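A back-of-envelope sketch of the reconstruction idea: the conventional skid-mark estimate gives speed at skid onset, and the paper's correction adds back the post-brake/pre-skid speed loss. The friction value and speed-loss figure below are placeholders, not the study's measured values:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def speed_from_skid(skid_length_m: float, mu: float) -> float:
    """Conventional estimate: speed (m/s) at skid onset from mark length."""
    return math.sqrt(2 * mu * G * skid_length_m)

def speed_at_brake_application(skid_length_m, mu, pre_skid_loss_kmh):
    """Estimate at brake application by adding the post-brake/pre-skid
    speed loss (a quantity measured in tests like those described above)."""
    v_skid_kmh = speed_from_skid(skid_length_m, mu) * 3.6
    return v_skid_kmh + pre_skid_loss_kmh

# Illustrative numbers only; mu and the pre-skid loss vary with tire and speed.
print(f"{speed_at_brake_application(25.0, 0.75, 6.0):.0f} km/h")
```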
3-D rigid body tracking using vision and depth sensors.
Gedik, O Serdar; Alatan, A Aydın
2013-10-01
In robotics and augmented reality (AR) applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, whereas trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using the intensity and shape index map data of a 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes.
NASA Astrophysics Data System (ADS)
Zheng, Qin; Yang, Zubin; Sha, Jianxin; Yan, Jun
2017-02-01
In predictability research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP has been successfully applied in estimating the lower bound of maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, using the two-dimensional Ikeda model, investigates the impacts of nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a large probability of producing local CNOPs, hence making a false estimation of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, a kind of intelligent algorithm, is introduced to solve this problem; the method using PSO to compute CNOPs is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and a long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a population-based heuristic search algorithm, it can overcome the impact of nonlinearity and the disturbance from multiple extremes of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. The results show that the estimate presented by PSO-CNOP is closer to the true value than the one by ADJ-CNOP as the forecast time increases.
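A compact sketch of the PSO-CNOP idea on the 2-D Ikeda map: search the constraint disk for the initial perturbation that maximizes the prediction error at a fixed lead time. PSO hyperparameters, the base state, and the constraint radius are illustrative assumptions:

```python
import numpy as np

U = 0.9  # standard Ikeda-map parameter

def ikeda(x, y, steps):
    for _ in range(steps):
        t = 0.4 - 6.0 / (1.0 + x**2 + y**2)
        x, y = 1 + U * (x * np.cos(t) - y * np.sin(t)), U * (x * np.sin(t) + y * np.cos(t))
    return x, y

def prediction_error(pert, base, steps):
    xb, yb = ikeda(base[0], base[1], steps)
    xp, yp = ikeda(base[0] + pert[0], base[1] + pert[1], steps)
    return np.hypot(xp - xb, yp - yb)

def pso_cnop(base, delta, steps, n_particles=40, iters=200, seed=1):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-delta, delta, (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([prediction_error(p, base, steps) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        norms = np.linalg.norm(pos, axis=1, keepdims=True)
        pos = np.where(norms > delta, pos * delta / norms, pos)  # enforce constraint
        vals = np.array([prediction_error(p, base, steps) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

cnop, err = pso_cnop(base=np.array([0.5, 0.5]), delta=0.1, steps=10)
print("approximate CNOP:", cnop, "prediction error:", err)
```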
Decay in blood loss estimation skills after web-based didactic training.
Toledo, Paloma; Eosakul, Stanley T; Goetz, Kristopher; Wong, Cynthia A; Grobman, William A
2012-02-01
Accuracy in blood loss estimation has been shown to improve immediately after didactic training. The objective of this study was to evaluate retention of blood loss estimation skills 9 months after a didactic web-based training. Forty-four participants were recruited from a cohort that had undergone web-based training and testing in blood loss estimation. The web-based posttraining test, consisting of pictures of simulated blood loss, was repeated 9 months after the initial training and testing. The primary outcome was the difference in accuracy of estimated blood loss (percent error) at 9 months compared with immediately posttraining. At the 9-month follow-up, the median error in estimation worsened to -34.6%. Although better than the pretraining error of -47.8% (P = 0.003), the 9-month error was significantly less accurate than the immediate posttraining error of -13.5% (P = 0.01). Decay in blood loss estimation skills occurs by 9 months after didactic training.
Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino
2018-02-22
CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of the intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework, applied to a CO2 industrial point source located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventorial data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, particularly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
A New Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation of an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding method is a development of the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of error (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
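The breeding cycle itself is simple to sketch. Below is a toy version on the Lorenz-63 system standing in for a mesoscale model, with QR-based orthogonalization playing the role of the ensemble transform step; window length, norm, and perturbation count are assumptions:

```python
import numpy as np

# Toy self-breeding cycle: integrate perturbations forward, rescale to a
# fixed norm, re-add, and orthogonalize via QR so the members do not all
# collapse onto the leading local Lyapunov vector.

def lorenz63_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - y) - z, x * y - beta * z])

def integrate(s, steps):
    for _ in range(steps):
        s = lorenz63_step(s)
    return s

def self_breeding(x0, n_pert=3, norm=0.5, window=50, cycles=100, seed=0):
    rng = np.random.default_rng(seed)
    perts = rng.normal(size=(n_pert, 3))
    perts *= norm / np.linalg.norm(perts, axis=1, keepdims=True)
    base = x0
    for _ in range(cycles):
        new_base = integrate(base, window)
        evolved = np.array([integrate(base + p, window) - new_base for p in perts])
        q, _ = np.linalg.qr(evolved.T)   # orthogonalize in the ensemble subspace
        perts = norm * q.T[:n_pert]      # rescale to the chosen norm
        base = new_base
    return base, perts

base, bred = self_breeding(np.array([1.0, 1.0, 20.0]))
print("bred perturbation directions:\n", np.round(bred, 3))
```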
Fast auto-focus scheme based on optical defocus fitting model
NASA Astrophysics Data System (ADS)
Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min
2018-04-01
An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Considering the basic optical defocus principle, an optical defocus fitting model is derived to approximate the potential-focus position. With this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position based on the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results prove that the proposed scheme can complete auto-focus within only 5 to 7 steps with good performance, even under low-light conditions.
Estimate of shock-Hugoniot adiabat of liquids from hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouton, E.; Vidal, P.
2007-12-12
Shock states are generally obtained from shock velocity (D) and material velocity (u) measurements. In this paper, we propose a hydrodynamical method for estimating the (D-u) relation of Nitromethane from easily measured properties of the initial state. The method is based upon the differentiation of the Rankine-Hugoniot jump relations with the initial temperature considered as a variable and under the constraint of a unique nondimensional shock-Hugoniot. We then obtain an ordinary differential equation for the shock velocity D in the variable u. Upon integration, this method predicts the shock Hugoniot of liquid Nitromethane with a 5% accuracy for initial temperatures ranging from 250 K to 360 K.
Objective estimates based on experimental data and initial and final knowledge
NASA Technical Reports Server (NTRS)
Rosenbaum, B. M.
1972-01-01
An extension of the method of Jaynes, whereby least biased probability estimates are obtained, permits such estimates to be made which account for experimental data on hand as well as prior and posterior knowledge. These estimates can be made for both discrete and continuous sample spaces. The method allows a simple interpretation of Laplace's two rules: the principle of insufficient reason and the rule of succession. Several examples are analyzed by way of illustration.
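Laplace's rule of succession, which the paper recovers from this least-biased framework, is easy to state numerically: after k successes in n trials, the estimated probability of success on the next trial is (k+1)/(n+2). A tiny demonstration:

```python
from fractions import Fraction

# Laplace's rule of succession as it falls out of a maximum-entropy
# treatment of prior knowledge plus observed data.

def rule_of_succession(k: int, n: int) -> Fraction:
    return Fraction(k + 1, n + 2)

print(rule_of_succession(0, 0))   # 1/2 -- insufficient reason, no data yet
print(rule_of_succession(9, 10))  # 10/12 = 5/6 after 9 successes in 10 trials
```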
Data assimilation method based on the constraints of confidence region
NASA Astrophysics Data System (ADS)
Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng
2018-03-01
The ensemble Kalman filter (EnKF) is a distinguished data assimilation method that is widely used and studied in various fields, including meteorology and oceanography. However, due to the limited sample size or an imprecise dynamics model, the forecast error variance is often underestimated, which further leads to the phenomenon of filter divergence. Additionally, the assimilation results of the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, a variance inflation procedure is usually adopted. In this paper, we propose a new method, called EnCR, based on the constraints of a confidence region constructed from the observations, to estimate the inflation parameter of the forecast error variance in the EnKF method. In the new method, the state estimate is more robust to both inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.
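For context, here is a sketch of a stochastic EnKF analysis step with multiplicative variance inflation. The EnCR method chooses the inflation factor from an observation-based confidence region; in this sketch the factor `infl` is simply a user-supplied parameter, and all dimensions and values are illustrative:

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, r_var, infl, rng):
    """One perturbed-observation EnKF update with multiplicative inflation.
    ensemble: (n_members, n_state); H: (n_obs, n_state); r_var: obs variance."""
    mean = ensemble.mean(axis=0)
    ensemble = mean + np.sqrt(infl) * (ensemble - mean)   # inflate the spread
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (len(ensemble) - 1)                     # sample forecast covariance
    R = r_var * np.eye(H.shape[0])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
    perturbed = y_obs + rng.normal(0, np.sqrt(r_var), (len(ensemble), H.shape[0]))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(0, 1, (20, 3))                           # 20 members, 3 states
H = np.array([[1.0, 0.0, 0.0]])                           # observe first state only
updated = enkf_analysis(ens, y_obs=np.array([0.8]), H=H, r_var=0.1, infl=1.1, rng=rng)
print("analysis mean:", updated.mean(axis=0))
```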
Automatic C-arm pose estimation via 2D/3D hybrid registration of a radiographic fiducial
NASA Astrophysics Data System (ADS)
Moult, E.; Burdette, E. C.; Song, D. Y.; Abolmaesumi, P.; Fichtinger, G.; Fallavollita, P.
2011-03-01
Motivation: In prostate brachytherapy, real-time dosimetry would be ideal to allow for rapid evaluation of the implant quality intra-operatively. However, such a mechanism requires an imaging system that is both real-time and which provides, via multiple C-arm fluoroscopy images, clear information describing the three-dimensional position of the seeds deposited within the prostate. Thus, accurate tracking of the C-arm poses proves to be of critical importance to the process. Methodology: We compute the pose of the C-arm relative to a stationary radiographic fiducial of known geometry by employing a hybrid registration framework. Firstly, by means of an ellipse segmentation algorithm and a 2D/3D feature based registration, we exploit known FTRAC geometry to recover an initial estimate of the C-arm pose. Using this estimate, we then initialize the intensity-based registration which serves to recover a refined and accurate estimation of the C-arm pose. Results: Ground-truth pose was established for each C-arm image through a published and clinically tested segmentation-based method. Using 169 clinical C-arm images and a +/-10° and +/-10 mm random perturbation of the ground-truth pose, the average rotation and translation errors were 0.68° (std = 0.06°) and 0.64 mm (std = 0.24 mm). Conclusion: Fully automated C-arm pose estimation using a 2D/3D hybrid registration scheme was found to be clinically robust based on human patient data.
ERIC Educational Resources Information Center
LoPresto, Michael C.; Hubble-Zdanowski, Jennifer
2012-01-01
The "Life in the Universe Survey" is a twelve-question assessment instrument. Largely based on the factors of the Drake equation, it is designed to survey students' initial estimates of its factors and to gauge how estimates change with instruction. The survey was used in sections of a seminar course focusing specifically on life in the universe…
Huizinga, Richard J.
2014-01-01
The rainfall-runoff pairs from the storm-specific GUH analysis were further analyzed against various basin and rainfall characteristics to develop equations to estimate the peak streamflow and flood volume based on a quantity of rainfall on the basin.
Lin, Chi-Yueh; Wang, Hsiao-Chuan
2011-07-01
The voice onset time (VOT) of a stop consonant is the interval between its burst onset and voicing onset. Among a variety of research topics on VOT, one that has been studied for years is how VOTs are efficiently measured. Manual annotation is a feasible way, but it becomes a time-consuming task when the corpus size is large. This paper proposes an automatic VOT estimation method based on an onset detection algorithm. At first, a forced alignment is applied to identify the locations of stop consonants. Then a random forest based onset detector searches each stop segment for its burst and voicing onsets to estimate a VOT. The proposed onset detection can detect the onsets in an efficient and accurate manner with only a small amount of training data. The evaluation data extracted from the TIMIT corpus were 2344 words with a word-initial stop. The experimental results showed that 83.4% of the estimations deviate less than 10 ms from their manually labeled values, and 96.5% of the estimations deviate by less than 20 ms. Some factors that influence the proposed estimation method, such as place of articulation, voicing of a stop consonant, and quality of succeeding vowel, were also investigated. © 2011 Acoustical Society of America
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm, the CEL-EM, for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM. The novelty of the CEL-EM is that it is applicable to the estimation of constraint-based models with many constraints and large numbers of parameters that are usually estimated with EM, such as the HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed, using a constraint-based EA together with EM for better estimation of the HMM in ASR. The first uses a traditional constraint-handling mechanism of EA. The other version transforms the constrained optimization problem into an unconstrained problem using Lagrange multipliers. The fusion strategies for the CEL-EM use a staged-fusion approach, where EM is plugged into the EA periodically, after the EA has executed for a specific period of time, to maintain the global sampling capabilities of the EA within the hybrid algorithm. A variable initialization approach (VIA) has been proposed, using variable segmentation, to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).
Quick estimate of oil discovery from gas-condensate reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarem, A.M.
1966-10-24
A quick method of estimating the depletion performance of gas-condensate reservoirs is presented by graphical representations. The method is based on correlations reported in the literature and expresses recoverable liquid as a function of gas reserves, producing gas-oil ratio, and initial and final reservoir pressures. The amount of recoverable liquid reserves (RLR) under depletion conditions is estimated from an equation which is given. Where the liquid reserves are in stock-tank barrels and the gas reserves are in Mcf, the arbitrary constant N is calculated from one graphical representation by dividing the fractional oil recovery by the initial gas-oil ratio and multiplying by 10^6 for convenience. An equation is given for estimating the coefficient C. These factors (N and C) can be determined from the graphical representations. An example calculation is included.
Automated startup of the MIT research reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwok, K.S.
1992-01-01
This summary describes the development, implementation, and testing of a generic method for performing automated startups of nuclear reactors described by space-independent kinetics under conditions of closed-loop digital control. The technique entails first obtaining a reliable estimate of the reactor's initial degree of subcriticality and then substituting that estimate into a model-based control law so as to permit a power increase from subcritical on a demanded trajectory. The estimation of subcriticality is accomplished by application of the perturbed reactivity method. The shutdown reactor is perturbed by the insertion of reactivity at a known rate. Observation of the resulting period permits determination of the initial degree of subcriticality. A major advantage of this method is that repeated estimates are obtained of the same quantity. Hence, statistical methods can be applied to improve the quality of the calculation.
A theoretical framework to predict the most likely ion path in particle imaging.
Collins-Fekete, Charles-Antoine; Volz, Lennart; Portillo, Stephen K N; Beaulieu, Luc; Seco, Joao
2017-03-07
In this work, a generic rigorous Bayesian formalism is introduced to predict the most likely path of any ion crossing a medium between two detection points. The path is predicted based on a combination of the particle scattering in the material and measurements of its initial and final position, direction and energy. The path estimate's precision is compared to the Monte Carlo simulated path. Every ion from hydrogen to carbon is simulated in two scenarios, (1) where the range is fixed and (2) where the initial velocity is fixed. In the scenario where the range is kept constant, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.50 mm) and the helium path estimate (0.18 mm), but less so up to the carbon path estimate (0.09 mm). However, this scenario is identified as the configuration that maximizes the dose while minimizing the path resolution. In the scenario where the initial velocity is fixed, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.29 mm) and the helium path estimate (0.09 mm) but increases for heavier ions up to carbon (0.12 mm). As a result, helium is found to be the particle with the most accurate path estimate for the lowest dose, potentially leading to tomographic images of higher spatial resolution.
NASA Astrophysics Data System (ADS)
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
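The exponential variant of this approach amounts to fitting CMD(t) = M0·k·exp(−k·t), so the fitted M0 is directly the initial-mass estimate. A hedged sketch on synthetic data standing in for a real SVE record:

```python
import numpy as np
from scipy.optimize import curve_fit

# Exponential mass-depletion fit: dm/dt = -k*m implies
# CMD(t) = M0*k*exp(-k*t). The record below is synthetic and illustrative.

def cmd_exponential(t, m0, k):
    return m0 * k * np.exp(-k * t)

rng = np.random.default_rng(2)
t = np.linspace(0, 36, 37)                      # months of SVE operation
true_m0, true_k = 5000.0, 0.12                  # kg, 1/month (assumed values)
cmd = cmd_exponential(t, true_m0, true_k) * rng.lognormal(0, 0.1, t.size)

# Early-time application: use only the first third of the record, as in the study.
n_early = t.size // 3
(m0_hat, k_hat), _ = curve_fit(cmd_exponential, t[:n_early], cmd[:n_early],
                               p0=(cmd[0] / 0.1, 0.1))
print(f"initial-mass estimate from early data: {m0_hat:.0f} kg (true {true_m0:.0f})")
```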
First Attempt of Orbit Determination of SLR Satellites and Space Debris Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Deleflie, F.; Coulot, D.; Descosta, R.; Fernier, A.; Richard, P.
2013-08-01
We present an orbit determination method based on genetic algorithms. Contrary to usual estimation methods, which are mainly based on least squares, these algorithms do not require any a priori knowledge of the initial state vector to be estimated. They can be applied when a new satellite is launched or for uncatalogued objects that appear in images obtained from robotic telescopes such as the TAROT ones. We show in this paper preliminary results obtained for an SLR satellite, for which tracking data acquired by the ILRS network make it possible to build accurate orbital arcs at the few-centimeter level, which can be used as a reference orbit; in this case, the basic observations are made up of time series of ranges obtained from various tracking stations. We show as well the results obtained from observations acquired by the two TAROT telescopes on the Telecom-2D satellite operated by CNES; in that case, the observations are made up of time series of azimuths and elevations, seen from the two TAROT telescopes. The method is carried out in several steps: (i) an analytical propagation of the equations of motion, and (ii) an estimation kernel based on genetic algorithms, which follows the usual steps of such approaches: initialization and evolution of a selected population, so as to determine the best parameters. Each parameter to be estimated, namely each initial Keplerian element, has to be searched within an interval that is chosen beforehand. The algorithm is expected to converge towards an optimum over a reasonable computational time.
Image registration based on subpixel localization and Cauchy-Schwarz divergence
NASA Astrophysics Data System (ADS)
Ge, Yongxin; Yang, Dan; Zhang, Xiaohong; Lu, Jiwen
2010-07-01
We define a new matching metric, the corner Cauchy-Schwarz divergence (CCSD), and present a new approach based on the proposed CCSD and subpixel localization for image registration. First, we detect the corners in an image with a multiscale Harris operator and take them as initial interest points. Then, a subpixel localization technique is applied to determine the locations of the corners and eliminate false and unstable corners. After that, the CCSD is used to obtain the initial matching corners. Finally, we use random sample consensus to robustly estimate the parameters based on the initial matching. The experimental results demonstrate that the proposed algorithm performs well in terms of both accuracy and efficiency.
State of Charge estimation of lithium ion battery based on extended Kalman filtering algorithm
NASA Astrophysics Data System (ADS)
Yang, Fan; Feng, Yiming; Pan, Binbiao; Wan, Renzhuo; Wang, Jun
2017-08-01
Accurate estimation of state of charge (SOC) for lithium ion batteries is crucial for real-time diagnosis and prognosis in green energy vehicles. In this paper, a state space model of the battery based on the Thevenin model is adopted. A strategy for estimating SOC based on an extended Kalman filter is presented, combined with the ampere-hour counting (AH) and open circuit voltage (OCV) methods. The comparison between simulation and experiments indicates that the model's performance matches that of the lithium ion battery well. The extended Kalman filter algorithm maintains good accuracy and is less dependent on its initial value over the full range of SOC, which proves it to be suitable for online SOC estimation.
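A compact EKF sketch for a first-order Thevenin model (states: SOC and RC-branch voltage V1). All parameter values and the linear OCV curve are illustrative stand-ins, not the paper's battery data:

```python
import numpy as np

Q = 2.0 * 3600            # capacity [A*s]  (assumed 2 Ah cell)
R0, R1, C1 = 0.05, 0.02, 1000.0
dt = 1.0                  # sample time [s]

def ocv(soc):  return 3.4 + 0.8 * soc   # toy linear OCV(SOC)
def docv(soc): return 0.8               # its slope

def ekf_soc(current, voltage, soc0=0.5):
    x = np.array([soc0, 0.0])           # state: [SOC, V1]
    P = np.diag([0.1, 0.01])
    Qn, Rn = np.diag([1e-7, 1e-6]), 1e-3
    a = np.exp(-dt / (R1 * C1))
    F = np.array([[1.0, 0.0], [0.0, a]])
    for i, v in zip(current, voltage):
        # predict: coulomb counting for SOC, RC relaxation for V1
        x = np.array([x[0] - i * dt / Q, a * x[1] + R1 * (1 - a) * i])
        P = F @ P @ F.T + Qn
        # update with the terminal-voltage measurement
        H = np.array([docv(x[0]), -1.0])
        v_pred = ocv(x[0]) - x[1] - R0 * i
        S = H @ P @ H + Rn
        K = P @ H / S
        x = x + K * (v - v_pred)
        P = (np.eye(2) - np.outer(K, H)) @ P
    return x[0]

# Synthetic 1 A discharge for 10 minutes starting from true SOC = 0.8,
# with a deliberately wrong initial guess of 0.5:
t = np.arange(600)
true_soc = 0.8 - 1.0 * t * dt / Q
v_meas = ocv(true_soc) - R0 * 1.0 - R1 * 1.0   # crude steady-state voltage
print("EKF SOC estimate:", ekf_soc(np.ones(600), v_meas, soc0=0.5))
```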
Robinson, Hugh S.; Ruth, Toni K.; Gude, Justin A.; Choate, David; DeSimone, Rich; Hebblewhite, Mark; Matchett, Marc R.; Mitchell, Michael S.; Murphy, Kerry; Williams, Jim
2015-01-01
To be most effective, the scale of wildlife management practices should match the range of a particular species’ movements. For this reason, combined with our inability to rigorously or regularly census mountain lion populations, several authors have suggested that mountain lions be managed in a source-sink or metapopulation framework. We used a combination of resource selection functions, mortality estimation, and dispersal modeling to estimate cougar population levels in Montana statewide and potential population-level effects of planned harvest levels. Between 1980 and 2012, 236 independent mountain lions were collared and monitored for research in Montana. From these data we used 18,695 GPS locations collected during winter from 85 animals to develop a resource selection function (RSF), and 11,726 VHF and GPS locations from 142 animals along with the locations of 6343 mountain lions harvested from 1988–2011 to validate the RSF model. Our RSF model validated well in all portions of the State, although it appeared to perform better in Montana Fish, Wildlife and Parks (MFWP) Regions 1, 2, 4 and 6, than in Regions 3, 5, and 7. Our mean RSF-based population estimate for the total population (kittens, juveniles, and adults) of mountain lions in Montana in 2005 was 3926, with almost 25% of the entire population in MFWP Region 1. Estimates based on high and low reference population estimates produce a possible range of 2784 to 5156 mountain lions statewide. Based on a range of possible survival rates we estimated the mountain lion population in Montana to be stable to slightly increasing between 2005 and 2010, with lambda ranging from 0.999 (SD = 0.05) to 1.02 (SD = 0.03). We believe these population growth rates to be a conservative estimate of true population growth. Our model suggests that proposed changes to female harvest quotas for 2013–2015 will result in an annual statewide population decline of 3% and shows that, due to reduced dispersal, changes to harvest in one management unit may affect population growth in neighboring units where smaller or even no changes were made. Uncertainty regarding dispersal levels and initial population density may have a significant effect on predictions at a management unit scale (i.e. 2000 km2), while at a regional scale (i.e. 50,000 km2) large differences in initial population density result in relatively small changes in population growth rate, and uncertainty about dispersal may not be as influential. Doubling the presumed initial density from a low estimate of 2.19 total animals per 100 km2 to a high density of 4.04 total animals per 100 km2 resulted in a difference in annual population growth rate of only 2.6% statewide (low initial population estimate λ = 0.99, while high initial population estimate λ = 1.03). We suggest modeling tools such as this may be useful in harvest planning at a regional and statewide level.
Active contour-based visual tracking by integrating colors, shapes, and motions.
Hu, Weiming; Zhou, Xue; Li, Wei; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen
2013-05-01
In this paper, we present a framework for active contour-based visual tracking using level sets. The main components of our framework include contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For the initialization of contour-based tracking, we develop an optical flow-based algorithm for automatically initializing contours at the first frame. For the color-based contour evolution, Markov random field theory is used to measure correlations between values of neighboring pixels for posterior probability estimation. For adaptive shape-based contour evolution, the global shape information and the local color information are combined to hierarchically evolve the contour, and a flexible shape updating model is constructed. For the dynamic shape-based contour evolution, a shape mode transition matrix is learnt to characterize the temporal correlations of object shapes. For the handling of abrupt motions, particle swarm optimization is adopted to capture the global motion which is applied to the contour in the current frame to produce an initial contour in the next frame.
2012-10-12
...structure on the evolving storm behaviour. Large-scale influences on Rapid Intensification (RI) and Extratropical Transition (ET)... assimilation techniques to better initialize and validate TC structures (including the intense inner core and storm asymmetries) consistent with the large... Without vortex specification, initial conditions usually contain a weak and misplaced circulation. Based on estimates of central pressure and storm size...
Nishiura, Hiroshi; Chowell, Gerardo; Safan, Muntaser; Castillo-Chavez, Carlos
2010-01-07
In many parts of the world, the exponential growth rate of infections during the initial epidemic phase has been used to make statistical inferences on the reproduction number, R, a summary measure of the transmission potential for the novel influenza A (H1N1) 2009. The growth rate at the initial stage of the epidemic in Japan led to estimates for R in the range 2.0 to 2.6, capturing the intensity of the initial outbreak among school-age children in May 2009. An updated estimate of R that takes into account the epidemic data from 29 May to 14 July is provided. An age-structured renewal process is employed to capture the age-dependent transmission dynamics, jointly estimating the reproduction number, the age-dependent susceptibility and the relative contribution of imported cases to secondary transmission. Pitfalls in estimating epidemic growth rates are identified and used for scrutinizing and re-assessing the results of our earlier estimate of R. Maximum likelihood estimates of R using the data from 29 May to 14 July ranged from 1.21 to 1.35. The next-generation matrix, based on our age-structured model, predicts that only 17.5% of the population will experience infection by the end of the first pandemic wave. Our earlier estimate of R did not fully capture the population-wide epidemic in quantifying the next-generation matrix from the estimated growth rate during the initial stage of the pandemic in Japan. In order to quantify R from the growth rate of cases, it is essential that the selected model captures the underlying transmission dynamics embedded in the data. Exploring additional epidemiological information will be useful for assessing the temporal dynamics. Although the simple concept of R is more easily grasped by the general public than that of the next-generation matrix, the matrix incorporating detailed information (e.g., age-specificity) is essential for reducing the levels of uncertainty in predictions and for assisting public health policymaking. Model-based prediction and policymaking are best described by sharing fundamental notions of heterogeneous risks of infection and death with non-experts to avoid potential confusion and/or possible misuse of modelling results.
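The basic link between an observed exponential growth rate r and R can be sketched with the renewal (Euler-Lotka) relation R = 1 / Σ_a exp(−r·a)·w(a), where w is a discretized generation-interval distribution. The age structure used in the paper is omitted here, and the generation-interval parameters are assumptions:

```python
import numpy as np

def r_to_reproduction_number(r, gen_mean=2.8, gen_sd=1.0, horizon=15):
    """R from growth rate r via the discretized Euler-Lotka equation."""
    a = np.arange(1, horizon + 1, dtype=float)          # days since infection
    shape = (gen_mean / gen_sd) ** 2                    # gamma-shaped interval
    scale = gen_sd ** 2 / gen_mean
    w = a ** (shape - 1) * np.exp(-a / scale)
    w /= w.sum()                                        # normalize weights
    return 1.0 / np.sum(np.exp(-r * a) * w)

print(r_to_reproduction_number(0.1))   # modest growth -> R slightly above 1
print(r_to_reproduction_number(0.3))   # fast early growth -> larger R
```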
Woolley, Thomas E; Belmonte-Beitia, Juan; Calvo, Gabriel F; Hopewell, John W; Gaffney, Eamonn A; Jones, Bleddyn
2018-06-01
To estimate, from experimental data, the retreatment radiation 'tolerances' of the spinal cord at different times after initial treatment. A model was developed to show the relationship between the biological effective doses (BEDs) for two separate courses of treatment, with the BED of each course being expressed as a percentage of the designated 'retreatment tolerance' BED value, denoted [Formula: see text] and [Formula: see text]. The primate data of Ang et al. (2001) were used to determine the fitted parameters. However, based on rodent data, recovery was assumed to commence 70 days after the first course was complete, with a non-linear relationship to the magnitude of the initial BED (BED_init). The model, taking into account the above processes, provides estimates of the retreatment tolerance dose after different times. Extrapolations from the experimental data can provide conservative estimates for the clinic, with a lower acceptable myelopathy incidence. Care must be taken to convert the predicted [Formula: see text] value into a formal BED value and then a practical dose fractionation schedule. Used with caution, the proposed model allows estimation of retreatment doses at elapsed times ranging from 70 days up to three years after the initial course of treatment.
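The BED arithmetic this model builds on is standard: BED = n·d·(1 + d/(α/β)). A small sketch; the α/β of 2 Gy and the tolerance value are common illustrative choices, not values taken from the paper:

```python
# BED = n*d*(1 + d/(alpha/beta)); alpha/beta = 2 Gy is a common choice
# for spinal cord, used here purely for illustration.

def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float = 2.0) -> float:
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

course1 = bed(20, 2.0)          # 40 Gy delivered in 2-Gy fractions -> 80 Gy_2
tolerance_bed = 100.0           # assumed tolerance BED [Gy_2], illustrative
print(f"course-1 BED = {course1:.0f} Gy_2 "
      f"({100 * course1 / tolerance_bed:.0f}% of the assumed tolerance)")
```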
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng Guoyan
2010-04-15
Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph, and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching techniques, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.
Assessing Tuberculosis Case Fatality Ratio: A Meta-Analysis
Straetemans, Masja; Glaziou, Philippe; Bierrenbach, Ana L.; Sismanidis, Charalambos; van der Werf, Marieke J.
2011-01-01
Background Recently, the tuberculosis (TB) Task Force Impact Measurement acknowledged the need to review the assumptions underlying the TB mortality estimates published annually by the World Health Organization (WHO). TB mortality is indirectly measured by multiplying estimated TB incidence with the estimated case fatality ratio (CFR). We conducted a meta-analysis to estimate the TB case fatality ratio in TB patients having initiated TB treatment. Methods We searched for eligible studies in the PubMed and Embase databases through March 4th 2011 and by reference listing of relevant review articles. Main analyses included the estimation of the pooled percentages of: a) TB patients dying due to TB after having initiated TB treatment and b) TB patients dying during TB treatment. Pooled percentages were estimated using random effects regression models on the combined patient population from all studies. Main Results We identified 69 relevant studies, of which 22 provided data on mortality due to TB and 59 provided data on mortality during TB treatment. Among HIV-infected persons the pooled percentage of TB patients dying due to TB was 9.2% (95% Confidence Interval (CI): 3.7%–14.7%) and among HIV-uninfected persons 3.0% (95% CI: −1.2%–7.4%), based on the results of eight and three studies respectively providing data for this analysis. The pooled percentage of TB patients dying during TB treatment was 18.8% (95% CI: 14.8%–22.8%) among HIV-infected patients and 3.5% (95% CI: 2.0%–4.92%) among HIV-uninfected patients, based on the results of 27 and 19 studies respectively. Conclusion The results of the literature review are useful in generating prior distributions of CFR in countries with vital registration systems and have contributed towards revised estimates of TB mortality. This literature review did not provide us with all the data needed for a valid estimation of TB CFR in TB patients initiating TB treatment. PMID:21738585
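The generic machinery behind such pooled estimates is random-effects pooling. A sketch of the DerSimonian-Laird approach on illustrative (deaths, n) pairs, not the studies from this review:

```python
import numpy as np

def pooled_proportion(deaths, n):
    """DerSimonian-Laird random-effects pooling of simple proportions."""
    p = deaths / n
    var = p * (1 - p) / n                          # within-study variance
    w = 1 / var
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)             # Cochran's Q heterogeneity
    tau2 = max(0.0, (q - (len(p) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (var + tau2)                        # random-effects weights
    p_re = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return p_re, (p_re - 1.96 * se, p_re + 1.96 * se)

deaths = np.array([12.0, 30.0, 9.0, 22.0])         # illustrative study data
n = np.array([150.0, 260.0, 140.0, 210.0])
est, ci = pooled_proportion(deaths, n)
print(f"pooled CFR = {est:.1%}, 95% CI ({ci[0]:.1%}, {ci[1]:.1%})")
```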
Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage
Lee, Kyuman; Baek, Hoki; Lim, Jaesung
2016-01-01
The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference stations for user positioning. PMID:27529252
An activity-based methodology for operations cost analysis
NASA Technical Reports Server (NTRS)
Korsmeyer, David; Bilby, Curt; Frizzell, R. A.
1991-01-01
This report describes an activity-based cost estimation method, proposed for the Space Exploration Initiative (SEI), as an alternative to NASA's traditional mass-based cost estimation method. A case study demonstrates how the activity-based cost estimation technique can be used to identify the operations that have a significant impact on costs over the life cycle of the SEI. The case study yielded an operations cost of $101 billion for the 20-year span of the lunar surface operations for the Option 5a program architecture. In addition, the results indicated that the support and training costs for the missions were the greatest contributors to the annual cost estimates. A cost-sensitivity analysis of the cultural and architectural drivers determined that the length of training and the amount of support associated with the ground support personnel for mission activities are the most significant cost contributors.
NASA Astrophysics Data System (ADS)
Or, D.; von Ruette, J.; Lehmann, P.
2017-12-01
Landslides and subsequent debris flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a landslide hydro-mechanical triggering model with a simple model for debris-flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslide patterns based on digital elevation models (SRTM) linked with high-resolution global soil maps (SoilGrids, 250 m resolution) and satellite-based information on rainfall statistics for the selected region. In the preprocessing phase the STEP-TRAMM model estimates the soil depth distribution to supplement other soil information for delineating key hydrological and mechanical properties relevant to representing local soil failure. We will illustrate this publicly available GUI and modeling platform by simulating effects of deforestation on landslide hazards in several regions and compare model outcomes with satellite-based information.
Antonarakis, Alexander S; Saatchi, Sassan S; Chazdon, Robin L; Moorcroft, Paul R
2011-06-01
Insights into vegetation and aboveground biomass dynamics within terrestrial ecosystems have come almost exclusively from ground-based forest inventories that are limited in their spatial extent. Lidar and synthetic-aperture Radar are promising remote-sensing-based techniques for obtaining comprehensive measurements of forest structure at regional to global scales. In this study we investigate how Lidar-derived forest heights and Radar-derived aboveground biomass can be used to constrain the dynamics of the ED2 terrestrial biosphere model. Four-year simulations initialized with Lidar and Radar structure variables were compared against simulations initialized from forest-inventory data and output from a long-term potential-vegetation simulation. Both height and biomass initializations from Lidar and Radar measurements significantly improved the representation of forest structure within the model, eliminating the bias of too many large trees that arose in the potential-vegetation-initialized simulation. The Lidar and Radar initializations decreased the proportion of larger trees estimated by the potential-vegetation simulation by approximately 20-30%, matching the forest inventory. This resulted in improved predictions of ecosystem-scale carbon fluxes and structural dynamics compared to predictions from the potential-vegetation simulation. The Radar initialization produced biomass values that were 75% closer to the forest inventory, with Lidar initializations producing canopy height values closest to the forest inventory. Net primary production values for the Radar and Lidar initializations were around 6-8% closer to the forest inventory. Correcting the Lidar and Radar initializations for forest composition resulted in improved biomass and basal-area dynamics as well as leaf-area index. Correcting the Lidar and Radar initializations for forest composition and fine-scale structure by combining the remote-sensing measurements with ground-based inventory data further improved predictions, suggesting that further improvements of structural and carbon-flux metrics will also depend on obtaining reliable estimates of forest composition and accurate representation of the fine-scale vertical and horizontal structure of plant canopies.
Duke, Lori J; Staton, April G; McCullough, Elizabeth S; Jain, Rahul; Miller, Mindi S; Lynn Stevenson, T; Fetterman, James W; Lynn Parham, R; Sheffield, Melody C; Unterwagner, Whitney L; McDuffie, Charles H
2012-04-10
To document the annual number of advanced pharmacy practice experience (APPE) placement changes for students across 5 colleges and schools of pharmacy, identify and compare initiating reasons, and estimate the associated administrative workload. Data collection occurred from finalization of the 2008-2009 APPE assignments through the last date of the APPE schedule. Internet-based customized tracking forms were used to categorize the initiating reason for each placement change and the administrative time required per change (0 to 120 minutes). APPE placement changes per institution varied from 14% to 53% of total assignments. Reasons for changes were: administrator initiated (20%), student initiated (23%), and site/preceptor initiated (57%). Total administrative time required per change varied across institutions from 3,130 to 22,750 minutes, while the average time per reassignment was 42.5 minutes. APPE placements are subject to high instability. Significant differences exist between public and private colleges and schools of pharmacy as to the number and type of APPE reassignments made and associated workload estimates.
Thein, Hla-Hla; Jembere, Nathaniel; Thavorn, Kednapa; Chan, Kelvin K W; Coyte, Peter C; de Oliveira, Claire; Hur, Chin; Earle, Craig C
2018-06-27
Esophageal adenocarcinoma (EAC) incidence is increasing rapidly. Esophageal cancer has the second lowest 5-year survival rate of people diagnosed with cancer in Canada. Given the poor survival and the potential for further increases in incidence, phase-specific cost estimates constitute an important input for economic evaluation of prevention, screening, and treatment interventions. The study aims to estimate phase-specific net direct medical costs of care attributable to EAC, costs stratified by cancer stage and treatment, and predictors of total net costs of care for EAC. A population-based retrospective cohort study was conducted using Ontario Cancer Registry-linked administrative health data from 2003 to 2011. The mean net costs of EAC care per 30 patient-days (2016 CAD) were estimated from the payer perspective using phase of care approach and generalized estimating equations. Predictors of net cost by phase of care were based on a generalized estimating equations model with a logarithmic link and gamma distribution adjusting for sociodemographic and clinical factors. The mean net costs of EAC care per 30 patient-days were $1016 (95% CI, $955-$1078) in the initial phase, $669 (95% CI, $594-$743) in the continuing care phase, and $8678 (95% CI, $8217-$9139) in the terminal phase. Overall, stage IV at diagnosis and surgery plus radiotherapy for EAC incurred the highest cost, particularly in the terminal phase. Strong predictors of higher net costs were receipt of chemotherapy plus radiotherapy, surgery plus chemotherapy, radiotherapy alone, surgery alone, and chemotherapy alone in the initial and continuing care phases, stage III-IV disease and patients diagnosed with EAC later in a calendar year (2007-2011) in the initial and terminal phases, comorbidity in the continuing care phase, and older age at diagnosis (70-74 years), and geographic region in the terminal phase. Costs of care vary by phase of care, stage at diagnosis, and type of treatment for EAC. These cost estimates provide information to guide future resource allocation decisions, and clinical and policy interventions to reduce the burden of EAC.
NASA Astrophysics Data System (ADS)
Peres, David Johnny; Cancelliere, Antonino
2016-04-01
Assessment of shallow landslide hazard is important for appropriate planning of mitigation measures. Generally, the return period of slope instability is assumed as a quantitative metric to map landslide triggering hazard on a catchment. The most commonly applied approach to estimate such a return period consists in coupling a physically-based landslide triggering model (hydrological and slope stability) with rainfall intensity-duration-frequency (IDF) curves. Among the drawbacks of such an approach, the following assumptions may be mentioned: (1) prefixed initial conditions, with no regard to their probability of occurrence, and (2) constant-intensity hyetographs. In our work we propose the use of a Monte Carlo simulation approach in order to investigate the effects of the two above mentioned assumptions. The approach is based on coupling a physically based hydrological and slope stability model with a stochastic rainfall time series generator. By this methodology a long series of synthetic rainfall data can be generated and given as input to a physically based landslide triggering model, in order to compute the return period of landslide triggering as the mean inter-arrival time of a factor of safety less than one. In particular, we couple the Neyman-Scott rectangular pulses model for hourly rainfall generation and the TRIGRS v.2 unsaturated model for the computation of transient response to individual rainfall events. Initial conditions are computed by a water table recession model that links initial conditions at a given event to the final response at the preceding event, thus taking into account the variable inter-arrival time between storms. One thousand years of synthetic hourly rainfall are generated to estimate return periods up to 100 years. Applications are first carried out to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. Then a set of additional simulations is performed in order to compare the results obtained by the traditional IDF-based method with the Monte Carlo ones. Results indicate that both the variability of initial conditions and of intra-event rainfall intensity significantly affect return period estimation. In particular, the common assumption of an initial water table depth at the base of the pervious strata may lead in practice to an overestimation of the return period by up to one order of magnitude, while the assumption of constant-intensity hyetographs may yield an overestimation by a factor of two or three. Hence, it may be concluded that the analysed simplifications involved in the traditional IDF-based approach generally imply a non-conservative assessment of landslide triggering hazard.
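A toy sketch of the Monte Carlo return-period computation, under loud simplifications: Poisson storm arrivals with exponential depths stand in for the Neyman-Scott generator, a single recession bucket stands in for TRIGRS, and the factor of safety FS is a hypothetical linear function of antecedent wetness. Only the logic (return period = mean inter-arrival time of FS < 1) follows the abstract:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(42)
years = 1000
hours = years * 365 * 24
storm_prob = 1 / 72.0                        # mean inter-storm time of 72 h (assumed)
depth = np.where(rng.random(hours) < storm_prob,
                 rng.exponential(15.0, hours), 0.0)  # mm of rain per storm hour

k = 0.995                                    # hourly water-table recession (assumed)
wetness = lfilter([1.0], [1.0, -k], depth)   # w[t] = k * w[t-1] + depth[t]

fs = 2.0 - 0.008 * wetness                   # toy linear FS-wetness relation (assumed)
cross = np.flatnonzero((fs[1:] < 1.0) & (fs[:-1] >= 1.0))  # triggering onsets
if cross.size > 1:
    T = np.diff(cross).mean() / (365 * 24)   # mean inter-arrival time, in years
    print(f"estimated return period of triggering: {T:.2f} years")
```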
NASA Astrophysics Data System (ADS)
Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira
The authors previously proposed an estimation method combining the k-means algorithm and a neural network (NN) for evaluating massage. However, this method has the problem that the discrimination ratio decreases for new users. There are two causes: the generalization ability of the NN is poor, and the clusters produced by the k-means algorithm do not have high within-class correlation coefficients. This research therefore proposes a k-means algorithm driven by the correlation coefficient, together with incremental learning for the NN. The proposed k-means algorithm includes an evaluation function based on the correlation coefficient. In the incremental learning scheme, the NN is trained on new data with its weights initialized from the network learned on the existing data. The effectiveness of the proposed methods is verified by estimation results on EEG data recorded while subjects received massage.
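A minimal sketch of the incremental-learning idea under stated assumptions: hypothetical feature arrays stand in for the EEG features, and scikit-learn's partial_fit provides the warm-started fine-tuning; this is not the authors' network:

```python
# A network first trained on existing users initialises training on the new
# user's data, rather than restarting from random weights.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(300, 8)), rng.integers(0, 3, 300)  # existing users
X_new, y_new = rng.normal(size=(30, 8)), rng.integers(0, 3, 30)    # new user

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_old, y_old)            # weights learned from the existing data
for _ in range(20):              # brief incremental fine-tuning on the new user
    clf.partial_fit(X_new, y_new)
print(clf.score(X_new, y_new))
```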
Facente, Shelley N; Grebe, Eduard; Burk, Katie; Morris, Meghan D; Murphy, Edward L; Mirzazadeh, Ali; Smith, Aaron A; Sanchez, Melissa A; Evans, Jennifer L; Nishimura, Amy; Raymond, Henry F
2018-01-01
Initiated in 2016, End Hep C SF is a comprehensive initiative to eliminate hepatitis C (HCV) infection in San Francisco. The introduction of direct-acting antivirals to treat and cure HCV provides an opportunity for elimination. To properly measure progress, an estimate of baseline HCV prevalence, and of the number of people in various subpopulations with active HCV infection, is required to target and measure the impact of interventions. Our analysis was designed to incorporate multiple relevant data sources and estimate HCV burden for the San Francisco population as a whole, including specific key populations at higher risk of infection. Our estimates are based on triangulation of data found in case registries, medical records, observational studies, and published literature from 2010 through 2017. We examined subpopulations based on sex, age and/or HCV risk group. When multiple sources of data were available for subpopulation estimates, we calculated a weighted average using inverse variance weighting. Credible ranges (CRs) were derived from 95% confidence intervals of population size and prevalence estimates. We estimate that 21,758 residents of San Francisco are HCV seropositive (CR: 10,274-42,067), representing an overall seroprevalence of 2.5% (CR: 1.2%- 4.9%). Of these, 16,408 are estimated to be viremic (CR: 6,505-37,407), though this estimate includes treated cases; up to 12,257 of these (CR: 2,354-33,256) are people who are untreated and infectious. People who injected drugs in the last year represent 67.9% of viremic HCV infections. We estimated approximately 7,400 (51%) more HCV seropositive cases than are included in San Francisco's HCV surveillance case registry. Our estimate provides a useful baseline against which the impact of End Hep C SF can be measured.
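The pooling step can be made concrete. A small sketch of inverse-variance weighting with illustrative numbers (not the study's estimates): each subpopulation estimate is weighted by the reciprocal of its variance, and the pooled variance is the reciprocal of the summed weights.

```python
import numpy as np

def ivw(estimates, variances):
    """Inverse-variance weighted average of independent estimates."""
    w = 1.0 / np.asarray(variances)
    est = np.average(estimates, weights=w)
    var = 1.0 / w.sum()              # variance of the combined estimate
    return est, var

# Three hypothetical prevalence estimates of the same subpopulation.
est, var = ivw([0.024, 0.031, 0.022], [1e-5, 4e-5, 2.5e-5])
print(f"pooled prevalence {est:.3f} +/- {1.96 * var ** 0.5:.3f}")
```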
Schell, Greggory J; Lavieri, Mariel S; Stein, Joshua D; Musch, David C
2013-12-21
Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology which accounts for the inherent noise in the data and improves the identification of significant disease progression. Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true value of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another model based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-validation. The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements. The mean AUC for the Kalman filter-based model is 0.961 while the mean AUC for the raw measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE for logistic regression, we are able to more accurately classify patients and instances as experiencing significant OAG progression. A Kalman filter approach for estimating the true value of OAG biomarkers resulted in data input which improved the accuracy of a logistic regression classification model compared to a model using raw measurements as input. This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
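A minimal sketch of the pipeline's logic under simplifying assumptions: a scalar random-walk Kalman filter denoises a synthetic biomarker series, and the filtered slope feeds a logistic classifier of progression. The data, noise levels and features are hypothetical, not the CIGTS parameterization:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def kalman_1d(z, q=0.05, r=1.0):
    """Random-walk state, scalar measurements; returns filtered estimates."""
    x, p, out = z[0], 1.0, []
    for zk in z:
        p += q                      # predict: process noise inflates covariance
        k = p / (p + r)             # Kalman gain
        x += k * (zk - x)           # update with the new measurement
        p *= (1 - k)
        out.append(x)
    return np.array(out)

# Synthetic cohort: progressors drift upward, the rest stay flat.
n, T = 200, 12
labels = rng.integers(0, 2, n)
drift = np.where(labels, 0.3, 0.0)[:, None] * np.arange(T)
series = drift + rng.normal(0, 1.5, (n, T))

raw_slope = np.array([np.polyfit(np.arange(T), s, 1)[0] for s in series])
kf_slope = np.array([np.polyfit(np.arange(T), kalman_1d(s), 1)[0] for s in series])

for name, feat in [("raw", raw_slope), ("kalman", kf_slope)]:
    p = LogisticRegression().fit(feat[:, None], labels).predict_proba(feat[:, None])[:, 1]
    print(name, round(roc_auc_score(labels, p), 3))
```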
NASA Astrophysics Data System (ADS)
Redemann, J.; Livingston, J. M.; Shinozuka, Y.; Kacenelenbogen, M. S.; Russell, P. B.; LeBlanc, S. E.; Vaughan, M.; Ferrare, R. A.; Hostetler, C. A.; Rogers, R. R.; Burton, S. P.; Torres, O.; Remer, L. A.; Stier, P.; Schutgens, N.
2014-12-01
We describe a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. Initial calculations of seasonal clear-sky aerosol radiative forcing based on our multi-sensor aerosol retrievals compare well with over-ocean and top of the atmosphere IPCC-2007 model-based results, and with more recent assessments in the "Climate Change Science Program Report: Atmospheric Aerosol Properties and Climate Impacts" (2009). For the first time, we present comparisons of our multi-sensor aerosol direct radiative forcing estimates to values derived from a subset of models that participated in the latest AeroCom initiative. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.
A two-step super-Gaussian independent component analysis approach for fMRI data.
Ge, Ruiyang; Yao, Li; Zhang, Hang; Long, Zhiying
2015-09-01
Independent component analysis (ICA) has been widely applied to functional magnetic resonance imaging (fMRI) data analysis. Although ICA assumes that the sources underlying the data are statistically independent, it usually ignores the sources' additional properties, such as sparsity. In this study, we propose a two-step super-Gaussian ICA (2SGICA) method that incorporates the sparse prior of the sources into the ICA model. In the first step, 2SGICA uses the super-Gaussian ICA (SGICA) algorithm, which is based on a simplified Lewicki-Sejnowski model, to obtain the initial source estimate. Using a kernel estimator technique, the source density is acquired and fitted to the Laplacian function based on the initial source estimates. The fitted Laplacian prior is used for each source in the second SGICA step. Moreover, the automatic target generation process for initial value generation is used in 2SGICA to guarantee the stability of the algorithm. An adaptive step size selection criterion is also implemented in the proposed algorithm. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of 2SGICA and made a performance comparison between InfomaxICA, FastICA, mean field ICA (MFICA) with Laplacian prior, sparse online dictionary learning (ODL), SGICA and 2SGICA. Both simulated and real fMRI experiments showed that 2SGICA was most robust to noise and had the best spatial detection power and time course estimation among the six methods.
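The second-step prior construction lends itself to a small illustration. A sketch under an assumption: here the maximum-likelihood Laplacian parameters are computed directly from the initial source estimates, in place of the kernel-density fit the paper describes:

```python
import numpy as np

def fit_laplacian(s):
    """MLE of the Laplacian p(s) = exp(-|s - mu| / b) / (2b)."""
    mu = np.median(s)                # ML location of a Laplacian is the median
    b = np.mean(np.abs(s - mu))      # ML scale is the mean absolute deviation
    return mu, b

# Stand-in for the first-pass SGICA source estimates.
s_init = np.random.default_rng(0).laplace(0.0, 2.0, 10000)
mu, b = fit_laplacian(s_init)
print(f"fitted location {mu:.3f}, scale {b:.3f}")   # approx. 0 and 2
```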
Walsh, L; Zhang, W; Shore, R E; Auvinen, A; Laurier, D; Wakeford, R; Jacob, P; Gent, N; Anspaugh, L R; Schüz, J; Kesminiene, A; van Deventer, E; Tritscher, A; del Rosarion Pérez, M
2014-11-01
We present here a methodology for health risk assessment adopted by the World Health Organization that provides a framework for estimating risks from the Fukushima nuclear accident after the March 11, 2011 Japanese major earthquake and tsunami. Substantial attention has been given to the possible health risks associated with human exposure to radiation from damaged reactors at the Fukushima Daiichi nuclear power station. Cumulative doses were estimated and applied for each post-accident year of life, based on a reference level of exposure during the first year after the earthquake. A lifetime cumulative dose of twice the first year dose was estimated for the primary radionuclide contaminants (¹³⁴Cs and ¹³⁷Cs) and is based on Chernobyl data, relative abundances of cesium isotopes, and cleanup efforts. Risks for particularly radiosensitive cancer sites (leukemia, thyroid and breast cancer), as well as the combined risk for all solid cancers, were considered. The male and female cumulative risks of cancer incidence attributed to radiation doses from the accident, for those exposed at various ages, were estimated in terms of the lifetime attributable risk (LAR). Calculations of LAR were based on recent Japanese population statistics for cancer incidence and current radiation risk models from the Life Span Study of Japanese A-bomb survivors. Cancer risks over an initial period of 15 years after first exposure were also considered. LAR results were also given as a percentage of the lifetime baseline risk (i.e., the cancer risk in the absence of radiation exposure from the accident). The LAR results were based on either a reference first year dose (10 mGy) or a reference lifetime dose (20 mGy) so that risk assessment may be applied for relocated and non-relocated members of the public, as well as for adult male emergency workers. The results show that the major contribution to LAR from the reference lifetime dose comes from the first year dose. For a dose of 10 mGy in the first year and continuing exposure, the lifetime radiation-related cancer risks based on lifetime dose (which are highest for children under 5 years of age at initial exposure) are small, and much smaller than the lifetime baseline cancer risks. For example, after initial exposure at age 1 year, the lifetime excess radiation risk and baseline risk of all solid cancers in females were estimated to be 0.7 × 10⁻² and 29.0 × 10⁻², respectively. The 15 year risks based on the lifetime reference dose are very small. However, for initial exposure in childhood, the 15 year risks based on the lifetime reference dose are up to 33 and 88% as large as the 15 year baseline risks for leukemia and thyroid cancer, respectively. The results may be scaled to particular dose estimates after consideration of caveats. One caveat is related to the lack of epidemiological evidence defining risks at low doses, because the predicted risks come from cancer risk models fitted to a wide dose range (0-4 Gy), which assume that the solid cancer and leukemia lifetime risks for doses less than about 0.5 Gy and 0.2 Gy, respectively, are proportional to organ/tissue doses: this is unlikely to seriously underestimate risks, but may overestimate risks. This WHO-HRA framework may be used to update the risk estimates when new population health statistics data, dosimetry information and radiation risk models become available.
1982-09-01
characteristics) for one or more aircraft which had been temporarily excluded from the data base. Provided these results proved satisfactory, all of the... APPENDIX G: FACTOR ANALYSIS
Estimation of effective connectivity using multi-layer perceptron artificial neural network.
Talebi, Nasibeh; Nasrabadi, Ali Motie; Mohammad-Rezazadeh, Iman
2018-02-01
Studies on interactions between brain regions estimate effective connectivity, (usually) based on causality inferences made on the basis of temporal precedence. In this study, the causal relationship is modeled by a multi-layer perceptron feed-forward artificial neural network, because of the ANN's ability to generate appropriate input-output mappings and to learn from training examples without the need for detailed knowledge of the underlying system. At any time instant, the past samples of the data are placed at the network input, and the subsequent values are predicted at its output. To estimate the strength of interactions, the measure of "causality coefficient" is defined based on the network structure, the connecting weights and the parameters of the hidden layer activation function. Simulation analysis demonstrates that the method, called "CREANN" (Causal Relationship Estimation by Artificial Neural Network), can estimate time-invariant and time-varying effective connectivity in terms of MVAR coefficients. The method shows robustness with respect to the noise level of the data. Furthermore, the estimations are not significantly influenced by the model order (considered time-lag) or by different initial conditions (initial random weights and parameters of the network). CREANN is also applied to EEG data collected during a memory recognition task. The results indicate that it can show changes in the information flow between brain regions involved in the episodic memory retrieval process. These convincing results emphasize that CREANN can be used as an appropriate method to estimate the causal relationship among brain signals.
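A sketch of the prediction step behind such an analysis, with one loudly labeled substitution: the paper derives its causality coefficient from the trained weights, whereas this toy uses a simpler prediction-improvement proxy (a Granger-style comparison with an MLP predictor):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T, lag = 2000, 3
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):                  # x drives y with a one-step delay
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def embed(*series):
    """Stack lagged copies of each series as predictors of y[lag:]."""
    X = np.column_stack([s[lag - k - 1:T - k - 1] for s in series for k in range(lag)])
    return X, y[lag:]

for name, feats in [("y past only", (y,)), ("y and x past", (y, x))]:
    X, target = embed(*feats)
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
    err = np.mean((net.fit(X, target).predict(X) - target) ** 2)
    print(f"{name}: MSE {err:.4f}")    # lower MSE with x's past suggests x -> y
```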
NASA Technical Reports Server (NTRS)
Axelrad, Penina; Speed, Eden; Leitner, Jesse A. (Technical Monitor)
2002-01-01
This report summarizes the efforts to date in processing GPS measurements in High Earth Orbit (HEO) applications by the Colorado Center for Astrodynamics Research (CCAR). Two specific projects were conducted; initialization of the orbit propagation software, GEODE, using nominal orbital elements for the IMEX orbit, and processing of actual and simulated GPS data from the AMSAT satellite using a Doppler-only batch filter. CCAR has investigated a number of approaches for initialization of the GEODE orbit estimator with little a priori information. This document describes a batch solution approach that uses pseudorange or Doppler measurements collected over an orbital arc to compute an epoch state estimate. The algorithm is based on limited orbital element knowledge from which a coarse estimate of satellite position and velocity can be determined and used to initialize GEODE. This algorithm assumes knowledge of nominal orbital elements (a, e, i, Ω, ω) and uses a search on time of perigee passage (τ_p) to estimate the host satellite position within the orbit and the approximate receiver clock bias. Results of the method are shown for a simulation including large orbital uncertainties and measurement errors. In addition, CCAR has attempted to process GPS data from the AMSAT satellite to obtain an initial estimation of the orbit. Limited GPS data have been received to date, with few satellites tracked and no computed point solutions. Unknown variables in the received data have made computations of a precise orbit using the recovered pseudorange difficult. This document describes the Doppler-only batch approach used to compute the AMSAT orbit. Both actual flight data from AMSAT, and simulated data generated using the Satellite Tool Kit and Goddard Space Flight Center's Flight Simulator, were processed. Results for each case and conclusions are presented.
Nickel, Nathan C; Martens, Patricia J; Chateau, Dan; Brownell, Marni D; Sarkar, Joykrishna; Goh, Chun Yan; Burland, Elaine; Taylor, Carole; Katz, Alan
2014-07-31
Breastfeeding is associated with improved health. Surveillance data show that breastfeeding initiation rates have increased; however, limited work has examined trends in socio-economic inequalities in initiation. The study's research question was whether socio-economic inequalities in breastfeeding initiation have changed over the past 20 years. This population-based study is a project within PATHS Equity for Children. Analyses used hospital discharge data for Manitoba mother-infant dyads with live births, 1988-2011 (n=316,027). Income quintiles were created, each with ~20% of dyads. Three-year, overall and by-quintile breastfeeding initiation rates were estimated for Manitoba and two hospitals. Age-adjusted rates were estimated for Manitoba. Rates were modelled using generalized linear models. Three measures, rate ratios (RRs), rate differences (RDs) and concentration indices, assessed inequality at each time point. We also compared concentration indices with Gini coefficients to assess breastfeeding inequality vis-à-vis income inequality. Trend analyses tested for changes over time. Manitoba and Hospital A initiation rates increased; Hospital B rates did not change. Significant inequalities existed in nearly every period, across all three measures: RRs, RDs and concentration indices. RRs and concentration indices suggested little to no change in inequality from 1988 to 2011. RDs for Manitoba (comparing initiation in the highest to lowest income quintiles) did not change significantly over time. RDs decreased for Hospital A, suggesting decreasing socio-economic inequalities in breastfeeding; RDs increased for Hospital B. Income inequality increased significantly in Manitoba during the study period. Overall breastfeeding initiation rates can improve while inequality persists or worsens.
Evaluating Satellite-based Rainfall Estimates for Basin-scale Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Yilmaz, K. K.; Hogue, T. S.; Hsu, K.; Gupta, H. V.; Mahani, S. E.; Sorooshian, S.
2003-12-01
The reliability of any hydrologic simulation and basin outflow prediction effort depends primarily on the rainfall estimates. The problem of estimating rainfall becomes more obvious in basins with scarce or no rain gauges. We present an evaluation of satellite-based rainfall estimates for basin-scale hydrologic modeling with particular interest in ungauged basins. The initial phase of this study focuses on comparison of mean areal rainfall estimates from a ground-based rain gauge network, NEXRAD radar Stage-III, and satellite-based PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) and their influence on hydrologic model simulations over several basins in the U.S. Six-hourly accumulations of the above competing mean areal rainfall estimates are used as input to the Sacramento Soil Moisture Accounting Model. Preliminary experiments for the Leaf River Basin in Mississippi, for the period of March 2000 - June 2002, reveal that seasonality plays an important role in the comparison. Satellite-based rainfall overestimates during the summer and underestimates during the winter with respect to the competing rainfall estimates. The consequence of this result for the hydrologic model is that simulated discharge underestimates the major observed peak discharges during early spring for the basin under study. Future research will entail developing correction procedures, which depend on different factors such as seasonality, geographic location and basin size, for satellite-based rainfall estimates over basins with dense rain gauge networks and/or radar coverage. Extension of these correction procedures to satellite-based rainfall estimates over ungauged basins with similar characteristics has the potential for reducing the input uncertainty in ungauged basin modeling efforts.
Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.
Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang
2016-01-01
Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.
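One plausible reading of the two measures, sketched with synthetic contact-resistance data; the weighting kernel and the exact stability definition are assumptions, not the authors' formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
cycles = np.arange(50)
resistance = 100 + 0.05 * cycles + rng.normal(0, 0.8, 50)  # milliohms, synthetic

# Probability-weighted average: weights from a Gaussian kernel density
# estimate of the measurements (bandwidth of 1.0 assumed).
dens = np.exp(-0.5 * ((resistance[:, None] - resistance[None, :]) / 1.0) ** 2).mean(1)
weighted_value = np.average(resistance, weights=dens)

# Stability: gap between the extrema and the end points of a fitted curve.
fit = np.polyval(np.polyfit(cycles, resistance, 3), cycles)
stability = max(abs(fit.max() - fit[-1]), abs(fit.min() - fit[0]))
print(f"initial value {weighted_value:.2f} mOhm, stability {stability:.2f} mOhm")
```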
Estimating Antarctica land topography from GRACE gravity and ICESat altimetry data
NASA Astrophysics Data System (ADS)
Wu, I.; Chao, B. F.; Chen, Y.
2009-12-01
We propose a new method combining GRACE (Gravity Recovery and Climate Experiment) gravity and ICESat (Ice, Cloud, and land Elevation Satellite) altimetry data to estimate the land topography of Antarctica. Antarctica is the fifth-largest continent in the world, and about 98% of it is covered by ice, where in-situ measurements are difficult. Experimental airborne radar and ground-based radar data have revealed the land topography beneath the heavy ice sheet only in very limited areas. To estimate the land topography over the full coverage of Antarctica, we combine GRACE data, which indicate the mass distribution, with ICESat laser altimetry data, which provide high-resolution mapping of the ice topography. Our approach rests on geophysical constraints: we assume uniform densities of the land and ice and Airy-type isostasy. We first construct an initial model of the ice thickness and land topography based on the BEDMAP ice thickness and ICESat data. Thereafter we forward-compute the model's gravity field and compare it with the GRACE observed data. Our initial model then undergoes adjustments to improve the fit between the modeled results and the observed data. As a final examination, we compare our results with previous but sparse observations of ice thickness to reconfirm their reliability. As the gravitational inversion problem is non-unique, our estimate is only one of the possibilities, constrained by the available data in an optimal way.
Kalman Filters for Time Delay of Arrival-Based Source Localization
NASA Astrophysics Data System (ADS)
Klee, Ulrich; Gehrig, Tobias; McDonough, John
2006-12-01
In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
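A minimal EKF sketch of the idea above, with assumed array geometry, noise levels and constants (not the authors' implementation): the state is the 2-D speaker position, the observation vector is the TDOAs of three microphone pairs, and the Jacobian follows analytically from the range differences.

```python
import numpy as np

C = 343.0                                   # speed of sound, m/s
mics = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], float)
pairs = [(0, 1), (0, 2), (0, 3)]            # microphone pairs for the TDOAs

def tdoa(p):
    d = np.linalg.norm(mics - p, axis=1)
    return np.array([(d[i] - d[j]) / C for i, j in pairs])

def jac(p):
    d = np.linalg.norm(mics - p, axis=1)
    u = (p - mics) / d[:, None]             # unit vectors from mic to source
    return np.array([(u[i] - u[j]) / C for i, j in pairs])

true_pos = np.array([2.5, 1.5])
x, P = np.array([1.0, 1.0]), np.eye(2)      # initial guess and covariance
R = (1e-5) ** 2 * np.eye(len(pairs))        # TDOA noise covariance (assumed)
rng = np.random.default_rng(0)

for _ in range(20):                         # static speaker: repeated updates
    P += 1e-4 * np.eye(2)                   # small process noise keeps the gain alive
    z = tdoa(true_pos) + rng.normal(0, 1e-5, len(pairs))
    H = jac(x)
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - tdoa(x))
    P = (np.eye(2) - K @ H) @ P
print("estimate:", x.round(3), "true:", true_pos)
```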
Vehicle detection and orientation estimation using the radon transform
NASA Astrophysics Data System (ADS)
Pelapur, Rengarajan; Bunyak, Filiz; Palaniappan, Kannappan; Seetharaman, Gunasekaran
2013-05-01
Determining the location and orientation of vehicles in satellite and airborne imagery is a challenging task given the density of cars and other vehicles and complexity of the environment in urban scenes almost anywhere in the world. We have developed a robust and accurate method for detecting vehicles using a template-based directional chamfer matching, combined with vehicle orientation estimation based on a refined segmentation, followed by a Radon transform based profile variance peak analysis approach. The same algorithm was applied to both high resolution satellite imagery and wide area aerial imagery and initial results show robustness to illumination changes and geometric appearance distortions. Nearly 80% of the orientation angle estimates for 1585 vehicles across both satellite and aerial imagery were accurate to within 15° of the ground truth. In the case of satellite imagery alone, nearly 90% of the objects have an estimated error within ±1.0° of the ground truth.
Cooper, Caren B
2014-09-01
Accurate phenology data, such as the timing of migration and reproduction, is important for understanding how climate change influences birds. Given contradictory findings among localized studies regarding mismatches in timing of reproduction and peak food supply, broader-scale information is needed to understand how whole species respond to environmental change. Citizen science-participation of the public in genuine research-increases the geographic scale of research. Recent studies, however, showed weekend bias in reported first-arrival dates for migratory songbirds in databases created by citizen-science projects. I investigated whether weekend bias existed for clutch-initiation dates for common species in US citizen-science projects. Participants visited nests on Saturdays more frequently than other days. When participants visited nests during the laying stage, biased timing of visits did not translate into bias in estimated clutch-initiation dates, based on back-dating with the assumption of one egg laid per day. Participants, however, only visited nests during the laying stage for 25% of attempts of cup-nesting species and 58% of attempts in nest boxes. In some years, in lieu of visit data, participants provided their own estimates of clutch-initiation dates and were asked "did you visit the nest during the laying period?" Those participants who answered the question provided estimates of clutch-initiation dates with no day-of-week bias, irrespective of their answer. Those who did not answer the question were more likely to estimate clutch initiation on a Saturday. Data from citizen-science projects are useful in phenological studies when temporal biases can be checked and corrected through protocols and/or analytical methods.
O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin
2017-12-06
Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
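A toy sketch of the parameter search described above; the geometry, pulse model and focal metric are assumptions, not the authors' setup. Delay-and-sum images of a single point scatterer are formed for candidate average permittivities, and the candidate whose image maximises a sharpness metric is selected:

```python
import numpy as np

c0 = 3e8
ants = np.array([[np.cos(a), np.sin(a)]
                 for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]) * 0.1
target, eps_true = np.array([0.02, 0.01]), 9.0
t = np.linspace(0, 4e-9, 2000)

def pulse(delay):                        # Gaussian pulse echo at a given delay
    return np.exp(-((t - delay) / 5e-11) ** 2)

# Simulated monostatic signals generated with the true propagation speed.
v_true = c0 / np.sqrt(eps_true)
signals = [pulse(2 * np.linalg.norm(a - target) / v_true) for a in ants]

grid = np.stack(np.meshgrid(np.linspace(-0.05, 0.05, 41),
                            np.linspace(-0.05, 0.05, 41)), -1)

def image(eps):                          # delay-and-sum with assumed permittivity
    v = c0 / np.sqrt(eps)
    img = np.zeros(grid.shape[:2])
    for a, s in zip(ants, signals):
        delays = 2 * np.linalg.norm(grid - a, axis=-1) / v
        img += np.interp(delays, t, s)
    return img ** 2

def sharpness(img):                      # focal quality: peak-to-mean ratio
    return img.max() / img.mean()

best = max(np.arange(4.0, 14.1, 0.5), key=lambda e: sharpness(image(e)))
print("estimated permittivity:", best)   # expect a value near 9
```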
Crajé, Céline; Santello, Marco; Gordon, Andrew M
2013-01-01
Anticipatory force planning during grasping is based on visual cues about the object's physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object's CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.
Hybrid Weighted Minimum Norm Method: a new LORETA-based method to solve the EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This paper proposes a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to activate synchronously; second, the source-space distribution is sparse; third, the active intensity of the sources is highly centralized. We take this prior knowledge as the prerequisite conditions for developing the EEG inverse solution, assuming no other characteristics of the solution, to realize the most common 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA, a low-resolution method that emphasizes localization, and FOCUSS, a high-resolution method that emphasizes separability. The method remains within the framework of the weighted minimum norm method. The keystone is to construct a weighting matrix that draws on the existing smoothness operator, a competition mechanism and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using information from the initial solution, and repeat this process until the last two estimates remain unchanged.
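The reweighting loop can be illustrated compactly. A FOCUSS-style sketch under stated assumptions (a random toy lead field, noiseless data); the paper's actual weighting matrix also incorporates a smoothness operator and competition mechanism, which are omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 16, 64                       # electrodes, candidate source locations
A = rng.normal(size=(m, n))         # toy lead-field matrix
x_true = np.zeros(n)
x_true[20], x_true[21] = 1.0, 0.8   # focal, high-intensity source
b = A @ x_true                      # measured scalp potentials

x = np.linalg.pinv(A) @ b           # initial (smooth) minimum-norm estimate
for _ in range(10):
    W = np.diag(np.abs(x) + 1e-12)  # weights taken from the previous solution
    x = W @ np.linalg.pinv(A @ W) @ b
print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))  # expect [20 21]
```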
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well if we incorporate this non-linearity in the formulation and solution of the problem. A general tool often used to characterize the Earth system is inversion. Traditionally, inverse problems are solved using least-squares inversion with a linearized formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal) distribution. Thus, selecting the initial model from a power-law probability distribution provides a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, parameterized by the mean, variance and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
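A sketch of drawing one such power-law initial model by spectral synthesis; the Hurst coefficient and the impedance scaling are assumed values, not the paper's well-log estimates:

```python
import numpy as np

def fractal_model(n, hurst, mean, std, seed=0):
    """1-D fractional-Gaussian-noise model with P(f) ~ f^-(2H+1)."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                                  # avoid division by zero at DC
    amp = f ** (-(2 * hurst + 1) / 2.0)          # amplitude = sqrt(power)
    phase = rng.uniform(0, 2 * np.pi, len(f))
    x = np.fft.irfft(amp * np.exp(1j * phase), n)
    return mean + std * (x - x.mean()) / x.std() # rescale to target moments

impedance = fractal_model(512, hurst=0.7, mean=6.5e6, std=4e5)
print(impedance[:5])
```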
NASA Astrophysics Data System (ADS)
Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn
EIT image reconstruction is an ill-posed problem: the spatial resolution of the estimated conductivity distribution is usually poor, and the external voltage measurements are subject to variable noise. Therefore, the EIT conductivity estimate cannot be used in raw form to correctly estimate the shape and size of complex-shaped regional anomalies. An efficient algorithm employing a shape-based estimation scheme is needed. The performance of traditional inverse algorithms used for this purpose, such as the Newton-Raphson method, is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of the differential evolution (DE) algorithm to estimate complex-shaped region boundaries, expressed as coefficients of a truncated Fourier series, using EIT. DE is a simple yet powerful population-based heuristic algorithm with the desired features to solve global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with those of the traditional modified Newton-Raphson (mNR) method.
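A toy sketch of the search itself, not an EIT forward solver: SciPy's differential evolution recovers truncated Fourier coefficients of a region boundary from noisy boundary samples, with a geometric misfit standing in for the voltage-misfit cost functional used in the paper:

```python
import numpy as np
from scipy.optimize import differential_evolution

theta = np.linspace(0, 2 * np.pi, 90, endpoint=False)
coef_true = [1.0, 0.2, -0.1]            # r(theta) = c0 + c1*cos(theta) + c2*sin(theta)
r_obs = (coef_true[0] + coef_true[1] * np.cos(theta)
         + coef_true[2] * np.sin(theta)
         + np.random.default_rng(0).normal(0, 0.01, theta.size))

def cost(c):
    r = c[0] + c[1] * np.cos(theta) + c[2] * np.sin(theta)
    return np.sum((r - r_obs) ** 2)     # stand-in for the EIT voltage misfit

res = differential_evolution(cost,
                             bounds=[(0.5, 1.5), (-0.5, 0.5), (-0.5, 0.5)],
                             seed=0)
print(res.x.round(3))                   # close to coef_true
```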
Wavelet-based hierarchical surface approximation from height fields
Sang-Mook Lee; A. Lynn Abbott; Daniel L. Schmoldt
2004-01-01
This paper presents a novel hierarchical approach to triangular mesh generation from height fields. A wavelet-based multiresolution analysis technique is used to estimate local shape information at different levels of resolution. Using predefined templates at the coarsest level, the method constructs an initial triangulation in which underlying object shapes are well...
Coldman, Andrew; Phillips, Norm
2013-07-09
There has been growing interest in the overdiagnosis of breast cancer as a result of mammography screening. We report incidence rates in British Columbia before and after the initiation of population screening and provide estimates of overdiagnosis. We obtained the numbers of breast cancer diagnoses from the BC Cancer Registry and screening histories from the Screening Mammography Program of BC for women aged 30-89 years between 1970 and 2009. We calculated age-specific rates of invasive breast cancer and ductal carcinoma in situ. We compared these rates by age, calendar period and screening participation. We obtained 2 estimates of overdiagnosis from cumulative cancer rates among women between the ages of 40 and 89 years: the first estimate compared participants with nonparticipants; the second estimate compared observed and predicted population rates. We calculated participation-based estimates of overdiagnosis to be 5.4% for invasive disease alone and 17.3% when ductal carcinoma in situ was included. The corresponding population-based estimates were -0.7% and 6.7%. Participants had higher rates of invasive cancer and ductal carcinoma in situ than nonparticipants but lower rates after screening stopped. Population incidence rates for invasive cancer increased after 1980; by 2009, they had returned to levels similar to those of the 1970s among women under 60 years of age but remained elevated among women 60-79 years old. Rates of ductal carcinoma in situ increased in all age groups. The extent of overdiagnosis of invasive cancer in our study population was modest and primarily occurred among women over the age of 60 years. However, overdiagnosis of ductal carcinoma in situ was elevated for all age groups. The estimation of overdiagnosis from observational data is complex and subject to many influences. The use of mammography screening in older women has an increased risk of overdiagnosis, which should be considered in screening decisions.
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
Forecasting seasonal outbreaks of influenza.
Shaman, Jeffrey; Karspeck, Alicia
2012-12-11
Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003-2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza.
Extended Kalman filtering for the detection of damage in linear mechanical structures
NASA Astrophysics Data System (ADS)
Liu, X.; Escamilla-Ambrosio, P. J.; Lieven, N. A. J.
2009-09-01
This paper addresses the problem of assessing the location and extent of damage in a vibrating structure by means of vibration measurements. Frequency domain identification methods (e.g. finite element model updating) have been widely used in this area, while time domain methods, such as the extended Kalman filter (EKF), are more sparsely represented. The difficulty of applying the EKF to mechanical system damage identification and localisation lies in the high computational cost and the dependence of the estimation results on the initial estimation error covariance matrix P(0), the initial values of the parameters to be estimated, and the statistics of the measurement noise R and process noise Q. To resolve these problems in the EKF, a multiple model adaptive estimator consisting of a bank of EKFs in the modal domain was designed; each filter in the bank is based on a different P(0). The algorithm was iterated by using the weighted global iteration method. A fuzzy logic model was incorporated in each filter to estimate the variance of the measurement noise R. The application of the method is illustrated by simulated and real examples.
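A scalar toy sketch of the multiple-model idea under assumed values (not the paper's modal-domain formulation): several filters run in parallel with different initial covariances P(0), each filter's weight is updated from the Gaussian likelihood of its innovation, and the combined estimate is the weighted sum:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, R = 2.0, 0.25                   # parameter to identify, measurement variance
P0_bank = [0.01, 1.0, 100.0]                # candidate initial covariances P(0)
x = np.zeros(3)
P = np.array(P0_bank, float)
w = np.ones(3) / 3                          # model weights

for _ in range(50):
    z = theta_true + rng.normal(0, R ** 0.5)
    for i in range(3):
        S = P[i] + R                        # innovation variance
        K = P[i] / S                        # Kalman gain
        innov = z - x[i]
        x[i] += K * innov
        P[i] *= (1 - K)
        w[i] *= np.exp(-0.5 * innov ** 2 / S) / np.sqrt(S)  # innovation likelihood
    w /= w.sum()
print("weights:", w.round(3), "fused estimate:", round(float(w @ x), 3))
```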
Network Model-Assisted Inference from Respondent-Driven Sampling Data
Gile, Krista J.; Handcock, Mark S.
2015-01-01
Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population. PMID:26640328
Cropotova, Janna; Tylewicz, Urszula; Cocci, Emiliano; Romani, Santina; Dalla Rosa, Marco
2016-03-01
The aim of the present study was to estimate the quality deterioration of apple fillings during storage. Moreover, the potential of a novel time-saving and non-invasive method based on fluorescence microscopy for prompt detection of non-enzymatic browning initiation in fruit fillings was investigated. Apple filling samples were obtained by mixing different quantities of fruit and stabilizing agents (inulin, pectin and gellan gum), thermally processed, and stored for 6 months. The preservation of antioxidant capacity (determined by the DPPH method) in apple fillings was indirectly correlated with the decrease in total polyphenol content, which varied from 34±22 to 56±17%, and the concomitant accumulation of 5-hydroxymethylfurfural (HMF), ranging from 3.4±0.1 to 8±1 mg/kg relative to the initial apple puree values. The mean intensity of the fluorescence emission spectra of the apple filling samples and initial apple puree was highly correlated (R² > 0.95) with the HMF content, showing the good potential of the fluorescence microscopy method to estimate non-enzymatic browning.
Cherng, Sarah T; Tam, Jamie; Christine, Paul J; Meza, Rafael
2016-11-01
Electronic cigarette (e-cigarette) use has increased rapidly in recent years. Given the unknown effects of e-cigarette use on cigarette smoking behaviors, e-cigarette regulation has become the subject of considerable controversy. In the absence of longitudinal data documenting the long-term effects of e-cigarette use on smoking behavior and population smoking outcomes, computational models can guide future empirical research and provide insights into the possible effects of e-cigarette use on smoking prevalence over time. We developed an agent-based model examining hypothetical scenarios of e-cigarette use by smoking status and e-cigarette effects on smoking initiation and smoking cessation. If e-cigarettes increase individual-level smoking cessation probabilities by 20%, the model estimates a 6% reduction in smoking prevalence by 2060 compared with baseline model (no effects) outcomes. In contrast, e-cigarette use prevalence among never smokers would have to rise dramatically from current estimates, with e-cigarettes increasing smoking initiation by more than 200% relative to baseline model estimates, to achieve a corresponding 6% increase in smoking prevalence by 2060. Based on current knowledge of the patterns of e-cigarette use by smoking status and the heavy concentration of e-cigarette use among current smokers, the simulated effects of e-cigarettes on smoking cessation generate substantially larger changes to smoking prevalence compared with their effects on smoking initiation.
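A minimal agent-based sketch of the cessation scenario described above; the initiation, cessation and prevalence rates are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, years = 100_000, 45                  # roughly 2015 -> 2060
smoker0 = rng.random(n) < 0.15          # initial smoking prevalence (assumed)
p_init, p_quit = 0.002, 0.03            # annual initiation / cessation rates (assumed)

def run(cessation_boost):
    """Simulate annual initiation and cessation; return final prevalence."""
    s = smoker0.copy()
    for _ in range(years):
        quits = s & (rng.random(n) < p_quit * cessation_boost)
        starts = ~s & (rng.random(n) < p_init)
        s = (s & ~quits) | starts
    return s.mean()

print("baseline:", run(1.0), "e-cig cessation +20%:", run(1.2))
```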
Stormwater quality modelling in combined sewers: calibration and uncertainty analysis.
Kanso, A; Chebbo, G; Tassin, B
2005-01-01
Estimating the level of uncertainty in urban stormwater quality models is vital for their utilization. This paper presents the results of applying a Markov chain Monte Carlo method based on Bayesian theory to the calibration and uncertainty analysis of a stormwater quality model commonly used in available software. The tested model uses a hydrologic/hydrodynamic scheme to estimate the accumulation, erosion and transport of pollutants on surfaces and in sewers. It was calibrated for four different initial conditions of in-sewer deposits. Calibration results showed large variability in the model's responses as a function of the initial conditions. They demonstrated that the model's predictive capacity is very low.
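As an illustration of this calibration style, here is a minimal random-walk Metropolis sketch for a toy one-parameter washoff model; the model form, noise level, prior, and synthetic observations are stand-ins for the paper's hydrologic/hydrodynamic model:

```python
# Bayesian calibration sketch: sample the posterior of a washoff-rate parameter
# with a random-walk Metropolis algorithm against synthetic observations.
import numpy as np

rng = np.random.default_rng(0)

def washoff_model(k, rainfall):
    # toy exponential washoff: load proportional to 1 - exp(-k * rainfall)
    return 100.0 * (1.0 - np.exp(-k * rainfall))

rain = np.linspace(0.5, 10, 20)
obs = washoff_model(0.3, rain) + rng.normal(0, 5, rain.size)  # synthetic data

def log_post(k):
    if k <= 0:
        return -np.inf                          # flat prior on k > 0
    resid = obs - washoff_model(k, rain)
    return -0.5 * np.sum((resid / 5.0) ** 2)    # Gaussian likelihood, sigma = 5

samples, k = [], 1.0
for _ in range(5000):
    prop = k + rng.normal(0, 0.05)              # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(k):
        k = prop                                # accept
    samples.append(k)

post = np.array(samples[1000:])                 # drop burn-in
print(f"posterior mean k = {post.mean():.3f}, 90% interval = "
      f"({np.percentile(post, 5):.3f}, {np.percentile(post, 95):.3f})")
```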
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. The estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that the initial treatment response for a small group or a single subject is reflected in the long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected because of the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
Mapping debris-flow hazard in Honolulu using a DEM
Ellen, Stephen D.; Mark, Robert K.; ,
1993-01-01
A method for mapping hazard posed by debris flows has been developed and applied to an area near Honolulu, Hawaii. The method uses studies of past debris flows to characterize sites of initiation, volume at initiation, and volume-change behavior during flow. Digital simulations of debris flows based on these characteristics are then routed through a digital elevation model (DEM) to estimate degree of hazard over the area.
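A highly simplified sketch of the routing step, assuming a synthetic tilted DEM, an invented initiation cell and starting volume, and a constant bulking factor (the study characterizes initiation sites and volume-change behavior empirically from past flows):

```python
# Route a simulated debris flow down a DEM by steepest descent, growing the
# flow volume by an assumed per-cell factor until no lower neighbor exists.
import numpy as np

rng = np.random.default_rng(42)
y, x = np.mgrid[0:20, 0:20]
dem = 100.0 - 4.0 * y + rng.normal(0, 0.5, (20, 20))  # slopes down with row index

def route_flow(dem, start, volume, growth=1.05, max_steps=100):
    r, c = start
    path = [(r, c, volume)]
    for _ in range(max_steps):
        nbrs = [(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < dem.shape[0] and 0 <= c + dc < dem.shape[1]]
        nxt = min(nbrs, key=lambda p: dem[p])   # steepest downslope neighbor
        if dem[nxt] >= dem[r, c]:
            break                               # local low point: flow stops
        volume *= growth                        # assumed bulking during flow
        r, c = nxt
        path.append((r, c, volume))
    return path

for r, c, v in route_flow(dem, start=(0, 10), volume=50.0):
    print(f"cell=({r:2d},{c:2d})  volume={v:6.1f}")
```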
Optimization of Microgrids at Military Remote Base Camps
2017-12-01
Approved for public release; distribution is unlimited. Operational Energy Office Initiatives: Quickly after establishment, the newly formed Operational Energy Office developed a list of initiatives aimed …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuejun; Tang, Qiuhong; Liu, Xingcai
Real-time monitoring and prediction of drought development several months in advance are of critical importance for drought risk adaptation and mitigation. In this paper, we present a drought monitoring and seasonal forecasting framework based on the Variable Infiltration Capacity (VIC) hydrologic model over Southwest China (SW). Satellite precipitation data are used to force the VIC model for near real-time estimates of land surface hydrologic conditions. Initialized with this satellite-aided monitoring, a climate model-based forecast (CFSv2_VIC) and an ensemble streamflow prediction (ESP)-based forecast (ESP_VIC) are both performed and evaluated through their ability to reproduce the evolution of the 2009/2010 severe drought over SW. The results show that the satellite-aided monitoring is able to provide reasonable estimates of the forecast initial conditions (ICs) in a real-time manner. Both CFSv2_VIC and ESP_VIC exhibit comparable performance against the observation-based estimates for the first month, whereas the predictive skill drops sharply beyond one month. Compared to ESP_VIC, CFSv2_VIC shows better performance, as indicated by its smaller ensemble range. This study highlights the value of this operational framework in generating near real-time ICs and giving reliable predictions one month ahead, which has great implications for drought risk assessment, preparation and relief.
Aorta modeling with the element-based zero-stress state and isogeometric discretization
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Sasaki, Takafumi
2017-02-01
Patient-specific arterial fluid-structure interaction computations, including aorta computations, require an estimation of the zero-stress state (ZSS), because the image-based arterial geometries do not come from a ZSS. We have earlier introduced a method for estimation of the element-based ZSS (EBZSS) in the context of finite element discretization of the arterial wall. The method has three main components. 1. An iterative method, which starts with a calculated initial guess, is used for computing the EBZSS such that when a given pressure load is applied, the image-based target shape is matched. 2. A method for straight-tube segments is used for computing the EBZSS so that we match the given diameter and longitudinal stretch in the target configuration and the "opening angle." 3. An element-based mapping between the artery and straight-tube is extracted from the mapping between the artery and straight-tube segments. This provides the mapping from the arterial configuration to the straight-tube configuration, and from the estimated EBZSS of the straight-tube configuration back to the arterial configuration, to be used as the initial guess for the iterative method that matches the image-based target shape. Here we present the version of the EBZSS estimation method with isogeometric wall discretization. With isogeometric discretization, we can obtain the element-based mapping directly, instead of extracting it from the mapping between the artery and straight-tube segments, because everything we need for the element-based mapping, including the curvatures, can be obtained within an element. With NURBS basis functions, we may be able to achieve a level of accuracy similar to that of linear basis functions, but with larger and far fewer elements. Higher-order NURBS basis functions allow representation of more complex shapes within an element. To show how the new EBZSS estimation method performs, we first present 2D test computations with straight-tube configurations. Then we show how the method can be used in a 3D computation where the target geometry comes from a medical image of a human aorta.
A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.
Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun
2015-08-31
Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
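The refinement stage described above reduces to weighted nonlinear least squares. A hedged sketch with synthetic map points, invented 2x2 feature covariances, and assumed camera intrinsics; the zero initial pose here merely stands in for a PnP estimate:

```python
# Refine a camera pose by minimizing Mahalanobis-weighted reprojection errors
# against map features that carry image-space covariances.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(3)
K = np.array([[500., 0, 320.], [0, 500., 240.], [0, 0, 1.]])  # assumed intrinsics
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], (30, 3))          # synthetic map points
covs = np.array([np.diag(rng.uniform(0.5, 2.0, 2)) for _ in range(30)])

def project(pose, pts):
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = pts @ R.T + pose[3:]
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

true_pose = np.array([0.05, -0.02, 0.01, 0.1, -0.1, 0.2])
obs = project(true_pose, pts3d) + rng.normal(0, 0.5, (30, 2))  # noisy observations

def residuals(pose):
    err = project(pose, pts3d) - obs
    # whiten each 2D error by the inverse Cholesky factor -> Mahalanobis norm
    return np.concatenate([np.linalg.solve(np.linalg.cholesky(S), e)
                           for e, S in zip(err, covs)])

init = np.zeros(6)                         # stand-in for the PnP initial pose
refined = least_squares(residuals, init).x
print("refined pose:", np.round(refined, 3))
```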
Wang, Hexiang; Barton, Justin E.; Schuster, Eugenio
2015-09-01
The accuracy of the internal states of a tokamak, which usually cannot be measured directly, is of crucial importance for feedback control of the plasma dynamics. A first-principles-driven plasma response model can provide an estimation of the internal states given the boundary conditions on the magnetic axis and at the plasma boundary. However, the estimation would depend strongly on initial conditions, which may not always be known, on disturbances, and on non-modeled dynamics. In this work, a closed-loop state observer for the poloidal magnetic flux is proposed based on a very limited set of real-time measurements, following an Extended Kalman Filtering (EKF) approach. Comparisons between estimated and measured magnetic flux profiles are carried out for several discharges in the DIII-D tokamak. The experimental results illustrate the capability of the proposed observer in dealing with incorrect initial conditions and measurement noise.
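The closed-loop observer idea (model-based prediction corrected by sparse, noisy measurements, recovering from a wrong initial condition) can be shown on a scalar toy system. This is generic EKF mechanics only, with invented dynamics, not the first-principles flux model:

```python
# Scalar extended Kalman filter: predict with a nonlinear model, correct with a
# noisy nonlinear measurement, starting from a deliberately wrong initial state.
import numpy as np

rng = np.random.default_rng(7)

f = lambda x: x + 0.1 * (1.0 - x**2)      # toy dynamics x_{k+1} = f(x_k)
F = lambda x: 1.0 - 0.2 * x               # Jacobian df/dx
h = lambda x: x**2                        # nonlinear measurement model
H = lambda x: 2.0 * x                     # Jacobian dh/dx
Q, R = 1e-4, 0.04                         # process / measurement noise variances

x_true, x_est, P = 0.8, 0.1, 1.0          # wrong initial estimate, large P
for _ in range(50):
    x_true = f(x_true) + rng.normal(0, Q**0.5)
    z = h(x_true) + rng.normal(0, R**0.5)
    # predict
    x_pred = f(x_est)
    P_pred = F(x_est) * P * F(x_est) + Q
    # update
    S = H(x_pred) * P_pred * H(x_pred) + R
    K = P_pred * H(x_pred) / S
    x_est = x_pred + K * (z - h(x_pred))
    P = (1.0 - K * H(x_pred)) * P_pred

print(f"true={x_true:.3f}  estimated={x_est:.3f}")
```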
Yang, Xiaoxia; Chen, Shili; Jin, Shijiu; Chang, Wenshuang
2013-09-13
Stress corrosion cracks (SCC) in low-pressure steam turbine discs are serious hidden dangers to production safety in power plants, and knowing the orientation and depth of the initial cracks is essential for evaluating the crack growth rate, propagation direction and working life of the turbine disc. In this paper, a method based on a phased array ultrasonic transducer and an artificial neural network (ANN) is proposed to estimate both the depth and orientation of initial cracks in turbine discs. Echo signals from cracks with different depths and orientations were collected by a phased array ultrasonic transducer, and the feature vectors were extracted by wavelet packet, fractal technology and peak amplitude methods. The radial basis function (RBF) neural network was investigated and used in this application. The final results demonstrated that the method presented was efficient in crack estimation tasks.
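The regression stage of such a pipeline can be sketched with a plain Gaussian-kernel RBF network whose output weights are solved by least squares. The features and (depth, orientation) labels below are synthetic, not echo-signal features:

```python
# Minimal RBF-network regression: map feature vectors to (depth, orientation).
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(80, 6))                            # stand-in feature vectors
true_W = rng.normal(size=(6, 2))
Y = np.tanh(X @ true_W) + rng.normal(0, 0.05, (80, 2))  # synthetic targets

def rbf_design(X, centers, sigma=1.5):
    # Gaussian kernel matrix between samples and centers
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

centers = X[::4]                                        # every 4th sample as center
Phi = rbf_design(X, centers)
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)             # output-layer weights

pred = rbf_design(X[:5], centers) @ W
print("predicted (depth, orientation):\n", np.round(pred, 3))
print("actual:\n", np.round(Y[:5], 3))
```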
Yang, Xiaoxia; Chen, Shili; Jin, Shijiu; Chang, Wenshuang
2013-01-01
Stress corrosion cracks (SCC) in low-pressure steam turbine discs are serious hidden dangers to production safety in the power plants, and knowing the orientation and depth of the initial cracks is essential for the evaluation of the crack growth rate, propagation direction and working life of the turbine disc. In this paper, a method based on phased array ultrasonic transducer and artificial neural network (ANN), is proposed to estimate both the depth and orientation of initial cracks in the turbine discs. Echo signals from cracks with different depths and orientations were collected by a phased array ultrasonic transducer, and the feature vectors were extracted by wavelet packet, fractal technology and peak amplitude methods. The radial basis function (RBF) neural network was investigated and used in this application. The final results demonstrated that the method presented was efficient in crack estimation tasks. PMID:24064602
3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models
Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.
2015-01-01
3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of the 3D fluoroscopic images by comparing them to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be captured with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722
Wang, Jianing; Liu, Yuan; Noble, Jack H; Dawant, Benoit M
2017-10-01
Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. We develop a learning-based method to automatically find a set of robust landmarks in three-dimensional MR image volumes of the head. These landmarks are then used to compute a thin plate spline-based initialization transformation. The process involves two steps: (1) identifying a set of landmarks that can be reliably localized in the images and (2) selecting among them the subset that leads to a good initial transformation. To validate our method, we use it to initialize five well-established deformable registration algorithms that are subsequently used to register an atlas to MR images of the head. We compare our proposed initialization method with a standard approach that involves estimating an affine transformation with an intensity-based approach. We show that for all five registration algorithms the final registration results are statistically better when they are initialized with the method that we propose than when a standard approach is used. The technique that we propose is generic and could be used to initialize nonrigid registration algorithms for other applications.
Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor
NASA Technical Reports Server (NTRS)
Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)
1980-01-01
The problem of determining stratum variances needed in achieving an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily when a conservative value for the field size and crop statistics at the small political subdivision level are used, with the estimated stratum variances comparing well to those obtained using the LANDSAT data.
NASA Astrophysics Data System (ADS)
Ding, Wenwu; Teferle, Norman; Kaźmierski, Kamil; Laurichesse, Denis; Yuan, Yunbin
2017-04-01
Observations from multiple Global Navigation Satellite Systems (GNSS) can improve the performance of real-time (RT) GNSS meteorology, in particular of the Zenith Total Delay (ZTD) estimates. RT ZTD estimates, in combination with derived precipitable water vapour estimates, can be used for weather now-casting and the tracking of severe weather events. While a number of published studies have already highlighted this positive development, in this study we describe an operational RT system for extracting ZTD using a modified version of the PPP-wizard (with PPP denoting Precise Point Positioning). Multi-GNSS observation streams, including GPS, GLONASS and Galileo, are processed using a RT PPP strategy based on RT satellite orbit and clock products from the Centre National d'Etudes Spatiales (CNES). A continuous experiment of 30 days was conducted, in which the RT observation streams of 20 globally distributed stations were processed. The initialization time and accuracy of the RT troposphere products using single- and/or multi-system observations were evaluated, as was the effect of RT PPP ambiguity resolution. The results reveal that RT troposphere products based on single-system observations can fulfill the requirements of meteorological applications in now-casting systems. The GPS-only solution is better than the GLONASS-only solution in both initialization and accuracy. While ZTD performance can be improved by applying RT PPP ambiguity resolution, the inclusion of observations from multiple GNSS has a more profound effect. Specifically, ambiguity resolution is more effective in improving the accuracy, whereas the initialization process is better accelerated by multi-GNSS observations. Combining all systems, RT troposphere products with an average accuracy of about 8 mm in ZTD were achieved after an initialization period of approximately 9 minutes, which supports the application of multi-GNSS observations and ambiguity resolution for RT meteorological applications.
Atkins, Michael; Coutinho, Anna D; Nunna, Sasikiran; Gupte-Singh, Komal; Eaddy, Michael
2018-02-01
The utilization of healthcare services and costs among patients with cancer is often estimated by the phase of care: initial, interim, or terminal. Although their durations are often set arbitrarily, we sought to establish data-driven phases of care using joinpoint regression in an advanced melanoma population as a case example. A retrospective claims database study was conducted to assess the costs of advanced melanoma from distant metastasis diagnosis to death during January 2010-September 2014. Joinpoint regression analysis was applied to identify the best-fitting points where statistically significant changes in the trend of average monthly costs occurred. To identify the initial phase, average monthly costs were modeled from metastasis diagnosis to death; for the terminal phase, they were modeled backward from death to metastasis diagnosis. Points of monthly cost trend inflection denoted ending and starting points, and the months between represented the interim phase. A total of 1,671 patients with advanced melanoma who died met the eligibility criteria. The initial phase was identified as the 5-month period starting with diagnosis of metastasis, after which there was a sharp, significant decline in the monthly cost trend (monthly percent change [MPC] = -13.0%; 95% CI = -16.9% to -8.8%). The terminal phase was defined as the 5-month period before death (MPC = -14.0%; 95% CI = -17.6% to -10.2%). The claims-based algorithm may underestimate patients due to misclassifications, and may overestimate terminal phase costs because hospital and emergency visits were used as a death proxy. Also, recently approved therapies were not included, which may underestimate advanced melanoma costs. In this advanced melanoma population, the optimal duration of the initial and terminal phases of care was 5 months immediately after diagnosis of metastasis and before death, respectively. Joinpoint regression can be used to provide data-supported phase of cancer care durations, but should be combined with clinical judgement.
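The single-joinpoint case reduces to searching over candidate break months with a continuous piecewise-linear OLS fit. A sketch on synthetic monthly costs; production joinpoint software additionally runs permutation tests for how many segments are statistically justified:

```python
# Find the month where a two-segment linear trend best fits monthly costs.
import numpy as np

rng = np.random.default_rng(11)
months = np.arange(1, 25)
t = months.astype(float)
# synthetic costs: steep decline for ~5 months after diagnosis, then flat-ish
costs = np.where(months <= 5, 60000 - 8000 * months, 15000) \
        + rng.normal(0, 1500, months.size)

def two_segment_sse(bp):
    # continuous piecewise-linear basis: [1, t, max(t - bp, 0)]
    A = np.column_stack([np.ones_like(t), t, np.maximum(t - bp, 0)])
    coef, *_ = np.linalg.lstsq(A, costs, rcond=None)
    resid = costs - A @ coef
    return (resid ** 2).sum()

candidates = months[2:-2]               # keep a few points in each segment
best = min(candidates, key=two_segment_sse)
print(f"estimated joinpoint at month {best}")
```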
Cost effectiveness of the Oregon quitline "free patch initiative".
Fellows, Jeffrey L; Bush, Terry; McAfee, Tim; Dickerson, John
2007-12-01
We estimated the cost effectiveness of the Oregon tobacco quitline's "free patch initiative" compared to the pre-initiative programme. Using quitline utilisation and cost data from the state, intervention providers and patients, we estimated annual programme use and costs for media promotions and intervention services. We also estimated annual quitline registration calls and the number of quitters and life years saved for the pre-initiative and free patch initiative programmes. Service utilisation and 30-day abstinence at six months were obtained from 959 quitline callers. We compared the cost effectiveness of the free patch initiative (media and intervention costs) to the pre-initiative service offered to insured and uninsured callers. We conducted sensitivity analyses on key programme costs and outcomes by estimating a best case and worst case scenario for each intervention strategy. Compared to the pre-initiative programme, the free patch initiative doubled registered calls, increased quitting fourfold and reduced total costs per quit by $2688. We estimated annual paid media costs were $215 per registered tobacco user for the pre-initiative programme and less than $4 per caller during the free patch initiative. Compared to the pre-initiative programme, incremental quitline promotion and intervention costs for the free patch initiative were $86 (range $22-$353) per life year saved. Compared to the pre-initiative programme, the free patch initiative was a highly cost effective strategy for increasing quitting in the population.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate parameter sensitivities.
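The differencing idea itself is simple to illustrate: re-solve the optimization at perturbed parameter values and form a central difference. This sketch uses full re-solves of a toy problem, whereas the paper's method works from RQP iterates and a BFS Hessian estimate to avoid that cost:

```python
# Central-difference estimate of the sensitivity of an optimum x*(p) to a
# problem parameter p, via repeated solves of a toy unconstrained problem.
import numpy as np
from scipy.optimize import minimize

def solve(p):
    # toy problem: minimize (x0 - p)^2 + (x1 - 2p)^2 + x0*x1
    obj = lambda x: (x[0] - p) ** 2 + (x[1] - 2 * p) ** 2 + x[0] * x[1]
    return minimize(obj, x0=np.zeros(2)).x

p, dp = 1.0, 1e-4
sens = (solve(p + dp) - solve(p - dp)) / (2 * dp)   # dx*/dp
print("dx*/dp ≈", np.round(sens, 4))
```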
Parameter estimation in plasmonic QED
NASA Astrophysics Data System (ADS)
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs slows the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, and therefore delays its vanishing. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail; in particular, we show that using a two-qubit probe at any given time considerably enhances the precision of estimation in comparison with one-qubit estimation.
Adkin, A; Brouwer, A; Downs, S H; Kelly, L
2016-01-01
The adoption of bovine tuberculosis (bTB) risk-based trading (RBT) schemes has the potential to reduce the risk of bTB spread. However, any scheme will have cost implications that need to be balanced against its likely success in reducing bTB. This paper describes the first stochastic quantitative model assessing the impact of implementing a cattle risk-based trading scheme, to inform policy makers and contribute to cost-benefit analyses. A risk assessment for England and Wales was developed to estimate the number of infected cattle traded, using historic movement data recorded between July 2010 and June 2011. Three scenarios were implemented: cattle traded with no RBT scheme in place, voluntary provision of the score, and a compulsory, statutory scheme applying a bTB risk score to each farm. For each scenario, changes in trade were estimated due to provision of the risk score to potential purchasers. An estimated mean of 3981 bTB-infected animals were sold to purchasers with no RBT scheme in place in one year; with 90% confidence the true value was between 2775 and 5288. This result depends on the estimated between-herd prevalence used in the risk assessment, which is uncertain. With the voluntary provision of the risk score by farmers, on average 17% of movements were affected (the purchaser did not wish to buy once the risk score was available), with an initial reduction of 23% in infected animals being purchased. The compulsory provision of the risk score in a statutory scheme resulted in an estimated mean change to 26% of movements, with an initial reduction of 37% in infected animals being purchased, increasing to a 53% reduction in infected movements from higher-risk sellers (scores 4 and 5). The estimated mean reduction in infected animals being purchased could be improved to 45% given a 10% reduction in risky purchase behaviour by farmers, which may be achieved through education programmes, or to an estimated mean of 49% if a rule was implemented preventing farmers from purchasing animals of higher risk than their own herd. Given the voluntary trials of a trading scheme currently taking place, recommendations for future work include monitoring initial uptake and changes in the purchase patterns of farmers. Such data could be used to update the risk assessment to reduce uncertainty associated with model estimates. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
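The structure of such a stochastic assessment can be sketched as a Monte Carlo over movements, with infection probability tied to a seller risk score and an assumed deterrence effect once scores are visible. All probabilities below are invented placeholders, not the model's calibrated inputs:

```python
# Count infected animals purchased with and without a risk-based trading
# scheme that deters a share of purchases from higher-scoring sellers.
import numpy as np

rng = np.random.default_rng(2015)
n_moves = 200_000
score = rng.integers(1, 6, n_moves)        # seller risk score 1 (low) .. 5 (high)
p_infected = 0.002 * score                 # assumed infection prob. by score
infected = rng.random(n_moves) < p_infected

def infected_purchases(deterrence_by_score):
    # deterrence: probability a purchase is abandoned once the score is seen
    deter = np.array([deterrence_by_score[s] for s in score])
    proceeds = rng.random(n_moves) >= deter
    return int((infected & proceeds).sum())

no_rbt = infected_purchases({1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0})
with_rbt = infected_purchases({1: 0.0, 2: 0.05, 3: 0.15, 4: 0.35, 5: 0.5})
print(f"infected purchases: no scheme={no_rbt}, with RBT={with_rbt}, "
      f"reduction={1 - with_rbt / no_rbt:.0%}")
```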
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Anastasio, Mark A.
2017-12-01
The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.
Exemplar-based human action pose correction.
Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen
2014-07-01
The launch of the Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
Initial Navigation Alignment of Optical Instruments on GOES-R
NASA Astrophysics Data System (ADS)
Isaacson, P.; DeLuccia, F.; Reth, A. D.; Igli, D. A.; Carter, D.
2016-12-01
The GOES-R satellite is the first in NOAA's next-generation series of geostationary weather satellites. In addition to a number of space weather sensors, it will carry two principal optical earth-observing instruments, the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM). During launch, currently scheduled for November of 2016, the alignment of these optical instruments is anticipated to shift from that measured during pre-launch characterization. While both instruments have image navigation and registration (INR) processing algorithms to enable automated geolocation of the collected data, the launch-derived misalignment may be too large for these approaches to function without an initial adjustment to calibration parameters. The parameters that may require adjustment are for Line of Sight Motion Compensation (LMC), and the adjustments will be estimated on orbit during the post-launch test (PLT) phase. We have developed approaches to estimate the initial alignment errors for both ABI and GLM image products. Our approaches involve comparison of ABI and GLM images collected during PLT to a set of reference ("truth") images using custom image processing tools and other software (the INR Performance Assessment Tool Set, or "IPATS") being developed for other INR assessments of ABI and GLM data. IPATS is based on image correlation approaches to determine offsets between input and reference images, and these offsets are the fundamental input to our estimate of the initial alignment errors. Initial testing of our alignment algorithms on proxy datasets lends high confidence that their application will determine the initial alignment errors to within sufficient accuracy to enable the operational INR processing approaches to proceed in a nominal fashion. We will report on the algorithms, implementation approach, and status of these initial alignment tools being developed for the GOES-R ABI and GLM instruments.
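The core measurement behind such an assessment is an image-to-reference offset estimate. A generic phase-correlation sketch; IPATS itself is custom software, and this shows only the underlying image-correlation idea on synthetic data:

```python
# Estimate the (row, col) shift between an image and a reference "truth" image
# by phase correlation: normalize the cross-power spectrum, find the peak.
import numpy as np

rng = np.random.default_rng(9)
ref = rng.random((128, 128))
shifted = np.roll(ref, shift=(5, -3), axis=(0, 1))    # known offset to recover

def phase_correlate(a, b):
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                            # keep phase only
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap offsets larger than half the image size to negative values
    return [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]

print("estimated offset:", phase_correlate(shifted, ref))  # -> [5, -3]
```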
Ding, Xiaorong; Zhang, Yuanting; Tsang, Hon Ki
2016-02-01
Continuous blood pressure (BP) measurement without a cuff is advantageous for the early detection and prevention of hypertension. The pulse transit time (PTT) method has proven to be promising for continuous cuffless BP measurement. However, the problem of accuracy is one of the most challenging aspects before the large-scale clinical application of this method. Since PTT-based BP estimation relies primarily on the relationship between PTT and BP under certain assumptions, estimation accuracy will be affected by cardiovascular disorders that impair this relationship and by the calibration frequency, which may violate these assumptions. This study sought to examine the impact of heart disease and the calibration interval on the accuracy of PTT-based BP estimation. The accuracy of a PTT-BP algorithm was investigated in 37 healthy subjects and 48 patients with heart disease at different calibration intervals, namely 15 min, 2 weeks, and 1 month after initial calibration. The results showed that the overall accuracy of systolic BP estimation was significantly lower in subjects with heart disease than in healthy subjects, but diastolic BP estimation was more accurate in patients than in healthy subjects. The accuracy of systolic and diastolic BP estimation becomes less reliable with longer calibration intervals. These findings demonstrate that both heart disease and the calibration interval can influence the accuracy of PTT-based BP estimation and should be taken into consideration to improve estimation accuracy.
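A common form of PTT-based calibration (not necessarily the algorithm evaluated in this study) fits subject-specific coefficients at calibration time and reuses them for later readings; accuracy then degrades as the calibration ages, which is what the study quantifies. A sketch with illustrative numbers and an assumed BP = a·ln(PTT) + b model form:

```python
# Fit subject-specific coefficients of SBP = a*ln(PTT) + b from paired cuff
# readings, then estimate SBP from a later PTT measurement.
import numpy as np

ptt_cal = np.array([180.0, 200.0, 220.0])   # calibration PTT readings (ms)
sbp_cal = np.array([135.0, 124.0, 115.0])   # paired cuff SBP readings (mmHg)

A = np.column_stack([np.log(ptt_cal), np.ones_like(ptt_cal)])
(a, b), *_ = np.linalg.lstsq(A, sbp_cal, rcond=None)

ptt_new = 210.0                             # later PTT measurement
sbp_est = a * np.log(ptt_new) + b
print(f"a={a:.1f}, b={b:.1f}; estimated SBP at PTT={ptt_new:.0f} ms: "
      f"{sbp_est:.1f} mmHg")
```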
New Theory for Tsunami Propagation and Estimation of Tsunami Source Parameters
NASA Astrophysics Data System (ADS)
Mindlin, I. M.
2007-12-01
In numerical studies based on the shallow water equations for tsunami propagation, vertical accelerations and velocities within the sea water are neglected, so a tsunami is usually supposed to be produced by an initial free surface displacement in the initially still sea. In the present work, a new theory for tsunami propagation across the deep sea is discussed that accounts for the vertical accelerations and velocities. The theory is based on the solutions for the water surface displacement obtained in [Mindlin I.M. Integrodifferential equations in dynamics of a heavy layered liquid. Moscow: Nauka*Fizmatlit, 1996 (Russian)]. The solutions are valid when the horizontal dimensions of the initially disturbed area of the sea surface are much larger than the vertical displacement of the surface, which applies to earthquake tsunamis. It is shown that any tsunami is a combination of specific basic waves found analytically (not a superposition: the waves are nonlinear), and consequently the tsunami source (i.e., the initially disturbed body of water) can be described by the countable set of parameters involved in the combination. Thus the problem of theoretical reconstruction of a tsunami source is reduced to the problem of estimating the parameters. The tsunami source can be modelled approximately with a finite number of the parameters. A two-parameter model is discussed thoroughly. A method is developed for estimating the model's parameters using the arrival times of the tsunami at certain locations, the maximum wave heights obtained from tide gauge records at those locations, and the distances between the earthquake's epicentre and each of the locations. In order to evaluate the practical use of the theory, four tsunamis of different magnitudes that occurred in Japan are considered. For each of the tsunamis, the tsunami energy (E below), the duration of the tsunami source formation T, the maximum water elevation in the wave originating area H, the mean radius of the area R, and the average magnitude of the sea surface displacement at the margin of the wave originating area h are estimated using tide gauge records. The results are compared (and, in the author's opinion, are in line) with estimates known in the literature. Compared to the methods employed in the literature, there is no need to use bathymetry (and, consequently, refraction diagrams) for the estimations. The present paper follows closely earlier works [Mindlin I.M., 1996; Mindlin I.M. J. Appl. Math. Phys. (ZAMP), 2004, vol. 55, pp. 781-799] and adds to their theoretical results. Example. The Hiuganada earthquake of 1968, April 1, 9h 42m JST. A tsunami of moderate size arrived at the coasts of the south-western part of Shikoku and the eastern part of Kyushu, Japan. The tsunami parameters listed above are estimated with the theory being discussed for two models of tsunami generation: (a) by an initial free surface displacement (the case for numerical studies): E=1.91·10^12 J, R=22 km, h=17.2 cm; and (b) by a sudden change in the velocity field of the initially still water: E=8.78·10^12 J, R=20.4 km, h=9.2 cm. These values are in line with known estimates [Soloviev S.L., Go Ch.N. Catalogue of tsunami in the West of Pacific Ocean. Moscow, 1974]: E=1.3·10^13 J (attributed to Hatori), E=(1.4-2.2)·10^12 J (attributed to Aida), R=21.2 km, h=20 cm [Hatory T., Bull. Earthq. Res. Inst., Tokyo Univ., 1969, vol. 47, pp. 55-63].
Also, estimates are obtained for values that could not be found based on shallow water theory: (a) H=3.43 m and (b) H=1.38 m, T=16.4 s.
Cox, Louis Anthony Tony
2006-12-01
This article introduces an approach to estimating the uncertain potential effects on lung cancer risk of removing a particular constituent, cadmium (Cd), from cigarette smoke, given the useful but incomplete scientific information available about its modes of action. The approach considers normal cell proliferation; DNA repair inhibition in normal cells affected by initiating events; proliferation, promotion, and progression of initiated cells; and death or sparing of initiated and malignant cells as they are further transformed to become fully tumorigenic. Rather than estimating unmeasured model parameters by curve fitting to epidemiological or animal experimental tumor data, we attempt rough estimates of parameters based on their biological interpretations and comparison to corresponding genetic polymorphism data. The resulting parameter estimates are admittedly uncertain and approximate, but they suggest a portfolio approach to estimating impacts of removing Cd that gives usefully robust conclusions. This approach views Cd as creating a portfolio of uncertain health impacts that can be expressed as biologically independent relative risk factors having clear mechanistic interpretations. Because Cd can act through many distinct biological mechanisms, it appears likely (subjective probability greater than 40%) that removing Cd from cigarette smoke would reduce smoker risks of lung cancer by at least 10%, although it is possible (consistent with what is known) that the true effect could be much larger or smaller. Conservative estimates and assumptions made in this calculation suggest that the true impact could be greater for some smokers. This conclusion appears to be robust to many scientific uncertainties about Cd and smoking effects.
Simultaneous head tissue conductivity and EEG source location estimation.
Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott
2016-01-01
Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.
Willis, Henry H; LaTourrette, Tom
2008-04-01
This article presents a framework for using probabilistic terrorism risk modeling in regulatory analysis. We demonstrate the framework with an example application involving a regulation under consideration, the Western Hemisphere Travel Initiative for the Land Environment (WHTI-L). First, we estimate annualized loss from terrorist attacks with the Risk Management Solutions (RMS) Probabilistic Terrorism Model. We then estimate the critical risk reduction, which is the risk-reducing effectiveness of WHTI-L needed for its benefit, in terms of reduced terrorism loss in the United States, to exceed its cost. Our analysis indicates that the critical risk reduction depends strongly not only on uncertainties in the terrorism risk level, but also on uncertainty in the cost of regulation and how casualties are monetized. For a terrorism risk level based on the RMS standard risk estimate, the baseline regulatory cost estimate for WHTI-L, and a range of casualty cost estimates based on the willingness-to-pay approach, our estimate for the expected annualized loss from terrorism ranges from $2.7 billion to $5.2 billion. For this range in annualized loss, the critical risk reduction for WHTI-L ranges from 7% to 13%. Basing results on a lower risk level that results in halving the annualized terrorism loss would double the critical risk reduction (14-26%), and basing the results on a higher risk level that results in a doubling of the annualized terrorism loss would cut the critical risk reduction in half (3.5-6.6%). Ideally, decisions about terrorism security regulations and policies would be informed by true benefit-cost analyses in which the estimated benefits are compared to costs. Such analyses for terrorism security efforts face substantial impediments stemming from the great uncertainty in the terrorist threat and the very low recurrence interval for large attacks. Several approaches can be used to estimate how a terrorism security program or regulation reduces the distribution of risks it is intended to manage. But continued research to develop additional tools and data is necessary to support application of these approaches. These include refinement of models and simulations, engagement of subject matter experts, implementation of program evaluation, and estimating the costs of casualties from terrorism events.
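The break-even computation reduces to simple arithmetic: the critical risk reduction is the annualized regulatory cost divided by the annualized terrorism loss. A sketch, with the cost figure an assumed placeholder chosen only to be consistent with the reported 7-13% range:

```python
# Critical risk reduction = annualized regulatory cost / annualized loss.
annual_cost = 0.36e9                     # assumed annualized cost of WHTI-L ($)
for annual_loss in (2.7e9, 5.2e9):       # reported range of annualized loss
    critical_reduction = annual_cost / annual_loss
    print(f"loss=${annual_loss / 1e9:.1f}B -> "
          f"critical risk reduction = {critical_reduction:.1%}")
```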
Predicting future protection of respirator users: Statistical approaches and practical implications.
Hu, Chengcheng; Harber, Philip; Su, Jing
2016-01-01
The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon the joint distribution of multiple fit factor measurements over time, obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test that provides reasonable likelihood that a worker will be adequately protected in the future, and to optimizing repeat fit factor test intervals individually for each user for cost-effective testing.
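The conditional-prediction step can be sketched with a joint normal model over repeated log fit factors, built from worker, day, and residual variance components. All variance components, observed values, and the pass threshold below are illustrative assumptions, not the study's fitted values:

```python
# Predict a future log10 fit factor given two initial same-day tests, using
# the conditional-normal formula for a jointly normal vector.
import numpy as np
from scipy.stats import norm

mu = 2.2                                 # mean log10 fit factor (assumed)
s_w, s_d, s_e = 0.30, 0.15, 0.10         # worker, day, residual SDs (assumed)

v = s_w**2 + s_d**2 + s_e**2             # total variance of one test
same_day = s_w**2 + s_d**2               # covariance of two tests on one day
cov = np.array([[v,        same_day, s_w**2],
                [same_day, v,        s_w**2],
                [s_w**2,   s_w**2,   v     ]])   # (test1, test2, future test)

y_init = np.array([2.5, 2.4])            # observed initial log10 fit factors
S11, S12 = cov[:2, :2], cov[:2, 2]
cond_mean = mu + S12 @ np.linalg.solve(S11, y_init - mu)
cond_var = cov[2, 2] - S12 @ np.linalg.solve(S11, S12)

# probability the future fit factor exceeds an assumed pass level of 100
p_pass = 1 - norm.cdf(np.log10(100), cond_mean, np.sqrt(cond_var))
print(f"future log10 FF ~ N({cond_mean:.2f}, {cond_var:.3f}); "
      f"P(FF >= 100) = {p_pass:.2f}")
```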
Wang, Xin; Wu, Linhui; Yi, Xi; Zhang, Yanqi; Zhang, Limin; Zhao, Huijuan; Gao, Feng
2015-01-01
Due to both the physiological and morphological differences in vascularization between healthy and diseased tissues, pharmacokinetic diffuse fluorescence tomography (DFT) can provide contrast-enhanced and comprehensive information for tumor diagnosis and staging. In this regime, the extended Kalman filtering (EKF) based method shows numerous advantages, including accurate modeling, online estimation of multiple parameters, and universal applicability to any optical fluorophore. Nevertheless, the performance of the conventional EKF hinges heavily on exact prior knowledge of the initial values, which is generally inaccessible. To address this issue, an adaptive-EKF scheme is proposed based on a two-compartment model, utilizing a variable forgetting-factor to compensate for inaccuracy in the initial states and to emphasize the effect of the current data. It is demonstrated using two-dimensional simulative investigations on a circular domain that the proposed adaptive-EKF obtains better estimates of the pharmacokinetic rates than the conventional EKF and the enhanced EKF in terms of quantitativeness, noise robustness, and initialization independence. Further three-dimensional numerical experiments on a digital mouse model validate the efficacy of the method as applied to realistic biological systems.
NASA Astrophysics Data System (ADS)
Ma, Hongliang; Xu, Shijie
2014-09-01
This paper presents an improved real-time sequential filter (IRTSF) for magnetometer-only attitude and angular velocity estimation of a spacecraft while its attitude is changing (including fast and large angular attitude maneuvers, rapid spin, or uncontrolled tumble). In this new magnetometer-only attitude determination technique, both the attitude dynamics equation and the first time derivative of the measured magnetic field vector are introduced directly into the filtering equations, building on the traditional single-vector attitude determination method for gyroless spacecraft and the real-time sequential filter (RTSF) for magnetometer-only attitude estimation. The process noise model of the IRTSF includes the attitude kinematics and dynamics equations, and its measurement model consists of the magnetic field vector and its first time derivative. The observability of the IRTSF for spacecraft with small or large angular velocity changes is evaluated by an improved Lie differentiation, and the degrees of observability of the IRTSF for different initial estimation errors are analyzed using the condition number and a solved covariance matrix. Numerical simulation results indicate that: (1) the attitude and angular velocity of the spacecraft can be estimated with sufficient accuracy using the IRTSF from magnetometer-only data; (2) compared with the RTSF, the estimation accuracies and observability degrees of attitude and angular velocity using the IRTSF from magnetometer-only data are both improved; and (3) universality: the IRTSF for magnetometer-only attitude and angular velocity estimation is observable for any initial state estimation error vector.
Cowell, Alexander J; Zarkin, Gary A; Wedehase, Brendan J; Lerch, Jennifer; Walters, Scott T; Taxman, Faye S
2018-04-01
Although substance use is common among probationers in the United States, treatment initiation remains an ongoing problem. Among the explanations for low treatment initiation are that probationers are insufficiently motivated to seek treatment, and that probation staff have insufficient training and resources to use evidence-based strategies such as motivational interviewing. A web-based intervention based on motivational enhancement principles may address some of the challenges of initiating treatment but has not been tested to date in probation settings. The current study evaluated the cost-effectiveness of a computerized intervention, Motivational Assessment Program to Initiate Treatment (MAPIT), relative to face-to-face Motivational Interviewing (MI) and supervision as usual (SAU), delivered at the outset of probation. The intervention took place in probation departments in two U.S. cities. The baseline sample comprised 316 participants (MAPIT = 104, MI = 103, and SAU = 109), 90% (n = 285) of whom completed the 6-month follow-up. Costs were estimated from study records and time logs kept by interventionists. The effectiveness outcome was self-reported initiation into any treatment (formal or informal) within 2 and 6 months of the baseline interview. The cost-effectiveness analysis involved assessing dominance and computing incremental cost-effectiveness ratios and cost-effectiveness acceptability curves. Implementation costs were used in the base case of the cost-effectiveness analysis, which excludes both a hypothetical license fee to recoup development costs and startup costs. An intent-to-treat approach was taken. MAPIT cost $79.37 per participant, which was ~$55 lower than the MI cost of $134.27 per participant. Appointment reminders comprised a large proportion of the cost of the MAPIT and MI intervention arms. In the base case, relative to SAU, MAPIT cost $6.70 per percentage point increase in the probability of initiating treatment. If a decision-maker is willing to pay $15 or more to improve the probability of initiating treatment by 1%, estimates suggest she can be 70% confident that MAPIT is good value relative to SAU at the 2-month follow-up and 90% confident that MAPIT is good value at the 6-month follow-up. Web-based MAPIT may be good value compared to in-person delivered alternatives. This conclusion is qualified because the results are not robust to narrowing the outcome to initiating formal treatment only. Further work should explore ways to improve access to efficacious treatment in probation settings. Copyright © 2018 Elsevier Inc. All rights reserved.
Costs of cancer care in children and adolescents in Ontario, Canada.
de Oliveira, Claire; Bremner, Karen E; Liu, Ning; Greenberg, Mark L; Nathan, Paul C; McBride, Mary L; Krahn, Murray D
2017-11-01
Cancer in children and adolescents presents unique issues regarding treatment and survivorship, but few studies have measured economic burden. We estimated health care costs by phase of cancer care, from the public payer perspective, in population-based cohorts. Children newly diagnosed at ages 0 days-14.9 years and adolescents newly diagnosed at 15-19.9 years, from January 1, 1995 to June 30, 2010, were identified from Ontario cancer registries, and each matched to three noncancer controls. Data were linked with administrative records describing resource use for cancer and other health care. Total and net (patients minus controls) resource-specific costs ($CAD2012) were estimated using generalized estimating equations for four phases of care: prediagnosis (60 days), initial (360 days), continuing (variable), final (360 days). Mean ages at diagnosis were 6 years for children (N = 4,606) and 17 years for adolescents (N = 2,443). Mean net prediagnosis phase 60-day costs were $6,177 for children and $1,018 for adolescents. Costs for initial, continuing, and final phases were $138,161, $15,756, and $316,303 per 360 days for children, and $62,919, $7,071, and $242,008 for adolescents. The highest initial phase costs were for leukemia patients ($156,225 per 360 days for children and $171,275 for adolescents). The final phase was the most costly ($316,303 per 360 days for children and $242,008 for adolescents). Costs for children with cancer are much higher than for adolescents and much higher than those reported in adults. Comprehensive population-based long-term estimates of cancer costs are useful for health services planning and cost-effectiveness analysis. © 2017 Wiley Periodicals, Inc.
Zielinski, R.A.; Otton, J.K.; Budahn, J.R.
2001-01-01
Radium-bearing barite (radiobarite) is a common constituent of scale and sludge deposits that form in oil-field production equipment. The barite forms as a precipitate from radium-bearing, saline formation water that is pumped to the surface along with oil. Radioactivity levels in some oil-field equipment and in soils contaminated by scale and sludge can be sufficiently high to pose a potential health threat. Accurate determinations of radium isotopes (226Ra+228Ra) in soils are required to establish the level of soil contamination and the volume of soil that may exceed regulatory limits for total radium content. In this study the radium isotopic data are used to provide estimates of the age of formation of the radiobarite contaminant. Age estimates require that highly insoluble radiobarite approximates a chemically closed system from the time of its formation. Age estimates are based on the decay of short-lived 228Ra (half-life=5.76 years) compared to 226Ra (half-life=1600 years). Present activity ratios of 228Ra/226Ra in radiobarite-rich scale or highly contaminated soil are compared to initial ratios at the time of radiobarite precipitation. Initial ratios are estimated by measurements of saline water or recent barite precipitates at the site or by considering a range of probable initial ratios based on reported values in modern oil-field brines. At sites that contain two distinct radiobarite sources of different age, the soils containing mixtures of sources can be identified, and mixing proportions quantified using radium concentration and isotopic data. These uses of radium isotope data provide more description of contamination history and can possibly address liability issues. Copyright © 2000.
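The age calculation follows directly from the two half-lives: the 228Ra/226Ra activity ratio decays with effective constant (λ228 − λ226), so t = ln(r0/rt)/(λ228 − λ226). A sketch with illustrative ratios:

```python
# Estimate radiobarite formation age from the decay of the 228Ra/226Ra ratio.
import math

HALF_228, HALF_226 = 5.76, 1600.0            # half-lives in years
lam228 = math.log(2) / HALF_228
lam226 = math.log(2) / HALF_226

r0 = 1.0    # assumed initial 228Ra/226Ra activity ratio (e.g., from brine)
rt = 0.10   # measured present-day ratio in the scale (illustrative)

age_years = math.log(r0 / rt) / (lam228 - lam226)
print(f"estimated age of radiobarite: {age_years:.1f} years")
```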
van Stralen, Marijn; Bosch, Johan G; Voormolen, Marco M; van Burken, Gerard; Krenning, Boudewijn J; van Geuns, Robert-Jan M; Lancée, Charles T; de Jong, Nico; Reiber, Johan H C
2005-10-01
We propose a semiautomatic endocardial border detection method for three-dimensional (3D) time series of cardiac ultrasound (US) data based on pattern matching and dynamic programming, operating on two-dimensional (2D) slices of the 3D plus time data, for the estimation of full cycle left ventricular volume, with minimal user interaction. The presented method is generally applicable to 3D US data and evaluated on data acquired with the Fast Rotating Ultrasound (FRU-) Transducer, developed by Erasmus Medical Center (Rotterdam, the Netherlands), a conventional phased-array transducer, rotating at very high speed around its image axis. The detection is based on endocardial edge pattern matching using dynamic programming, which is constrained by a 3D plus time shape model. It is applied to an automatically selected subset of 2D images of the original data set, for typically 10 equidistant rotation angles and 16 cardiac phases (160 images). Initialization requires the drawing of four contours per patient manually. We evaluated this method on 14 patients against MRI end-diastolic (ED) and end-systolic (ES) volumes. The semiautomatic border detection approach shows good correlations with MRI ED/ES volumes (r = 0.938) and low interobserver variability (y = 1.005x - 16.7, r = 0.943) over full-cycle volume estimations. It shows a high consistency in tracking the user-defined initial borders over space and time. We show that the ease of the acquisition using the FRU-transducer and the semiautomatic endocardial border detection method together can provide a way to quickly estimate the left ventricular volume over the full cardiac cycle using little user interaction.
Stroke as the Initial Manifestation of Atrial Fibrillation: The Framingham Heart Study.
Lubitz, Steven A; Yin, Xiaoyan; McManus, David D; Weng, Lu-Chen; Aparicio, Hugo J; Walkey, Allan J; Rafael Romero, Jose; Kase, Carlos S; Ellinor, Patrick T; Wolf, Philip A; Seshadri, Sudha; Benjamin, Emelia J
2017-02-01
To prevent strokes that may occur as the first manifestation of atrial fibrillation (AF), screening programs have been proposed to identify patients with undiagnosed AF who may be eligible for treatment with anticoagulation. However, the frequency with which patients with AF present with stroke as the initial manifestation of the arrhythmia is unknown. We estimated the frequency with which AF may present as a stroke in 1809 community-based Framingham Heart Study participants with first-detected AF and without previous strokes, by tabulating the frequencies of strokes occurring on the same day, within 30 days before, 90 days before, and 365 days before first-detected AF. Using previously reported AF incidence rates, we estimated the incidence of strokes that may represent the initial manifestation of AF. We observed 87 strokes that occurred ≤1 year before AF detection, corresponding to 1.7% on the same day, 3.4% within 30 days before, 3.7% within 90 days before, and 4.8% ≤1 year before AF detection. We estimated that strokes may present as the initial manifestation of AF at a rate of 2 to 5 per 10 000 person-years, in both men and women. We observed that stroke is an uncommon but measurable presenting feature of AF. Our data imply that emphasizing cost-effectiveness of population-wide AF-screening efforts will be important given the relative infrequency with which stroke represents the initial manifestation of AF. © 2017 American Heart Association, Inc.
Estimating Development Cost of an Interactive Website Based Cancer Screening Promotion Program
Lairson, David R.; Chung, Tong Han; Smith, Lisa G.; Springston, Jeffrey K.; Champion, Victoria L.
2015-01-01
Objectives: The aim of this study was to estimate the initial development costs for an innovative talk-show-format tailored intervention, delivered via the interactive web, for increasing cancer screening in women aged 50 to 75 who were non-adherent to screening guidelines for colorectal cancer and/or breast cancer. Methods: The cost of the intervention development was estimated from a societal perspective. Micro-costing methods plus vendor contract costs were used to estimate cost. Staff logs were used to track personnel time. Non-personnel costs include all additional resources used to produce the intervention. Results: Development cost of the interactive web-based intervention was $0.39 million, of which 77% was direct cost. About 98% of the cost was incurred in personnel time cost, contract cost, and overhead cost. Conclusions: The new web-based disease prevention medium required substantial investment in health promotion and media specialist time. The development cost was primarily driven by the high level of human capital required. The cost of intervention development is important information for assessing and planning future public and private investments in web-based health promotion interventions. PMID:25749548
Vision-Based SLAM System for Unmanned Aerial Vehicles
Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni
2016-01-01
The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy. PMID:26999131
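The abstract does not give the filter equations, but the predict/update cycle it describes is the standard (extended) Kalman structure. A minimal sketch follows for just the GPS-aided position/velocity block, with the nonlinear camera/landmark part omitted and all noise levels invented for illustration.

```python
import numpy as np

# Minimal Kalman filter sketch for GPS-aided position/velocity estimation.
# The paper's estimator is an EKF over the full vehicle pose plus landmark
# map; here only a linear position/velocity block is shown, and all noise
# values are illustrative assumptions.
dt = 0.1
I3 = np.eye(3)
F = np.block([[I3, dt * I3], [np.zeros((3, 3)), I3]])   # constant-velocity model
H = np.hstack([I3, np.zeros((3, 3))])                   # GPS measures position only
Q = 1e-3 * np.eye(6)                                    # process noise (assumed)
R = 4.0 * np.eye(3)                                     # GPS noise, ~2 m std (assumed)

x = np.zeros(6)            # state: [position, velocity]
P = 10.0 * np.eye(6)

def step(x, P, z_gps):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the (low-rate, noisy) position fix
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z_gps - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = step(x, P, np.array([1.0, 0.5, -0.2]))
```

In the system described above, the camera measurements would enter through an additional nonlinear update once the GPS-derived metric scale is fixed.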
Model-Based, Noninvasive Monitoring of Intracranial Pressure
2012-10-01
... (nICP) estimate requires simultaneous measurement of the waveforms of arterial blood pressure (ABP), obtained via radial artery catheter or finger ... initial database comprises subarachnoid hemorrhage patients in neuro-intensive care at our partner hospital, for whom ICP, ABP and CBFV are currently
Numerical Simulation of Stress evolution and earthquake sequence of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Dong, Peiyu; Hu, Caibo; Shi, Yaolin
2015-04-01
The India-Eurasia collision produces N-S compression and results in large thrust faults along the southern edge of the Tibetan Plateau. Differential eastward flow of the plateau's lower crust leads to large strike-slip and normal faults within the plateau. From 1904 to 2014, more than 30 earthquakes of Mw > 6.5 occurred sequentially in this distinctive tectonic environment. How did the stresses evolve during the last 110 years, and how did the earthquakes interact with each other? Can this knowledge help us forecast future seismic hazards? In this study, we simulated the evolution of the stress field and the earthquake sequence in the Tibetan Plateau over the last 110 years with a 2-D finite element model. Given an initial state of stress, the boundary condition was constrained by present-day GPS observations, assumed to represent a constant rate over the 110 years. We calculated the stress evolution year by year; an earthquake occurs in the model when stress exceeds the crustal strength. The stress change due to each large earthquake in the sequence was calculated and contributed to the stress evolution. A key issue is the choice of the initial stress state of the model, which is actually unknown. Usually, in studies of earthquake triggering, the initial stress is assumed to be zero, and only the stress changes caused by large earthquakes, the Coulomb failure stress changes (ΔCFS), are calculated. To some extent, this simplified method is a powerful tool because it can reveal which fault, or which part of a fault, becomes relatively more risky or safer. Nonetheless, it does not utilize all the information available to us. The earthquake sequence reveals, though far from completely, some information about the stress state in the region. If the entire region is close to a self-organized critical or subcritical state, earthquake stress drops provide a lower limit on the initial stress. For locations where no earthquakes occurred during the period, the initial stress must have been lower than a certain value. For locations where large earthquakes occurred during the 110 years, the initial stresses can be inverted if the strength is estimated and the tectonic loading is assumed constant. Therefore, although the initial stress state is unknown, we can estimate a plausible range for it. In this study, we estimated a reasonable range of initial stress and then used the Mohr-Coulomb criterion to regenerate the earthquake sequence, starting from the Daofu earthquake of 1904. We calculated the stress field evolution of the sequence, considering both the tectonic loading and the interaction between the earthquakes, and ultimately obtained a sketch of the present stress. Of course, a single model with a particular initial stress is just one possible model, so a potential seismic hazard distribution based on a single model is not convincing. We tested hundreds of possible initial stress states, all of which reproduce the historical earthquake sequence, and summarized the resulting probabilities of future seismic activity. Although we cannot predict the exact future state, we can narrow the estimate to regions with a high probability of risk. Our primary results indicate that the Xianshuihe fault and the adjacent area form one such zone, with higher risk than other regions in the future. During 2014, there were 6 earthquakes (M > 5.0) in this region, which corresponds with our result to some degree.
We emphasized the importance of the initial stress field for the earthquake sequence and provided a probabilistic assessment of future seismic hazards. This study may bring new insights to the estimation of the initial stress, earthquake triggering, and stress field evolution.
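A toy version of the experiment described above can make the loading/failure logic concrete. The sketch assumes a Mohr-Coulomb-style threshold with constant tectonic loading and a fixed stress drop; fault strengths, loading rate, and the sampled initial-stress range are illustrative, and coseismic stress transfer between faults (which the paper includes) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tectonic loading raises stress at each fault element every year; an
# element fails (an "earthquake") when stress exceeds strength, then drops
# by a fixed stress drop. Initial stresses are drawn from an assumed range,
# since they are unknown; rerunning with many draws mimics the paper's
# ensemble over initial states.
n_faults, years = 20, 110
strength = 10.0                       # MPa (illustrative)
stress_drop = 3.0                     # MPa (illustrative)
loading = 0.05                        # MPa/yr, constant (GPS-constrained in the paper)
stress = rng.uniform(5.0, 9.5, n_faults)   # one sampled initial state

events = []
for year in range(years):
    stress += loading
    failed = np.flatnonzero(stress >= strength)
    for f in failed:
        events.append((year, f))
        stress[f] -= stress_drop      # coseismic drop; fault interaction omitted
print(f"{len(events)} events in {years} years")
```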
Lü, Chun-guang; Wang, Wei-he; Yang, Wen-bo; Tian, Qing-iju; Lu, Shan; Chen, Yun
2015-11-01
New hyperspectral sensors for total ozone detection are expected to be carried on geostationary platforms in the future, because local tropospheric ozone pollution and the diurnal variation of ozone are receiving increasing attention. Sensors on geostationary satellites frequently acquire images at large observation angles, which places higher demands on total ozone retrieval at these observation geometries. The TOMS V8 algorithm is well developed and widely used for low-orbit ozone sensors, but it still lacks accuracy at large observation geometries; improving the accuracy of total ozone retrieval therefore remains an urgent problem. Using the moderate-resolution atmospheric transmission code MODTRAN, synthetic UV backscatter radiance in the spectral region from 305 to 360 nm was simulated for clear sky, multiple angles (12 solar zenith angles and view zenith angles), and 26 standard profiles, and the correlation and trends between atmospheric total ozone and backscattered UV radiance were analyzed based on the resulting data. From these data, a modified initial total ozone estimation model for the TOMS V8 algorithm was constructed in order to improve the accuracy of the initial total ozone estimate at large observation geometries. The analysis of total ozone and simulated UV backscatter radiance shows that the radiance at 317.5 nm (R₃₁₇.₅) decreases as total ozone rises. At small solar zenith angles (SZA) and fixed total ozone, R₃₁₇.₅ decreases with increasing view zenith angle (VZA), but increases with VZA at large SZA. Comparison of the two fitting models shows that, except when both SZA and VZA are large (> 80°), the exponential and logarithmic fitting models both show high fitting precision (R² > 0.90), and the precision of both decreases as SZA and VZA rise. In most cases, the precision of the logarithmic fitting model is about 0.9% higher than that of the exponential fitting model. With increasing VZA or SZA, the fitting precision gradually declines, and the decline is larger at larger VZA or SZA. In addition, the precision of the fitting models exhibits a plateau at small SZA. The modified initial total ozone estimation model (ln(I) vs. Ω) was established based on the logarithmic fitting model and compared with the traditional estimation model (I vs. ln(Ω)). The RMSE of both ln(I) vs. Ω and I vs. ln(Ω) trends downward as total ozone rises. In the low total ozone region (175-275 DU), the RMSE is clearly higher than in the high region (425-525 DU); moreover, an RMSE peak and trough exist at 225 and 475 DU, respectively. With increasing VZA and SZA, the RMSE of both initial estimation models rises overall, and the increase is more pronounced for ln(I) vs. Ω as SZA and VZA grow. The estimate produced by the modified model is better than that of the traditional model over the whole total ozone range (RMSE 0.087%-0.537% lower than the traditional model), especially in the low total ozone region and at large observation geometries. The traditional estimation model relies on the precision of the exponential fitting model, and the modified estimation model relies on the precision of the logarithmic fitting model. The improvement in estimation accuracy from the modified initial total ozone estimation model expands the application range of the TOMS V8 algorithm.
For sensors carried on geostationary platforms, the modified estimation model can help improve the inversion accuracy over a wide spatial and temporal range. This modified model could give support and reference to future updates of the TOMS algorithm.
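A minimal sketch of the modified model's mechanics: fit ln(I) as a linear function of total ozone Ω and invert the fit to produce an initial ozone estimate from a measured radiance. The radiance/ozone pairs are synthetic stand-ins for the MODTRAN-simulated tables; the coefficients carry no physical meaning.

```python
import numpy as np

# Sketch of the modified initial-estimate model: fit ln(I) as a linear
# function of total ozone Omega (the "ln(I) vs. Omega" logarithm model),
# then invert it to get a first-guess ozone value from a measured radiance.
omega = np.array([175.0, 275.0, 375.0, 475.0])        # total ozone, DU
radiance = np.exp(3.0 - 0.004 * omega)                 # synthetic R317.5 table

b, a = np.polyfit(omega, np.log(radiance), 1)          # ln(I) = a + b*Omega

def initial_ozone(measured_radiance: float) -> float:
    return (np.log(measured_radiance) - a) / b

print(f"{initial_ozone(np.exp(3.0 - 0.004 * 300.0)):.1f} DU")  # recovers ~300 DU
```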
Economic analysis of the global polio eradication initiative.
Duintjer Tebbens, Radboud J; Pallansch, Mark A; Cochi, Stephen L; Wassilak, Steven G F; Linkins, Jennifer; Sutter, Roland W; Aylward, R Bruce; Thompson, Kimberly M
2010-12-16
The global polio eradication initiative (GPEI), which started in 1988, represents the single largest, internationally coordinated public health project to date. Completion remains within reach, with type 2 wild polioviruses apparently eradicated since 1999 and fewer than 2000 annual paralytic poliomyelitis cases of wild types 1 and 3 reported since then. This economic analysis of the GPEI reflects the status of the program as of February 2010, including full consideration of post-eradication policies. For the GPEI intervention, we consider the actual pre-eradication experience to date followed by two distinct potential future post-eradication vaccination policies. We estimate GPEI costs based on actual and projected expenditures and poliomyelitis incidence using reported numbers corrected for underreporting and model projections. For the comparator, which assumes only routine vaccination for polio historically and into the future (i.e., no GPEI), we estimate poliomyelitis incidence using a dynamic infection transmission model and costs based on numbers of vaccinated children. Cost-effectiveness ratios for the GPEI vs. only routine vaccination qualify as highly cost-effective based on standard criteria. We estimate incremental net benefits of the GPEI between 1988 and 2035 of approximately 40-50 billion dollars (2008 US dollars; 1988 net present values). Despite the high costs of achieving eradication in low-income countries, low-income countries account for approximately 85% of the total net benefits generated by the GPEI in the base case analysis. The total economic costs saved per prevented paralytic poliomyelitis case drive the incremental net benefits, which become positive even if we estimate the loss in productivity resulting from disability at below the recommended value of one year of average per-capita gross national income per disability-adjusted life year saved. Sensitivity analysis suggests that the finding of positive net benefits of the GPEI remains robust over a wide range of assumptions, and that consideration of the additional net benefits of externalities that occurred during polio campaigns to date, such as the mortality reduction associated with delivery of Vitamin A supplements, significantly increases the net benefits. This study finds a strong economic justification for the GPEI despite the rising costs of the initiative. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2009-01-01
A method of recovering unknown aberrations in an optical system includes collecting intensity data produced by the optical system, generating an initial estimate of a phase of the optical system, iteratively performing a phase retrieval on the intensity data to generate a phase estimate using an initial diversity function corresponding to the intensity data, generating a phase map from the phase retrieval phase estimate, decomposing the phase map to generate a decomposition vector, generating an updated diversity function by combining the initial diversity function with the decomposition vector, and generating an updated estimate of the phase of the optical system by removing the initial diversity function from the phase map. The method may further include repeating the process beginning with iteratively performing a phase retrieval on the intensity data using the updated estimate of the phase of the optical system in place of the initial estimate of the phase of the optical system, and using the updated diversity function in place of the initial diversity function, until a predetermined convergence is achieved.
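The claim steps map onto a simple loop. The sketch below mirrors that structure only: the phase-retrieval step is a placeholder rather than a real iterative-transform solver, and the phase map is decomposed onto a toy polynomial basis instead of a proper aberration basis such as Zernike modes.

```python
import numpy as np

# Structural sketch of the claimed iteration, with stand-in numerics.
N = 64
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
basis = np.stack([np.ones_like(x), x, y, x**2 + y**2])   # toy aberration basis
B = basis.reshape(4, -1).T                               # (pixels, modes)

def phase_retrieval(intensity, phase0, diversity):
    # Placeholder: a real implementation would run an iterative-transform
    # phase retrieval (e.g., a Gerchberg-Saxton-style loop) on the data.
    return phase0 + diversity

intensity = np.ones((N, N))              # measured data (stand-in)
phase_est = np.zeros((N, N))             # initial phase estimate
diversity = 0.5 * (x**2 + y**2)          # initial (defocus-like) diversity

for _ in range(3):
    phase_map = phase_retrieval(intensity, phase_est, diversity)
    coeffs, *_ = np.linalg.lstsq(B, phase_map.ravel(), rcond=None)  # decompose
    phase_est = phase_map - diversity                     # remove current diversity
    diversity = diversity + (B @ coeffs).reshape(N, N)    # updated diversity
```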
A Weighted Least Squares Approach To Robustify Least Squares Estimates.
ERIC Educational Resources Information Center
Lin, Chowhong; Davenport, Ernest C., Jr.
This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…
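A sketch of the idea as described: order points by distance from the multivariate center, fit an initial ordinary least squares model on the closest subset, then fold in the remaining points one at a time with residual-based weights. The Huber-style weight function, cutoff constant, and initial subsample fraction are assumptions, not the paper's exact choices.

```python
import numpy as np

def robust_wls(X, y, init_frac=0.5, c=1.345):
    Xd = np.column_stack([np.ones(len(X)), X])
    center = X.mean(axis=0)
    order = np.argsort(np.linalg.norm(X - center, axis=1))  # distance measure
    idx = list(order[: int(init_frac * len(X))])
    beta, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)  # initial coefficients
    for i in order[int(init_frac * len(X)):]:               # add points one by one
        idx.append(i)
        r = y[idx] - Xd[idx] @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12            # robust scale estimate
        w = np.minimum(1.0, c / (np.abs(r / s) + 1e-12))     # Huber-style weights
        W = np.diag(w)
        beta = np.linalg.solve(Xd[idx].T @ W @ Xd[idx], Xd[idx].T @ W @ y[idx])
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=100)
y[:5] += 20.0                              # outliers that plain OLS would chase
print(robust_wls(X, y))                    # close to [1, 2, -1]
```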
Proof of Concept for an Approach to a Finer Resolution Inventory
Chris J. Cieszewski; Kim Iles; Roger C. Lowe; Michal Zasada
2005-01-01
This report presents a proof of concept for a statistical framework to develop a timely, accurate, and unbiased fiber supply assessment in the State of Georgia, U.S.A. The proposed approach is based on using various data sources and modeling techniques to calibrate satellite image-based statewide stand lists, which provide initial estimates for a State inventory on a...
An alternative method for estimating crown characteristics of urban trees using digital photographs
Matthew F. Winn; Philip A. Araman
2012-01-01
The USDA Forest Service Forest Inventory and Analysis (FIA) program has concluded that statewide urban forest inventories are feasible based on a series of pilot studies initiated in 2001. However, much of the tree crown data collected during inventories are based on visual inspection and therefore highly subjective. In order to objectively determine the crown...
NASA Technical Reports Server (NTRS)
Cole, Stuart K.; Reeves, John D.; Williams-Byrd, Julie A.; Greenberg, Marc; Comstock, Doug; Olds, John R.; Wallace, Jon; DePasquale, Dominic; Schaffer, Mark
2013-01-01
NASA is investing in new technologies across 14 primary technology roadmap areas plus aeronautics. Understanding the cost of researching and developing these technologies, and the time it takes to increase their maturity, is important to the support of ongoing and future NASA missions. Overall, technology estimating may help guide technology investment strategies, improve the evaluation of technology affordability, and aid decision support. This research summarizes the development of a framework for a Technology Estimating process in which four technology roadmap areas were selected for study. The framework includes definitions of terms, a discussion of narrowing the focus from 14 NASA Technology Roadmap areas to four, and further refinement to technologies in the TRL range of 2 to 6. The paper also discusses the evaluation of 20 unique technology parameters that were initially identified and subsequently reduced for use in characterizing these technologies. A discussion of the data acquisition effort and the criteria established for data quality is provided. The findings obtained during the research include the gaps identified and a description of a spreadsheet-based estimating tool initiated as part of the Technology Estimating process.
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi
2016-01-01
Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003
Association between GFR Estimated by Multiple Methods at Dialysis Commencement and Patient Survival
Wong, Muh Geot; Pollock, Carol A.; Cooper, Bruce A.; Branley, Pauline; Collins, John F.; Craig, Jonathan C.; Kesselhut, Joan; Luxton, Grant; Pilmore, Andrew; Harris, David C.
2014-01-01
Background and objectives: The Initiating Dialysis Early and Late study showed that planned early or late initiation of dialysis, based on the Cockcroft and Gault estimation of GFR, was associated with identical clinical outcomes. This study examined the association of all-cause mortality with estimated GFR at dialysis commencement, which was determined using multiple formulas. Design, setting, participants, & measurements: Initiating Dialysis Early and Late trial participants were stratified into tertiles according to the estimated GFR measured by the Cockcroft and Gault, Modification of Diet in Renal Disease, or Chronic Kidney Disease-Epidemiology Collaboration formula at dialysis commencement. Patient survival was determined using multivariable Cox proportional hazards model regression. Results: Only Initiating Dialysis Early and Late trial participants who commenced on dialysis were included in this study (n=768). A total of 275 patients died during the study. After adjustment for age, sex, racial origin, body mass index, diabetes, and cardiovascular disease, no significant differences in survival were observed between estimated GFR tertiles determined by the Cockcroft and Gault (lowest tertile adjusted hazard ratio, 1.11; 95% confidence interval, 0.82 to 1.49; middle tertile hazard ratio, 1.29; 95% confidence interval, 0.96 to 1.74; highest tertile reference), Modification of Diet in Renal Disease (lowest tertile hazard ratio, 0.88; 95% confidence interval, 0.63 to 1.24; middle tertile hazard ratio, 1.20; 95% confidence interval, 0.90 to 1.61; highest tertile reference), and Chronic Kidney Disease-Epidemiology Collaboration equations (lowest tertile hazard ratio, 0.93; 95% confidence interval, 0.67 to 1.27; middle tertile hazard ratio, 1.15; 95% confidence interval, 0.86 to 1.54; highest tertile reference). Conclusion: Estimated GFR at dialysis commencement was not significantly associated with patient survival, regardless of the formula used. However, a clinically important association cannot be excluded, because the observed confidence intervals were wide. PMID:24178976
NASA Astrophysics Data System (ADS)
Gar Alalm, Mohamed; Tawfik, Ahmed; Ookawara, Shinichi
2017-03-01
In this study, a solar photo-Fenton reaction using a compound parabolic collector reactor was assessed for the removal of phenol from aqueous solution. The effects of irradiation time, initial concentration, initial pH, and Fenton reagent dosage were investigated. H2O2 and aromatic intermediates (catechol, benzoquinone, and hydroquinone) were quantified during the reaction to study the pathways of the oxidation process. Complete degradation of phenol was achieved after 45 min of irradiation when the initial concentration was 100 mg/L. However, increasing the initial concentration up to 500 mg/L inhibited the degradation efficiency. The dosages of H2O2 and Fe2+ significantly affected the degradation efficiency of phenol. The observed optimum pH for the reaction was 3.1. Phenol degradation at different concentrations was fitted to pseudo-first-order kinetics according to the Langmuir-Hinshelwood model. A cost estimation for a large-scale reactor was performed. The total cost under the most economical conditions with maximum phenol degradation is 2.54 €/m³.
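The pseudo-first-order treatment reduces to fitting C(t) = C0·exp(-k_app·t). A sketch with illustrative (not the study's) time/concentration points:

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-first-order fit, as used in the Langmuir-Hinshelwood treatment at
# low concentration. The data points below are illustrative placeholders.
t = np.array([0.0, 10.0, 20.0, 30.0, 45.0])            # irradiation time, min
c = np.array([100.0, 55.0, 30.0, 16.0, 5.0])            # phenol, mg/L

def model(t, k_app):
    return c[0] * np.exp(-k_app * t)

(k_app,), _ = curve_fit(model, t, c, p0=[0.05])
print(f"k_app = {k_app:.3f} 1/min, half-life = {np.log(2) / k_app:.1f} min")
```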
NASA Technical Reports Server (NTRS)
Lane, John E.; Kasparis, Takis; Jones, W. Linwood; Metzger, Philip T.
2009-01-01
Methodologies to improve disdrometer processing, loosely based on mathematical techniques common to the field of particle flow and fluid mechanics, are examined and tested. The inclusion of advection and vertical wind field estimates appear to produce significantly improved results in a Lagrangian hydrometeor trajectory model, in spite of very strict assumptions of noninteracting hydrometeors, constant vertical air velocity, and time independent advection during the scan time interval. Wind field data can be extracted from each radar elevation scan by plotting and analyzing reflectivity contours over the disdrometer site and by collecting the radar radial velocity data to obtain estimates of advection. Specific regions of disdrometer spectra (drop size versus time) often exhibit strong gravitational sorting signatures, from which estimates of vertical velocity can be extracted. These independent wind field estimates become inputs and initial conditions to the Lagrangian trajectory simulation of falling hydrometeors.
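Under the stated assumptions (non-interacting drops, constant vertical air velocity, time-independent advection), a drop trajectory integrates in a few lines. The wind and terminal-velocity values below are illustrative:

```python
import numpy as np

# Lagrangian drop-trajectory sketch under the abstract's strict assumptions.
def fall_trajectory(z0, u_adv, w_air, v_term, dt=1.0):
    """Integrate a drop from release height z0 (m) to the ground.
    u_adv: horizontal advection (m/s); w_air: vertical air velocity
    (m/s, positive up); v_term: still-air terminal fall speed (m/s)."""
    x, z, t = 0.0, z0, 0.0
    while z > 0.0:
        x += u_adv * dt
        z += (w_air - v_term) * dt
        t += dt
    return x, t

# A 2 mm drop (terminal speed near 6.5 m/s) released at 3 km in 5 m/s advection:
x_offset, t_fall = fall_trajectory(3000.0, u_adv=5.0, w_air=0.5, v_term=6.5)
print(f"lands {x_offset:.0f} m downwind after {t_fall:.0f} s")
```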
1980-11-01
(Item 19 continued): system design, design handbooks, maintenance manpower, simulation, decision options, cost estimating relationships, prediction ... determine the extent to which human resources data (HRD) are used in early system design. The third was to assess the availability and adequacy of ... relationships, regression analysis, comparability analysis, expected value techniques) to provide initial data values in the very early stages of weapon system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Z.; Klann, R. T.; Nuclear Engineering Division
2007-08-03
An initial series of calculations of the reactivity worth of the OSMOSE samples in the MINERVE reactor with the R2-UO2 and MORGANE/R core configurations was completed. The calculation model was generated using the lattice physics code DRAGON. In addition, an initial comparison of calculated values to experimental measurements was performed based on preliminary results for the R1-MOX configuration.
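For context, a sample's reactivity worth is typically derived from paired eigenvalue calculations, reference core versus core with the sample inserted. The k-effective values in this sketch are invented placeholders, not DRAGON outputs:

```python
# Reactivity worth from a pair of eigenvalue calculations (sketch).
def reactivity_pcm(k_eff: float) -> float:
    return (k_eff - 1.0) / k_eff * 1e5          # reactivity in pcm

k_ref, k_sample = 1.00000, 1.00042              # hypothetical eigenvalues
worth = reactivity_pcm(k_sample) - reactivity_pcm(k_ref)
print(f"sample worth = {worth:.1f} pcm")
```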
[WebSurvCa: web-based estimation of death and survival probabilities in a cohort].
Clèries, Ramon; Ameijide, Alberto; Buxó, Maria; Vilardell, Mireia; Martínez, José Miguel; Alarcón, Francisco; Cordero, David; Díez-Villanueva, Ana; Yasui, Yutaka; Marcos-Gragera, Rafael; Vilardell, Maria Loreto; Carulla, Marià; Galceran, Jaume; Izquierdo, Ángel; Moreno, Víctor; Borràs, Josep M
2018-01-19
Relative survival has been used as a measure of the temporal evolution of the excess risk of death of a cohort of patients diagnosed with cancer, taking into account the mortality of a reference population. Once the excess risk of death has been estimated, three probabilities can be computed at time T: 1) the crude probability of death associated with the cause of initial diagnosis (disease under study), 2) the crude probability of death associated with other causes, and 3) the probability of absolute survival in the cohort at time T. This paper presents the WebSurvCa application (https://shiny.snpstats.net/WebSurvCa/), whereby hospital-based and population-based cancer registries and registries of other diseases can estimate such probabilities in their cohorts by selecting the mortality of the relevant region (reference population). Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Cua, G.; Fischer, M.; Heaton, T.; Wiemer, S.
2009-04-01
The Virtual Seismologist (VS) algorithm is a Bayesian approach to regional, network-based earthquake early warning (EEW). Bayes' theorem as applied in the VS algorithm states that the most probable source estimates at any given time is a combination of contributions from relatively static prior information that does not change over the timescale of earthquake rupture and a likelihood function that evolves with time to take into account incoming pick and amplitude observations from the on-going earthquake. Potentially useful types of prior information include network topology or station health status, regional hazard maps, earthquake forecasts, and the Gutenberg-Richter magnitude-frequency relationship. The VS codes provide magnitude and location estimates once picks are available at 4 stations; these source estimates are subsequently updated each second. The algorithm predicts the geographical distribution of peak ground acceleration and velocity using the estimated magnitude and location and appropriate ground motion prediction equations; the peak ground motion estimates are also updated each second. Implementation of the VS algorithm in California and Switzerland is funded by the Seismic Early Warning for Europe (SAFER) project. The VS method is one of three EEW algorithms whose real-time performance is being evaluated and tested by the California Integrated Seismic Network (CISN) EEW project. A crucial component of operational EEW algorithms is the ability to distinguish between noise and earthquake-related signals in real-time. We discuss various empirical approaches that allow the VS algorithm to operate in the presence of noise. Real-time operation of the VS codes at the Southern California Seismic Network (SCSN) began in July 2008. On average, the VS algorithm provides initial magnitude, location, origin time, and ground motion distribution estimates within 17 seconds of the earthquake origin time. These initial estimate times are dominated by the time for 4 acceptable picks to be available, and thus are heavily influenced by the station density in a given region; these initial estimate times also include the effects of telemetry delay, which ranges between 6 and 15 seconds at the SCSN, and processing time (~1 second). Other relevant performance statistics include: 95% of initial real-time location estimates are within 20 km of the actual epicenter, 97% of initial real-time magnitude estimates are within one magnitude unit of the network magnitude. Extension of real-time VS operations to networks in Northern California is an on-going effort. In Switzerland, the VS codes have been run on offline waveform data from over 125 earthquakes recorded by the Swiss Digital Seismic Network (SDSN) and the Swiss Strong Motion Network (SSMS). We discuss the performance of the VS algorithm on these datasets in terms of magnitude, location, and ground motion estimation.
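The Bayesian combination the VS algorithm is built on can be illustrated with a one-dimensional magnitude example: a Gutenberg-Richter prior multiplied by a Gaussian likelihood from an amplitude-derived estimate. The b-value, likelihood width, and observed magnitude are illustrative, and the real VS update is jointly over magnitude and location:

```python
import numpy as np

# Bayes' theorem on a magnitude grid: static prior times evolving likelihood.
m = np.linspace(2.0, 8.0, 601)
b = 1.0
prior = 10.0 ** (-b * m)                         # Gutenberg-Richter occurrence prior
prior /= prior.sum()

m_obs, sigma = 5.0, 0.4                          # amplitude-based estimate (assumed)
likelihood = np.exp(-0.5 * ((m - m_obs) / sigma) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum()
print(f"posterior mode: M{m[np.argmax(posterior)]:.2f}")   # pulled below m_obs
```

As more picks and amplitudes arrive each second, the likelihood narrows and the prior's influence fades, which is the behavior described above.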
Autonomous navigation system based on GPS and magnetometer data
NASA Technical Reports Server (NTRS)
Julie, Thienel K. (Inventor); Richard, Harman R. (Inventor); Bar-Itzhack, Itzhack Y. (Inventor)
2004-01-01
This invention is drawn to an autonomous navigation system using Global Positioning System (GPS) and magnetometers for low Earth orbit satellites. As a magnetometer is reliable and always provides information on spacecraft attitude, rate, and orbit, the magnetometer-GPS configuration solves the GPS initialization problem, decreasing the convergence time for the navigation estimate and improving the overall accuracy. Eventually the magnetometer-GPS configuration enables the system to avoid the costly and inherently less reliable gyro for rate estimation. Being autonomous, this invention would provide black-box spacecraft navigation, producing attitude, orbit, and rate estimates without any ground input, with high accuracy and reliability.
Discrete analysis of spatial-sensitivity models
NASA Technical Reports Server (NTRS)
Nielsen, Kenneth R. K.; Wandell, Brian A.
1988-01-01
Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the predictions of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed, based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
Adaptive neuro fuzzy inference system-based power estimation method for CMOS VLSI circuits
NASA Astrophysics Data System (ADS)
Vellingiri, Govindaraj; Jayabalan, Ramesh
2018-03-01
Recent advancements in very large scale integration (VLSI) technologies have made it feasible to integrate millions of transistors on a single chip. This greatly increases circuit complexity, and hence there is a growing need for less tedious and low-cost power estimation techniques. The proposed work employs a Back-Propagation Neural Network (BPNN) and an Adaptive Neuro Fuzzy Inference System (ANFIS), which are capable of estimating power precisely for complementary metal oxide semiconductor (CMOS) VLSI circuits without requiring any knowledge of circuit structure and interconnections. The application of ANFIS to power estimation is relatively new. Power estimation using ANFIS is carried out by creating initial FIS models using hybrid optimisation and back-propagation (BP) techniques employing constant and linear methods. ANFIS with the hybrid optimisation technique employing the linear method produces better results than BPNN, with a testing error that varies from 0% to 0.86%, as it takes the initial fuzzy model and tunes it by means of a hybrid technique combining gradient-descent BP and least-squares optimisation algorithms. ANFIS is best suited to the power estimation application, with a low RMSE of 0.0002075 and a high coefficient of determination (R) of 0.99961.
Support to LANL: Cost estimation. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report summarizes the activities and progress by ICF Kaiser Engineers conducted on behalf of Los Alamos National Laboratories (LANL) for the US Department of Energy, Office of Waste Management (EM-33) in the area of improving methods for Cost Estimation. This work was conducted between October 1, 1992 and September 30, 1993. ICF Kaiser Engineers supported LANL in providing the Office of Waste Management with planning and document preparation services for a Cost and Schedule Estimating Guide (Guide). The intent of the Guide was to use Activity-Based Cost (ABC) estimation as a basic method in preparing cost estimates for DOE planning and budgeting documents, including Activity Data Sheets (ADSs), which form the basis for the Five Year Plan document. Prior to the initiation of the present contract with LANL, ICF Kaiser Engineers was tasked to initiate planning efforts directed toward a Guide. This work, accomplished from June to September, 1992, included visits to eight DOE field offices and consultation with DOE Headquarters staff to determine the need for a Guide, the desired contents of a Guide, and the types of ABC estimation methods and documentation requirements that would be compatible with current or potential practices and expertise in existence at DOE field offices and their contractors.
Blood flow estimation in gastroscopic true-color images
NASA Astrophysics Data System (ADS)
Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans
1995-05-01
The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by calculating approximately the hemoglobin concentration based on a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived which enables an estimation of the hemoglobin concentration by means of the color values of the images. Additionally, a transformation of the color values is developed in order to improve the luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are mostly independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of the reproducibility.
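The Kubelka-Munk step can be sketched per pixel with the remission function F(R) = (1 - R)²/(2R), which is proportional to the absorption/scattering ratio K/S. Treating a single (luminance-corrected) color channel as the hemoglobin-sensitive reflectance is a simplification for this sketch; the paper derives its estimate from a transformed combination of color values:

```python
import numpy as np

# Per-pixel Kubelka-Munk remission function as a hemoglobin index (sketch).
def kubelka_munk(reflectance: np.ndarray) -> np.ndarray:
    r = np.clip(reflectance, 1e-3, 1.0)         # guard against division by zero
    return (1.0 - r) ** 2 / (2.0 * r)           # proportional to K/S

rgb = np.random.default_rng(2).uniform(0.1, 0.9, size=(480, 640, 3))  # stand-in image
hb_index = kubelka_munk(rgb[..., 1])            # higher where mucosa absorbs more
print(hb_index.mean())
```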
Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data
NASA Astrophysics Data System (ADS)
Jazayeri, S.; Kruse, S.
2017-12-01
We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve the resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D-to-2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is deconvolved from the data to recover the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods; this estimated model is introduced to the FWI process as the initial model. Next, the 3D data are converted to 2D, and the user estimates the source wavelet that best fits the observed data under the sparsity assumption on the earth's response. Last, PEST runs gprMax with the initial model, calculates the misfit between the synthetic and observed data, and, using an iterative algorithm that calls gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or a global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.
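The PEST/gprMax loop can be caricatured as iterative misfit reduction against a forward model. In the sketch below the "forward model" is a cheap analytic stand-in rather than a gprMax run, and the update rule is a crude coordinate search rather than PEST's algorithm; only the loop structure is the point:

```python
import numpy as np

# Toy stand-in for the inversion loop: adjust parameters to reduce the
# L2 misfit between a synthetic trace and the observed trace.
def forward(params, t):
    depth, amp = params
    return amp * np.exp(-0.5 * ((t - depth) / 0.3) ** 2)   # one reflection "wavelet"

t = np.linspace(0.0, 10.0, 200)
observed = forward((4.0, 1.5), t)

params = np.array([3.0, 1.0])                   # ray-based starting model
for _ in range(50):
    for i, step in [(0, 0.05), (1, 0.05)]:      # crude coordinate descent
        for sign in (+1.0, -1.0):
            trial = params.copy()
            trial[i] += sign * step
            if np.sum((forward(trial, t) - observed) ** 2) < \
               np.sum((forward(params, t) - observed) ** 2):
                params = trial
print(params)                                   # approaches (4.0, 1.5)
```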
Cost analysis for the implementation of a medication review with follow-up service in Spain.
Noain, Aranzazu; Garcia-Cardenas, Victoria; Gastelurrutia, Miguel Angel; Malet-Larrea, Amaia; Martinez-Martinez, Fernando; Sabater-Hernandez, Daniel; Benrimoj, Shalom I
2017-08-01
Background: Medication review with follow-up (MRF) is a professional pharmacy service proven to be cost-effective. Its broader implementation is limited, mainly due to the lack of evidence-based implementation programs that include economic and financial analysis. Objective: To analyse the costs and estimate the price of providing and implementing MRF. Setting: Community pharmacy in Spain. Method: Elderly patients using polypharmacy received a community pharmacist-led MRF for 6 months. The cost analysis was based on the time-driven activity-based costing model and included the provider costs, initial investment costs and maintenance expenses. The service price was estimated using the labour costs, costs associated with service provision, the potential number of patients receiving the service, and mark-up. Main outcome measures: Costs and potential price of MRF. Results: A mean time of 404.4 (SD 232.2) minutes was spent on service provision and was extrapolated to annual costs. Service provider cost per patient ranged from €196 (SD 90.5) to €310 (SD 164.4). The mean initial investment per pharmacy was €4,594 and the mean annual maintenance costs €3,068. The largest items contributing to cost were initial staff training, continuing education, and renting of the patient counselling area. The potential service price ranged from €237 to €628 per patient a year. Conclusion: Time spent by the service provider accounted for 75-95% of the final cost, followed by initial investment costs and maintenance costs. Remuneration for professional pharmacy service provision must cover service costs and an appropriate profit, allowing for long-term sustainability.
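The time-driven activity-based costing arithmetic can be sketched as provider minutes priced at a capacity cost rate plus amortized set-up and maintenance spread over the patient load. The investment and maintenance figures below come from the abstract; the cost rate, activity minutes, patient load, and amortization period are assumptions:

```python
# Time-driven activity-based costing sketch for a per-patient service cost.
COST_PER_MIN = 0.50            # EUR/min pharmacist capacity cost rate (assumed)
activity_minutes = {"interview": 60, "review": 180, "follow_up": 120}  # assumed

def cost_per_patient(initial_investment=4594.0, annual_maintenance=3068.0,
                     patients_per_year=20, amortization_years=5):
    provider = sum(activity_minutes.values()) * COST_PER_MIN
    overhead = (initial_investment / amortization_years
                + annual_maintenance) / patients_per_year
    return provider + overhead

print(f"{cost_per_patient():.0f} EUR per patient per year")  # lands near the
# abstract's reported 237-628 EUR price range under these assumptions
```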
Pilot testing of SHRP 2 reliability data and analytical products: Florida. [supporting datasets
DOT National Transportation Integrated Search
2014-01-01
SHRP 2 initiated the L38 project to pilot test products from five of the program's completed projects. The products support reliability estimation and use, based on data analyses, analytical techniques, and a decision-making framework. The L38 project...
Forensic individual age estimation with DNA: From initial approaches to methylation tests.
Freire-Aradas, A; Phillips, C; Lareu, M V
2017-07-01
Individual age estimation is a key factor in forensic science analysis that can provide very useful information applicable to criminal, legal, and anthropological investigations. Forensic age inference was initially based on morphological inspection or radiography and only later began to adopt molecular approaches. However, a lack of accuracy or technical problems hampered the introduction of these DNA-based methodologies into casework analysis. A turning point occurred when the epigenetic signature of DNA methylation was observed to gradually change during an individual's lifespan. In the last four years, the number of publications reporting age-correlated DNA methylation changes has gradually risen, and the forensic community now has a range of methylation-based age tests applicable to forensic casework. Most forensic age predictor models have been developed from blood DNA samples, but additional tissues are now also being explored. This review assesses the most widely adopted genes harboring methylation sites, detection technologies, statistical age-predictive analyses, and potential causes of variation in age estimates. Further work is needed to improve predictive accuracy and to establish a broader range of tissues for which tests can analyze the most appropriate methylation sites. Nevertheless, several forensic age predictors have now been reported that provide consistent prediction accuracies (predictive error of ±4 years); this makes them compelling tools with the potential to contribute key information to help guide criminal investigations. Copyright © 2017 Central Police University.
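A methylation age predictor of the kind reviewed reduces, in its simplest form, to a multivariate regression over beta values at age-correlated CpG sites. The sketch below trains on synthetic data with ELOVL2-like behavior; real predictors are trained on measured cohorts:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Minimal methylation-based age predictor: linear model over beta values
# (0-1 methylation fractions) at age-correlated CpG sites. All data here
# are synthetic; published predictors report errors around +/-4 years.
rng = np.random.default_rng(3)
ages = rng.uniform(18, 80, 200)
betas = np.column_stack([
    0.2 + 0.006 * ages,        # ELOVL2-like site: methylation rises with age
    0.8 - 0.004 * ages,        # a site losing methylation with age
    0.5 + 0.002 * ages,
]) + rng.normal(scale=0.02, size=(200, 3))

model = LinearRegression().fit(betas, ages)
pred = model.predict(betas)
print(f"mean abs. error: {np.mean(np.abs(pred - ages)):.1f} years")
```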
Automated external defibrillators in schools?
Cornelis, Charlotte; Calle, Paul; Mpotos, Nicolas; Monsieurs, Koenraad
2015-06-01
Automated external defibrillators (AEDs) placed in public locations can save lives of cardiac arrest victims. In this paper, we try to estimate the cost-effectiveness of AED placement in Belgian schools. This would allow school policy makers to make an evidence-based decision about an on-site AED project. We developed a simple mathematical model containing literature data on the incidence of cardiac arrest with a shockable rhythm, the feasibility and effectiveness of defibrillation by on-site AEDs, and the survival benefit. This was coupled to a rough estimation of the minimal costs to initiate an AED project. According to the model described above, AED projects in all Belgian schools may save 5 patients annually. A rough estimate of the minimal costs to initiate an AED project is 660 EUR per year. As there are about 6000 schools in Belgium, a national AED project in all schools would imply an annual cost of at least 3,960,000 EUR, resulting in 5 lives saved. As our literature survey shows that AED use in schools is feasible and effective, the placement of these devices in all Belgian schools is undoubtedly to be considered. The major counter-arguments are the very low incidence and the high costs to set up a school-based AED programme. Our review may fuel the discussion about whether or not school-based AED projects represent good value for money and should be preferred above other health care interventions.
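The abstract's arithmetic, with the derived cost-per-life figure made explicit (the division is ours, not the paper's):

```python
# Cost-effectiveness arithmetic from the abstract's figures.
schools = 6000
annual_cost_per_school = 660.0        # EUR, minimal estimate
lives_saved_per_year = 5

total_cost = schools * annual_cost_per_school          # 3,960,000 EUR/yr
cost_per_life = total_cost / lives_saved_per_year      # 792,000 EUR per life saved
print(f"{total_cost:,.0f} EUR/yr -> {cost_per_life:,.0f} EUR per life saved")
```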
NASA Technical Reports Server (NTRS)
Kuhn, Richard E.; Bellavia, David C.; Corsiglia, Victor R.; Wardwell, Douglas A.
1991-01-01
Currently available methods for estimating the net suckdown induced on jet V/STOL aircraft hovering in ground effect are based on a correlation of available force data and are, therefore, limited to configurations similar to those in the data base. Experience with some of these configurations has shown that both the fountain lift and additional suckdown are overestimated, but these effects cancel each other for configurations within the data base. For other configurations, these effects may not cancel and the net suckdown could be grossly overestimated or underestimated. Also, present methods do not include the prediction of the pitching moments associated with the suckdown induced in ground effect. An attempt to develop a more logically based method for estimating the fountain lift and suckdown based on the jet-induced pressures is initiated. The analysis is based primarily on the data from a related family of three two-jet configurations (all using the same jet spacing) and limited data from two other two-jet configurations. The current status of the method, which includes expressions for estimating the maximum pressure induced in the fountain regions and the sizes of the fountain and suckdown regions, is presented. Correlating factors are developed to be used with these areas and pressures to estimate the fountain lift, the suckdown, and the related pitching moment increments.
Radical-initiated controlled synthesis of homo- and copolymers based on acrylonitrile
NASA Astrophysics Data System (ADS)
Grishin, D. F.; Grishin, I. D.
2015-07-01
Data on the controlled synthesis of polyacrylonitrile and acrylonitrile copolymers with other (meth)acrylic and vinyl monomers upon radical initiation and metal complex catalysis are analyzed. Primary attention is given to the use of metal complexes for the synthesis of acrylonitrile-based (co)polymers with defined molecular weight and polydispersity in living mode by atom transfer radical polymerization. The prospects for using known methods of controlled synthesis of macromolecules for the preparation of acrylonitrile homo- and copolymers as carbon fibre precursors are estimated. The major array of published data analyzed in the review refers to the last decade. The bibliography includes 175 references.
Shape and Spatially-Varying Reflectance Estimation from Virtual Exemplars.
Hui, Zhuo; Sankaranarayanan, Aswin C
2017-10-01
This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed viewpoint and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This assumption enables a per-pixel surface normal and BRDF estimation framework that is computationally tractable and requires no initialization in spite of the underlying problem being non-convex. Our estimation framework first solves for the surface normal at each pixel using a variant of example-based photometric stereo. We design an efficient multi-scale search strategy for estimating the surface normal and subsequently refine this estimate using a gradient descent procedure. Given the surface normal estimate, we solve for the spatially-varying BRDF by constraining the BRDF at each pixel to be in the span of the BRDF dictionary; here, we use additional priors to further regularize the solution. A hallmark of our approach is that it requires neither iterative optimization techniques nor careful initialization, both of which are endemic to most state-of-the-art techniques. We showcase the performance of our technique on a wide range of simulated and real scenes where we outperform competing methods.
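The core constraint, that each pixel's BRDF lies in the non-negative span of a dictionary, is exactly a non-negative least squares problem once a surface normal is hypothesized. The dictionary renderings and observations below are random stand-ins:

```python
import numpy as np
from scipy.optimize import nnls

# Per-pixel BRDF abundances via non-negative least squares (sketch): given a
# hypothesized normal, each dictionary atom predicts the pixel's intensity
# under every light; NNLS recovers the non-negative mixing weights.
rng = np.random.default_rng(4)
n_lights, n_atoms = 50, 8
A = rng.uniform(0.0, 1.0, size=(n_lights, n_atoms))   # atom renderings per light
true_w = np.array([0.0, 0.7, 0.0, 0.3, 0.0, 0.0, 0.0, 0.0])
obs = A @ true_w + rng.normal(scale=0.01, size=n_lights)

w, residual = nnls(A, obs)          # non-negative BRDF abundances
print(np.round(w, 2), f"residual={residual:.3f}")
```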
Walker, Rachel A; Andreansky, Christopher; Ray, Madelyn H; McDannald, Michael A
2018-06-01
Childhood adversity is associated with exaggerated threat processing and earlier alcohol use initiation. Conclusive links remain elusive, as childhood adversity typically co-occurs with detrimental socioeconomic factors, and its impact is likely moderated by biological sex. To unravel the complex relationships among childhood adversity, sex, threat estimation, and alcohol use initiation, we exposed female and male Long-Evans rats to early adolescent adversity (EAA). In adulthood, >50 days following the last adverse experience, threat estimation was assessed using a novel fear discrimination procedure in which cues predict a unique probability of footshock: danger (p = 1.00), uncertainty (p = .25), and safety (p = .00). Alcohol use initiation was assessed using voluntary access to 20% ethanol, >90 days following the last adverse experience. During development, EAA slowed body weight gain in both females and males. In adulthood, EAA selectively inflated female threat estimation, exaggerating fear to uncertainty and safety, but promoted alcohol use initiation across sexes. Meaningful relationships between threat estimation and alcohol use initiation were not observed, underscoring the independent effects of EAA. Results isolate the contribution of EAA to adult threat estimation, alcohol use initiation, and reveal moderation by biological sex. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
A fusion approach for coarse-to-fine target recognition
NASA Astrophysics Data System (ADS)
Folkesson, Martin; Grönwall, Christina; Jungert, Erland
2006-04-01
A fusion approach in a query-based information system is presented. The system is designed for querying multimedia data bases and is here applied to target recognition using heterogeneous data sources. The recognition process is coarse-to-fine, with an initial attribute estimation step and a following matching step. Several sensor types and algorithms are involved in each of these two steps. The matching results are observed to be independent of the origin of the estimation results, which allows data to be distributed between algorithms in an intermediate fusion step without risk of data incest. This increases the overall chance of recognising the target. An implementation of the system is described.
An interactive program for pharmacokinetic modeling.
Lu, D R; Mao, F
1993-05-01
A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C computer language, based on the high-level user interface of the Macintosh operating system. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method using the chi-square criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
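The two numerical steps described, stripping-style initial estimates followed by Levenberg-Marquardt fitting, can be sketched with a biexponential disposition model. Data points and starting values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

# Biexponential disposition model fitted by Levenberg-Marquardt, with
# starting values of the kind exponential stripping would supply.
t = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24], dtype=float)    # h
conc = np.array([8.3, 7.0, 5.3, 3.5, 2.2, 1.2, 0.66, 0.11])    # mg/L (illustrative)

def biexp(t, a, alpha, b, beta):
    return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

p0 = [5.0, 1.0, 5.0, 0.1]                    # stripping-style initial estimates
params, _ = curve_fit(biexp, t, conc, p0=p0, method="lm")
print(dict(zip(["A", "alpha", "B", "beta"], np.round(params, 3))))
```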
A state space based approach to localizing single molecules from multi-emitter images.
Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J
2017-01-28
Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.
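A one-dimensional analogue shows the Hankel/SVD machinery: a sampled response that is a sum of damped complex exponentials is packed into shifted Hankel matrices, a balanced realization recovers the system matrix, and its eigenvalues are the poles. The paper applies this to 2D images; the two-mode signal here is a synthetic stand-in:

```python
import numpy as np
from scipy.linalg import hankel, svd

# Balanced state-space realization (ERA-style) from Hankel matrices, 1D sketch.
n, order = 60, 2
k = np.arange(n)
poles_true = np.array([0.96 * np.exp(1j * 0.8), 0.92 * np.exp(1j * 1.9)])
signal = (poles_true[0] ** k) + 0.7 * (poles_true[1] ** k)

m = n // 2
H0 = hankel(signal[:m], signal[m - 1 : -1])     # entries signal[i+j]
H1 = hankel(signal[1 : m + 1], signal[m:])      # shifted: entries signal[i+j+1]
U, s, Vt = svd(H0)
U, s, Vt = U[:, :order], s[:order], Vt[:order]
S_isqrt = np.diag(1.0 / np.sqrt(s))
A = S_isqrt @ U.conj().T @ H1 @ Vt.conj().T @ S_isqrt   # realized system matrix
poles_est = np.linalg.eigvals(A)
print(np.sort_complex(poles_est))               # matches poles_true closely
```

As the abstract notes, such pole-based locations serve as good initial conditions for a maximum likelihood refinement that approaches the Cramér-Rao bound.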
NASA Astrophysics Data System (ADS)
Koshimizu, K.; Uchida, T.
2015-12-01
Initial large-scale sediment yields caused by heavy rainfall or major storms have left a strong impression. Previous studies focusing on landslide management investigated the initial sediment movement and its mechanism. However, integrated management of catchment-scale sediment movement requires estimating the sediment yield produced by the subsequent expansion of rainfall-induced landslides, in addition to the initial landslide movement. This study presents a quantitative analysis of expanded landslides based on a survey of the Shukushubetsu River basin, at the foot of the Hidaka mountain range in central Hokkaido, Japan. This area recorded heavy rainfall in 2003, reaching a maximum daily precipitation of 388 mm. We extracted the expanded landslides from 2003 to 2008 using aerial photographs taken over the river area. In particular, we calculated the probability of expansion for each landslide, the ratio of the landslide area in 2008 to that in 2003, and the expanded landslide area corresponding to each initial landslide area. As a result, the probability of expansion for each landslide is estimated at about 24%. In addition, each expanded landslide area is smaller than the initial landslide area; the expanded area in 2008 is approximately 7% of the corresponding landslide area in 2003. Therefore, the sediment yield from subsequent expanded landslides is equal to or only slightly greater than the sediment yield under typical base flow. Thus, we concluded that, for the management of catchment-scale sediment movement, the sediment yield from subsequent expanded landslides is lower than the initial large-scale sediment yield caused by heavy rainfall.
1983-12-01
[Fragmentary abstract on system identification of the Shuttle's aerothermodynamic environment; only the following is recoverable.] The Space Transportation System (STS) has offered the engineering community a unique opportunity to flight test a reentry, hypersonic vehicle. Although the initial test flights have now been completed, data analysis and expansion of the existing data base continue.
Initial alignment method for free space optics laser beam
NASA Astrophysics Data System (ADS)
Shimada, Yuta; Tashiro, Yuki; Izumi, Kiyotaka; Yoshida, Koichi; Tsujimura, Takeshi
2016-08-01
The authors have newly proposed and constructed an active free space optics transmission system. It is equipped with a motor-driven laser emitting mechanism and positioning photodiodes, and it transmits a collimated thin laser beam and accurately steers the laser beam direction. The laser beam must be brought within the sensible range of the receiver before laser beam tracking control can begin. This paper studies an estimation method for the laser reaching point to support initial laser beam alignment. Distributed photodiodes detect the laser luminescence at their respective positions, and the optical axis of the laser beam is analytically inferred based on Gaussian beam optics. Computer simulation evaluates the accuracy of the proposed estimation methods, and the results show that the methods can guide the laser beam to a distant receiver.
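One simple way to infer the beam axis from scattered photodiode readings, in the spirit of the Gaussian-optics estimation described above, is to fit a Gaussian irradiance profile to the measured intensities. The geometry, spot size, and readings below are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_spot(xy, x0, y0, I0, w):
    """Gaussian beam irradiance in the receiver plane: I = I0*exp(-2 r^2 / w^2)."""
    x, y = xy
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return I0 * np.exp(-2.0 * r2 / w ** 2)

# Photodiode positions [m] and measured intensities (illustrative values)
px = np.array([-0.05, 0.05, 0.05, -0.05, 0.0])
py = np.array([-0.05, -0.05, 0.05, 0.05, 0.0])
I = np.array([0.31, 0.45, 0.38, 0.27, 0.62])

# Fit the beam centre (x0, y0); w is the assumed spot radius at the receiver
popt, _ = curve_fit(gaussian_spot, (px, py), I, p0=[0.0, 0.0, 1.0, 0.1])
x0, y0, I0, w = popt
print(f"estimated beam axis crosses the receiver plane at ({x0:.3f}, {y0:.3f}) m")
```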
Application of nonlinear models to estimate the gain of one-dimensional free-electron lasers
NASA Astrophysics Data System (ADS)
Peter, E.; Rizzato, F. B.; Endler, A.
2017-06-01
In the present work, we make use of simplified nonlinear models based on the compressibility factor (Peter et al., Phys. Plasmas, vol. 20 (12), 2013, 123104) to predict the gain of one-dimensional (1-D) free-electron lasers (FELs), considering space-charge and thermal effects. These models proved reasonable for estimating some aspects of 1-D FEL theory, such as the position of the onset of mixing in the case of an initially cold electron beam, and the position of the breakdown of the laminar regime in the case of an initially warm beam (Peter et al., Phys. Plasmas, vol. 21 (11), 2014, 113104). The results given by the models are compared to wave-particle simulations, showing reasonable agreement.
In order to predict the margin between the dose needed for adverse chemical effects and actual human exposure rates, data on hazard, exposure, and toxicokinetics are needed. In vitro methods, biomonitoring, and mathematical modeling have provided initial estimates for many extant...
Human Factors Evaluation of Advanced Electric Power Grid Visualization Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greitzer, Frank L.; Dauenhauer, Peter M.; Wierks, Tamara G.
This report describes an initial human factors evaluation of four visualization tools (Graphical Contingency Analysis, Force Directed Graphs, Phasor State Estimator, and Mode Meter/Mode Shapes) developed by PNNL, and proposes test plans that may be implemented to evaluate their utility in scenario-based experiments.
Analysis of light vehicle crashes and pre-crash scenarios based on the 2000 General Estimates System
DOT National Transportation Integrated Search
2003-02-01
This report analyzes the problem of light vehicle crashes in the United States to support the development and assessment of effective crash avoidance systems as part of the U.S. Department of Transportation's Intelligent Vehicle Initiative. The analy...
SPARC (SPARC Performs Automated Reasoning in Chemistry) chemical reactivity models were extended to calculate acid and neutral hydrolysis rate constants of phosphate esters in water. The rate is calculated from the energy difference between the initial and transition states of a ...
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
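The selection logic above (choose design points that most constrain the predicted response of a bounded uncertain parameter space, without any initial point estimate) can be illustrated with plain Monte Carlo sampling standing in for the paper's sparse grids and scenario trees. The toy model, bounds, and spread-based criterion below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def response(theta, t):
    """Toy dynamical output; theta columns are (amplitude, rate)."""
    return theta[..., 0:1] * (1 - np.exp(-theta[..., 1:2] * t))

# Bounded uncertain parameter space; no initial point estimate is required
lo, hi = np.array([0.5, 0.1]), np.array([2.0, 1.0])
theta = lo + (hi - lo) * rng.uniform(size=(2000, 2))

# Candidate design points: measurement times
t_candidates = np.linspace(0.5, 10, 20)
pred = response(theta, t_candidates)   # (2000, 20) ensemble of predicted outputs

# Pick the design point where the predicted output spread is largest,
# i.e. where a measurement would discriminate most between parameter sets
spread = pred.std(axis=0)
best_t = t_candidates[np.argmax(spread)]
print(f"most informative measurement time ~ {best_t:.2f}")
```

Choosing the point of maximum ensemble spread is a crude surrogate for the paper's criterion of constraining output uncertainty to within experimental noise; it conveys the shape of the computation, not its details.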
A LiDAR data-based camera self-calibration method
NASA Astrophysics Data System (ADS)
Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun
2018-07-01
To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. The parameters are estimated using particle swarm optimization (PSO), which searches for the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images, establishment of a cost function based on the Kruppa equations, and PSO optimization using LiDAR data as the initialization input. To improve the precision of the matching pairs, a new method combining the maximal information coefficient (MIC) and the maximum asymmetry score (MAS) was used together with the RANSAC algorithm to remove false matching pairs. Highly precise matching pairs were used to calculate the fundamental matrix, so that the new cost function (deduced from the Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO to obtain the optimal solution. To keep the optimization from being trapped in a local optimum, LiDAR data were used to determine the scope of initialization, based on the solution to the P4P problem for the camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors of less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was better than 1.0 cm. Experimental and simulated results demonstrated that the proposed method is highly accurate and robust.
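Particle swarm optimization itself is generic, so a compact version is sketched below with a stand-in cost function; the actual Kruppa-equation cost depends on fundamental matrices not reproduced here. The target intrinsics and the initialization bounds (mimicking the LiDAR/P4P-derived scope) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(p):
    """Stand-in for the Kruppa-equation cost over (fx, fy, cx, cy)."""
    target = np.array([1200.0, 1180.0, 640.0, 480.0])
    return np.sum((p - target) ** 2)

# Initialization scope (e.g. derived from LiDAR/P4P focal-length bounds)
lo = np.array([800.0, 800.0, 500.0, 350.0])
hi = np.array([1600.0, 1600.0, 800.0, 600.0])

n, dim, w, c1, c2 = 40, 4, 0.72, 1.5, 1.5
x = lo + (hi - lo) * rng.uniform(size=(n, dim))   # particle positions
v = np.zeros((n, dim))                            # particle velocities
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)                    # keep particles in the scope
    f = np.array([cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print(gbest)  # converges toward the cost minimum inside the bounds
```

Constraining the swarm to a well-chosen initialization scope, as the LiDAR data provides here, is what guards against convergence to a local optimum.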
A combined surface/volume scattering retracking algorithm for ice sheet satellite altimetry
NASA Technical Reports Server (NTRS)
Davis, Curt H.
1992-01-01
An algorithm that is based upon a combined surface-volume scattering model is developed. It can be used to retrack individual altimeter waveforms over ice sheets. An iterative least-squares procedure is used to fit the combined model to the return waveforms. The retracking algorithm comprises two distinct sections. The first generates initial model parameter estimates from a filtered altimeter waveform. The second uses the initial estimates, the theoretical model, and the waveform data to generate corrected parameter estimates. This retracking algorithm can be used to assess the accuracy of elevations produced from current retracking algorithms when subsurface volume scattering is present. This is extremely important so that repeated altimeter elevation measurements can be used to accurately detect changes in the mass balance of the ice sheets. By analyzing the distribution of the model parameters over large portions of the ice sheet, regional and seasonal variations in the near-surface properties of the snowpack can be quantified.
Point Cloud Based Relative Pose Estimation of a Satellite in Close Range
Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming
2016-01-01
Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the choice of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model by using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching that processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides the true pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments were performed; the results demonstrate the algorithm's ability to operate on point clouds directly and to handle large pose variations. A field testing experiment was also conducted, and the results show that the proposed method is effective. PMID:27271633
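The tracking stage rests on the standard Iterative Closest Point step: match points by nearest neighbour, then solve for the best rigid transform in closed form via SVD. A bare-bones version of that step is sketched below on synthetic stand-in clouds; the rotation, translation, and cloud sizes are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch/SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def icp_step(source, target):
    """One ICP iteration: nearest-neighbour matching plus closed-form alignment."""
    _, idx = cKDTree(target).query(source)
    R, t = best_rigid_transform(source, target[idx])
    return source @ R.T + t, R, t

# Synthetic model cloud and a rotated/translated "measured" cloud
rng = np.random.default_rng(2)
model = rng.uniform(-1, 1, size=(500, 3))
th = 0.2
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
measured = model @ Rz.T + np.array([0.1, 0.0, 0.05])

aligned = model.copy()
for _ in range(30):
    aligned, R, t = icp_step(aligned, measured)
print(np.abs(aligned - measured).max())   # residual shrinks as ICP converges
```

Because ICP only converges locally, the global initial-acquisition stage described in the abstract is what supplies a pose close enough for this iteration to succeed.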
Chew, Derek P; Carter, Robert; Rankin, Bree; Boyden, Andrew; Egan, Helen
2010-05-01
The cost effectiveness of a general practice-based program for managing coronary heart disease (CHD) patients in Australia remains uncertain. We have explored this through an economic model. A secondary prevention program based on initial clinical assessment and 3-monthly review, optimisation of pharmacotherapies and lifestyle modification, supported by a disease registry and financial incentives for quality of care and outcomes achieved, was assessed in terms of its incremental cost effectiveness ratio (ICER), in Australian dollars per disability adjusted life year (DALY) prevented. Based on 2006 estimates, 263 487 DALYs were attributable to CHD in Australia. The proposed program would add $115 650 000 to the annual national health expenditure. Using an estimated 15% reduction in death and disability and a 40% estimated program uptake, the program's ICER is $8081 per DALY prevented. With more conservative estimates of effectiveness and uptake, estimates of up to $38 316 per DALY are observed in sensitivity analysis. Although innovation in CHD management promises improved future patient outcomes, many therapies and strategies proven to reduce morbidity and mortality are available today. A general practice-based program for the optimal application of current therapies is likely to be cost-effective and provide substantial and sustainable benefits to the Australian community.
Liu, Tao; Guo, Yin; Yang, Shourui; Yin, Shibin; Zhu, Jigui
2017-01-01
Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology to enable robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. Measuring poses of the camera are optimized based on uncertainty analysis. Also, the principle of the robotic intelligent grasping system was developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker, and results show the RMS angle and position error are about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also successfully performed in the experiments. PMID:28216555
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behavior. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe the battery's dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate the battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model and therefore contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment provides a robust and accurate battery model and on-line parameter estimation.
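The filtering step can be illustrated with a bare-bones EKF on a first-order RC equivalent-circuit model. The OCV curve, cell parameters, and noise covariances below are invented for the sketch, and the paper's proportional-integral error adjustment is omitted.

```python
import numpy as np

dt, Q = 1.0, 3600.0                 # time step [s], capacity [A*s] (1 Ah)
R0, R1, C1 = 0.05, 0.02, 1500.0     # assumed cell parameters
a = np.exp(-dt / (R1 * C1))

def ocv(soc):
    """Invented linear open-circuit-voltage curve."""
    return 3.0 + 1.2 * soc

F = np.array([[1.0, 0.0], [0.0, a]])   # state transition for (SOC, V_rc)
Qn = np.diag([1e-10, 1e-8])            # process noise covariance
Rn = 1e-4                              # measurement noise variance [V^2]

x = np.array([0.5, 0.0])               # initial SOC guess and RC voltage
P = np.diag([0.01, 1e-4])

def ekf_step(x, P, i_k, v_meas):
    # Predict (current i_k > 0 on discharge)
    x = np.array([x[0] - i_k * dt / Q, a * x[1] + R1 * (1 - a) * i_k])
    P = F @ P @ F.T + Qn
    # Measurement model: v = OCV(SOC) - V_rc - R0*i; H is its Jacobian
    H = np.array([1.2, -1.0])
    v_pred = ocv(x[0]) - x[1] - R0 * i_k
    S = H @ P @ H + Rn
    K = P @ H / S
    x = x + K * (v_meas - v_pred)
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

# One filter step with a 1 A discharge and a measured 3.55 V terminal voltage
x, P = ekf_step(x, P, 1.0, 3.55)
print(x)
```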
On-Orbit Multi-Field Wavefront Control with a Kalman Filter
NASA Technical Reports Server (NTRS)
Lou, John; Sigrist, Norbert; Basinger, Scott; Redding, David
2008-01-01
A document describes a multi-field wavefront control (WFC) procedure for the James Webb Space Telescope (JWST) on-orbit optical telescope element (OTE) fine-phasing using wavefront measurements at the NIRCam pupil. The control is applied to JWST primary mirror (PM) segments and secondary mirror (SM) simultaneously with a carefully selected ordering. Through computer simulations, the multi-field WFC procedure shows that it can reduce the initial system wavefront error (WFE), as caused by random initial system misalignments within the JWST fine-phasing error budget, from a few dozen micrometers to below 50 nm across the entire NIRCam Field of View, and the WFC procedure is also computationally stable as the Monte-Carlo simulations indicate. With the incorporation of a Kalman Filter (KF) as an optical state estimator into the WFC process, the robustness of the JWST OTE alignment process can be further improved. In the presence of some large optical misalignments, the Kalman state estimator can provide a reasonable estimate of the optical state, especially for those degrees of freedom that have a significant impact on the system WFE. The state estimate allows for a few corrections to the optical state to push the system towards its nominal state, and the result is that a large part of the WFE can be eliminated in this step. When the multi-field WFC procedure is applied after Kalman state estimate and correction, the stability of fine-phasing control is much more certain. Kalman Filter has been successfully applied to diverse applications as a robust and optimal state estimator. In the context of space-based optical system alignment based on wavefront measurements, a KF state estimator can combine all available wavefront measurements, past and present, as well as measurement and actuation error statistics to generate a Maximum-Likelihood optimal state estimator. The strength and flexibility of the KF algorithm make it attractive for use in real-time optical system alignment when WFC alone cannot effectively align the system.
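At its core, the optical-state estimator described above applies the standard Kalman measurement update to wavefront data. The sketch below shows that update with placeholder dimensions and covariances, not JWST values; the sensitivity matrix H would come from the optical model.

```python
import numpy as np

def kalman_update(x, P, y, H, R):
    """One Kalman measurement update for an optical state estimator.

    x: prior estimate of the misalignment state (e.g. segment tips/tilts)
    P: prior state covariance;  y: wavefront measurement vector
    H: wavefront sensitivity matrix from the optical model
    R: wavefront-sensing noise covariance
    """
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x_new = x + K @ (y - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: 3 misalignment DOFs observed through 5 wavefront samples
rng = np.random.default_rng(3)
H = rng.standard_normal((5, 3))
x_true = np.array([0.4, -0.2, 0.1])
y = H @ x_true + 0.01 * rng.standard_normal(5)
x, P = kalman_update(np.zeros(3), np.eye(3), y, H, 1e-4 * np.eye(5))
print(x)  # close to x_true
```

Folding measurement and actuation error statistics into P and R is what makes the combined estimate maximum-likelihood optimal, as the document notes.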
Porru, Marcella; Özkan, Leyla
2017-08-30
This work investigates the design of alternative monitoring tools based on state estimators for industrial crystallization systems with nucleation, growth, and agglomeration kinetics. The estimation problem is regarded as a structure design problem where the estimation model and the set of innovated states have to be chosen; the estimator is driven by the available measurements of secondary variables. On the basis of Robust Exponential estimability arguments, it is found that the concentration is distinguishable with temperature and solid fraction measurements while the crystal size distribution (CSD) is not. Accordingly, a state estimator structure is selected such that (i) the concentration (and other distinguishable states) are innovated by means of the secondary measurements processed with the geometric estimator (GE), and (ii) the CSD is estimated by means of a rigorous model in open loop mode. The proposed estimator has been tested through simulations showing good performance in the case of mismatch in the initial conditions, parametric plant-model mismatch, and noisy measurements.
Estimating recharge rates with analytic element models and parameter estimation
Dripps, W.R.; Hunt, R.J.; Anderson, M.P.
2006-01-01
Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold from 1996 to 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and with results from previous studies, supporting the utility of linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).
Stoessel, Andrew M; Hale, Cory M; Seabury, Robert W; Miller, Christopher D; Steele, Jeffrey M
2018-01-01
This study aimed to assess the impact of area under the curve (AUC)-based vancomycin monitoring on pharmacist-initiated dose adjustments after transitioning from a trough-only to an AUC-based monitoring method at our institution. A retrospective cohort study of patients treated with vancomycin for complicated methicillin-resistant Staphylococcus aureus (MRSA) infection between November 2013 and December 2016 was conducted. The frequency of pharmacist-initiated dose adjustments was assessed for patients monitored via trough-only and AUC-based approaches for two trough ranges: 10 to 14.9 mg/L and 15 to 20 mg/L. Fifty patients were included: 36 in the trough-based monitoring group and 14 in the AUC-based monitoring group. With the trough-only approach, the vancomycin dose was increased in 71.4% of patients when troughs were 10 to 14.9 mg/L, compared with only 25% of patients when AUC estimation was used (P = .048). In the AUC group, the dose was increased only when AUC/minimum inhibitory concentration (MIC) <400; unchanged regimens had an estimated AUC/MIC ≥400. AUC-based monitoring did not significantly increase the frequency of dose reductions when trough concentrations were 15 to 20 mg/L (AUC: 33.3% vs trough: 4.6%; P = .107). AUC-based monitoring resulted in fewer dose adjustments when trough levels were 10 to 14.9 mg/L. AUC-based monitoring has the potential to reduce unnecessary vancomycin exposure and warrants further investigation.
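The dosing decision in an AUC-guided workflow reduces to comparing an estimated AUC24/MIC against the 400 threshold cited above. The one-compartment clearance-based estimate and the example numbers below are common simplifications and illustrative assumptions, not the study's own calculations.

```python
def auc_guided_adjustment(daily_dose_mg, clearance_l_per_h, mic_mg_l=1.0):
    """Estimate AUC24/MIC (one-compartment: AUC24 = daily dose / CL)
    and flag whether a dose increase is indicated (threshold 400)."""
    auc24 = daily_dose_mg / clearance_l_per_h   # mg*h/L
    ratio = auc24 / mic_mg_l
    return ratio, ("increase dose" if ratio < 400 else "no increase")

# Example: 2 g/day in a patient with an assumed vancomycin clearance of 4.5 L/h
print(auc_guided_adjustment(2000, 4.5))  # (444.4..., 'no increase')
```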
Journal: A Review of Some Tracer-Test Design Equations for ...
Determination of the necessary tracer mass, the initial sample-collection time, and the subsequent sample-collection frequency are the three most difficult aspects to estimate for a proposed tracer test prior to conducting it. To facilitate tracer-mass estimation, 33 mass-estimation equations are reviewed here, 32 of which were evaluated using previously published tracer-test design examination parameters. Comparison of the results produced a wide range of estimated tracer mass, but no means is available by which one equation may reasonably be selected over the others. Each equation produces a simple approximation for tracer mass. Most of the equations are based primarily on estimates or measurements of discharge, transport distance, and suspected transport times. Although the basic field parameters commonly employed are appropriate for estimating tracer mass, the 33 equations are problematic in that they were all probably based on the original developers' experience in a particular field area and not necessarily on measured hydraulic parameters or solute-transport theory. Suggested sampling frequencies are typically based primarily on probable transport distance, but with little regard to expected travel times. This too is problematic in that it tends to result in false negatives or data aliasing. Simulations from the recently developed efficient hydrologic tracer-test design methodology (EHTD) were compared with those obtained from 32 of the 33 published tracer-
NASA Technical Reports Server (NTRS)
Karmarkar, J. S.
1972-01-01
Proposal of an algorithmic procedure, based on mathematical programming methods, to design compensators for hyperstable discrete model-reference adaptive systems (MRAS). The objective of the compensator is to render the MRAS insensitive to initial parameter estimates within a maximized hypercube in the model parameter space.
Electrically heated particulate filter restart strategy
Gonze, Eugene V [Pinckney, MI; Ament, Frank [Troy, MI
2011-07-12
A control system that controls regeneration of a particulate filter is provided. The system generally includes a propagation module that estimates a propagation status of combustion of particulate matter in the particulate filter. A regeneration module controls current to the particulate filter to re-initiate regeneration based on the propagation status.
WEPP model implementation project with the USDA-Natural Resources Conservation Service
USDA-ARS?s Scientific Manuscript database
The Water Erosion Prediction Project (WEPP) is a physical process-based soil erosion model that can be used to estimate runoff, soil loss, and sediment yield from hillslope profiles, fields, and small watersheds. Initially developed from 1985-1995, WEPP has been applied and validated across a wide r...
NASA Astrophysics Data System (ADS)
Ostrikov, V. N.; Plakhotnikov, O. V.
2014-12-01
Drawing on extensive experimental material, we examine whether the initial data of an airborne hyperspectral survey can be converted into spectral radiance factors (SRF). The external calibration errors are estimated for various observation conditions and different data-acquisition instruments.
Imbibition of hydraulic fracturing fluids into partially saturated shale
NASA Astrophysics Data System (ADS)
Birdsell, Daniel T.; Rajaram, Harihar; Lackey, Greg
2015-08-01
Recent studies suggest that imbibition of hydraulic fracturing fluids into partially saturated shale is an important mechanism that restricts their migration, thus reducing the risk of groundwater contamination. We present computations of imbibition based on an exact semianalytical solution for spontaneous imbibition. These computations lead to quantitative estimates of an imbibition rate parameter (A) with units of LT^(-1/2) for shale, which is related to porous medium and fluid properties and the initial water saturation. Our calculations suggest that significant fractions of injected fluid volumes (15-95%) can be imbibed in shale gas systems, whereas imbibition volumes in shale oil systems are much lower (3-27%). We present a nondimensionalization of A, which provides insights into the critical factors controlling imbibition and facilitates the estimation of A based on readily measured porous medium and fluid properties. For a given set of medium and fluid properties, A varies by less than factors of ~1.8 (gas nonwetting phase) and ~3.4 (oil nonwetting phase) over the range of initial water saturations reported for the Marcellus shale (0.05-0.6). However, for higher initial water saturations, A decreases significantly.
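A rate parameter with units of LT^(-1/2) implies the familiar square-root-of-time scaling for cumulative imbibition depth. The tiny sketch below applies that scaling; the value of A, the fracture-face area, and the time window are illustrative assumptions, not values from the study.

```python
import numpy as np

def imbibed_volume(A, area_m2, t_seconds):
    """Cumulative imbibed volume assuming imbibed depth = A * sqrt(t)
    over a fracture face of the given area."""
    return A * np.sqrt(t_seconds) * area_m2   # m^3

# Illustrative: A = 1e-6 m/s^0.5, 10,000 m^2 of fracture face, 30 days
print(imbibed_volume(1e-6, 1e4, 30 * 86400))  # roughly 16 m^3
```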
Wang, Lutao; Xiao, Jun; Chai, Hua
2015-08-01
The successful suppression of clutter arising from stationary or slowly moving tissue is one of the key issues in medical ultrasound color blood imaging. Remaining clutter may bias the mean blood frequency estimation and result in a potentially misleading description of blood flow. In this paper, based on the principle of the general wall filter, the design processes of three classes of filters are reviewed and analyzed: infinite impulse response with projection initialization (Prj-IIR), polynomial regression (Pol-Reg), and eigen-based filters. The performance of the filters was assessed by calculating the bias and variance of the mean blood velocity using a standard autocorrelation estimator. Simulation results show that the performance of the Pol-Reg filter is similar to that of Prj-IIR filters. Both can offer accurate estimation of mean blood flow speed under steady clutter conditions, and their clutter rejection ability can be enhanced by increasing the ensemble size of the Doppler vector. Eigen-based filters can effectively remove the non-stationary clutter component and further improve the estimation accuracy for low-speed blood flow signals. There is also no significant increase in computational complexity for eigen-based filters when the ensemble size is less than 10.
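The polynomial-regression wall filter, for instance, projects each slow-time Doppler ensemble onto a low-order polynomial basis (the clutter subspace) and keeps the residual. The sketch below shows that projection; the filter order, ensemble length, and signal frequencies are illustrative assumptions.

```python
import numpy as np

def polynomial_regression_filter(ensemble, order=2):
    """Remove clutter by subtracting the least-squares fit of the slow-time
    ensemble onto polynomials of degree <= order (the clutter subspace)."""
    t = np.linspace(-1, 1, len(ensemble))
    basis = np.vander(t, order + 1)
    coef, *_ = np.linalg.lstsq(basis, ensemble, rcond=None)
    return ensemble - basis @ coef              # residual: the blood signal

# Slow-time ensemble: strong near-DC clutter plus a weak blood Doppler tone
n = np.arange(10)
clutter = 50.0 * np.exp(2j * np.pi * 0.01 * n)
blood = 1.0 * np.exp(2j * np.pi * 0.30 * n)
filtered = polynomial_regression_filter(clutter + blood)
print(np.abs(filtered))  # clutter suppressed, blood component retained
```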
Real-time video analysis for retail stores
NASA Astrophysics Data System (ADS)
Hassan, Ehtesham; Maurya, Avinash K.
2015-03-01
With the advancement of video processing technologies, we can capture subtle human responses in a retail store environment, which play a decisive role in store management. In this paper, we present a novel surveillance-video-based analytic system for retail stores targeting localized and global traffic estimates. Developing an intelligent system for human traffic estimation in real life poses a challenging problem because of the variation and noise involved. In this direction, we begin with a novel human tracking system built on an intelligent combination of motion-based and image-level object detection. We demonstrate an initial evaluation of this approach on an available standard dataset, yielding promising results. Exact traffic estimation in a retail store requires the correct separation of customers from service providers. We present a role-based human classification framework using a Gaussian mixture model for this task. A novel feature descriptor named the graded colour histogram is defined for object representation. Using our role-based human classification and tracking system, we define a novel, computationally efficient framework for generating two types of analytics: region-specific people counts and dwell-time estimation. This system has been extensively evaluated and tested on four hours of real-life video captured from a retail store.
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
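Latin hypercube sampling of a parameter space, as GRAPE uses for model placement, can be written compactly: draw one point from each of n equal-probability strata per dimension, then shuffle the strata independently across dimensions. The two-parameter bounds in the example are illustrative.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin hypercube sample: one point per stratum in each dimension,
    with strata paired randomly across dimensions."""
    rng = rng or np.random.default_rng()
    dim = len(bounds)
    u = (np.arange(n_samples)[:, None]
         + rng.uniform(size=(n_samples, dim))) / n_samples
    for d in range(dim):
        rng.shuffle(u[:, d])                     # decouple the dimensions
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Example: 8 parameter samples over illustrative bounds for two parameters
samples = latin_hypercube(8, [(0.1, 2.0), (1e-3, 1e-1)])
print(samples)
```

As the abstract notes, the appeal of LHS here is that adding parameter dimensions does not force the number of filter models to grow, unlike a full stratified grid.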
Cost Sharing, Family Health Care Burden, and the Use of Specialty Drugs for Rheumatoid Arthritis
Karaca-Mandic, Pinar; Joyce, Geoffrey F; Goldman, Dana P; Laouri, Marianne
2010-01-01
Objectives To examine the impact of benefit generosity and household health care financial burden on the demand for specialty drugs in the treatment of rheumatoid arthritis (RA). Data Sources/Study Setting Enrollment, claims, and benefit design information for 35 large private employers during 2000–2005. Study Design We estimated multivariate models of the effects of benefit generosity and household financial burden on initiation and continuation of biologic therapies. Data Extraction Methods We defined initiation of biologic therapy as first-time use of etanercept, adalimumab, or infliximab, and we constructed an index of plan generosity based on coverage of biologic therapies in each plan. We estimated the household's burden by summing up the annual out-of-pocket (OOP) expenses of other family members. Principal Findings Benefit generosity affected both the likelihood of initiating a biologic and continuing drug therapy, although the effects were stronger for initiation. Initiation of a biologic was lower in households where other family members incurred high OOP expenses. Conclusions The use of biologic therapy for RA is sensitive to benefit generosity and household financial burden. The increasing use of coinsurance rates for specialty drugs (as under Medicare Part D) raises concern about adverse health consequences. PMID:20831715
Estimation of effective connectivity via data-driven neural modeling
Freestone, Dean R.; Karoly, Philippa J.; Nešić, Dragan; Aram, Parham; Cook, Mark J.; Grayden, David B.
2014-01-01
This research introduces a new method for functional brain imaging via a process of model inversion. By estimating parameters of a computational model, we are able to track effective connectivity and mean membrane potential dynamics that cannot be directly measured using electrophysiological measurements alone. The ability to track the hidden aspects of neurophysiology will have a profound impact on the way we understand and treat epilepsy. For example, under the assumption the model captures the key features of the cortical circuits of interest, the framework will provide insights into seizure initiation and termination on a patient-specific basis. It will enable investigation into the effect a particular drug has on specific neural populations and connectivity structures using minimally invasive measurements. The method is based on approximating brain networks using an interconnected neural population model. The neural population model is based on a neural mass model that describes the functional activity of the brain, capturing the mesoscopic biophysics and anatomical structure. The model is made subject-specific by estimating the strength of intra-cortical connections within a region and inter-cortical connections between regions using a novel Kalman filtering method. We demonstrate through simulation how the framework can be used to track the mechanisms involved in seizure initiation and termination. PMID:25506315
Measuring and Specifying Combinatorial Coverage of Test Input Configurations
Kuhn, D. Richard; Kacker, Raghu N.; Lei, Yu
2015-01-01
A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved. Combinatorial methods present an opportunity for a different approach to estimating required test set size, using characteristics of the test set. This paper describes methods for estimating the coverage of, and ability to detect, t-way interaction faults of a test set based on a covering array. We also develop a connection between (static) combinatorial coverage and (dynamic) code coverage, such that if a specific condition is satisfied, 100% branch coverage is assured. Using these results, we propose practical recommendations for using combinatorial coverage in specifying test requirements. PMID:28133442
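Measuring t-way combinatorial coverage is direct to implement for t = 2: count the distinct value pairs a test set exercises against all possible pairs. The toy parameter domains and tests below are assumptions for the sketch; the paper's definitions also cover higher strengths and partial coverage measures.

```python
from itertools import combinations, product

def pairwise_coverage(tests, domains):
    """Fraction of all 2-way parameter-value combinations covered by tests."""
    covered = total = 0
    for i, j in combinations(range(len(domains)), 2):
        possible = set(product(domains[i], domains[j]))
        seen = {(t[i], t[j]) for t in tests}
        covered += len(possible & seen)
        total += len(possible)
    return covered / total

# Toy example: 3 binary parameters, 4 tests forming a 2-way covering array
domains = [(0, 1)] * 3
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(pairwise_coverage(tests, domains))  # 1.0
```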
NASA Technical Reports Server (NTRS)
Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.
2005-01-01
This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution in the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge in order to achieve the requested drag tolerance. Although further adaptation was required to meet the requested tolerance, no further cycles were computed in order to avoid large discrepancies between the surface mesh spacing and the refined field spacing.
Austin, Peter C
2018-05-20
Propensity score methods are increasingly being used to estimate the effects of treatments and exposures when using observational data. The propensity score was initially developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (eg, dose or quantity of medication, income, or years of education). We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of continuous exposures on binary outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. We examined both the use of ordinary least squares to estimate the propensity function and the use of the covariate balancing propensity score algorithm. The use of methods based on the GPS was compared with the use of G-computation. All methods resulted in essentially unbiased estimation of the population dose-response function. However, GPS-based weighting tended to result in estimates that displayed greater variability and had higher mean squared error when the magnitude of confounding was strong. Of the methods based on the GPS, covariate adjustment using the GPS tended to result in estimates with lower variability and mean squared error when the magnitude of confounding was strong. We illustrate the application of these methods by estimating the effect of average neighborhood income on the probability of death within 1 year of hospitalization for an acute myocardial infarction. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
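Covariate adjustment using the GPS, one of the variants compared above, can be sketched as follows: model the exposure with OLS, evaluate the GPS as the normal density of the residuals, then include both exposure and GPS in the outcome model. The simulated data and model forms below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 2000
x = rng.standard_normal((n, 2))                             # confounders
expo = x @ np.array([0.5, -0.3]) + rng.standard_normal(n)   # continuous exposure
p = 1 / (1 + np.exp(-(0.8 * expo + x @ np.array([0.4, 0.4]) - 1)))
y = rng.uniform(size=n) < p                                 # binary outcome

# Step 1: propensity function by OLS; GPS = normal density of the residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, expo, rcond=None)
resid = expo - X @ beta
gps = norm.pdf(resid, scale=resid.std())

# Step 2: logistic outcome model in exposure and GPS (covariate adjustment)
D = np.column_stack([np.ones(n), expo, gps])
def nll(b):
    eta = D @ b
    return np.sum(np.logaddexp(0, eta) - y * eta)   # negative log-likelihood
fit = minimize(nll, np.zeros(3))
print(fit.x)  # the exposure coefficient captures the adjusted dose-response
```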
Carriquiry, Gabriela; Fink, Valeria; Koethe, John Robert; Giganti, Mark Joseph; Jayathilake, Karu; Blevins, Meridith; Cahn, Pedro; Grinsztejn, Beatriz; Wolff, Marcelo; Pape, Jean William; Padgett, Denis; Madero, Juan Sierra; Gotuzzo, Eduardo; McGowan, Catherine Carey; Shepherd, Bryan Earl
2015-01-01
Introduction Long-term survival of HIV patients after initiating highly active antiretroviral therapy (ART) has not been sufficiently described in Latin America and the Caribbean, as compared to other regions. The aim of this study was to describe the incidence of mortality, loss to follow-up (LTFU) and associated risk factors for patients enrolled in the Caribbean, Central and South America Network (CCASAnet). Methods We assessed time from ART initiation (baseline) to death or LTFU between 2000 and 2014 among ART-naïve adults (≥18 years) from sites in seven countries included in CCASAnet: Argentina, Brazil, Chile, Haiti, Honduras, Mexico and Peru. Kaplan-Meier techniques were used to estimate the probability of mortality over time. Risk factors for death were assessed using Cox regression models stratified by site and adjusted for sex, baseline age, nadir pre-ART CD4 count, calendar year of ART initiation, clinical AIDS at baseline and type of ART regimen. Results A total of 16,996 ART initiators were followed for a median of 3.5 years (interquartile range (IQR): 1.6–6.2). The median age at ART initiation was 36 years (IQR: 30–44), subjects were predominantly male (63%), median CD4 count was 156 cells/µL (IQR: 60–251) and 26% of subjects had clinical AIDS prior to starting ART. Initial ART regimens were predominantly non-nucleoside reverse transcriptase inhibitor based (86%). The cumulative incidence of LTFU five years after ART initiation was 18.2% (95% confidence interval (CI) 17.5–18.8%). A total of 1582 (9.3%) subjects died; the estimated probability of death one, three and five years after ART initiation was 5.4, 8.3 and 10.3%, respectively. The estimated five-year mortality probability varied substantially across sites, from 3.5 to 14.0%. Risk factors for death were clinical AIDS at baseline (adjusted hazard ratio (HR)=1.65 (95% CI 1.47–1.87); p<0.001), lower baseline CD4 (HR=1.95 (95% CI 1.63–2.32) for 50 vs. 350 cells/µL; p<0.001) and older age (HR=1.47 (95% CI 1.29–1.69) for 50 vs. 30 years at ART initiation; p<0.001). Conclusions In this large, long-term study of mortality among HIV-positive adults initiating ART in Latin America and the Caribbean, overall estimates of mortality were heterogeneous, generally falling between those reported in high-income countries and sub-Saharan Africa. PMID:26165322
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.
Cobbs, Gary
2012-08-16
Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions can thus be analyzed by simultaneous non-linear curve fitting, with the same rate-constant values applying to all curves and each curve having a unique value of initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give a better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than model 2, giving better estimates of initial target concentration when parameters were estimated from qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
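The stepwise idea, per-cycle amplification whose efficiency falls as primers deplete, can be shown with a toy recursion. This is only a cartoon of the equilibrium models in the paper: the efficiency law, constants, and threshold below are invented, not the paper's analytic formulae.

```python
import numpy as np

def stepwise_qpcr(target0, primer0=5e13, k=5e12, cycles=40):
    """Toy stepwise qPCR: per-cycle efficiency follows primer availability,
    eff = P/(P + k), and primers are consumed by the new strands."""
    target, primer, curve = target0, primer0, []
    for _ in range(cycles):
        eff = primer / (primer + k)
        new = eff * target
        target += new
        primer = max(primer - new, 0.0)
        curve.append(target)
    return np.array(curve)

# Two reactions differing only in initial target copy number
for n0 in (1e3, 1e5):
    curve = stepwise_qpcr(n0)
    ct = np.argmax(curve > 1e11)   # first cycle crossing a fixed threshold
    print(n0, ct)                  # higher n0 crosses the threshold earlier
```

The spacing between threshold-crossing cycles for different starting concentrations is the property that lets such curves be fit jointly, with shared rate constants and per-curve initial concentrations, as the abstract describes.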
Efficient visual grasping alignment for cylinders
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.
Siebert, U; Sroczynski, G; Rossol, S; Wasem, J; Ravens-Sieberer, U; Kurth, B M; Manns, M P; McHutchison, J G; Wong, J B
2003-03-01
Peginterferon alpha-2b plus ribavirin therapy in previously untreated patients with chronic hepatitis C yields the highest sustained virological response rates of any treatment strategy but is expensive. To estimate the cost effectiveness of treatment with peginterferon alpha-2b plus ribavirin compared with interferon alpha-2b plus ribavirin for initial treatment of patients with chronic hepatitis C. Individual patient level data from a randomised clinical trial with peginterferon plus ribavirin were applied to a previously published and validated Markov model to project lifelong clinical outcomes. Quality of life and economic estimates were based on German patient data. We used a societal perspective and applied a 3% annual discount rate. Compared with no antiviral therapy, peginterferon plus fixed or weight based dosing of ribavirin increased life expectancy by 4.2 and 4.7 years, respectively. Compared with standard interferon alpha-2b plus ribavirin, peginterferon plus fixed or weight based dosing of ribavirin increased life expectancy by 0.5 and by 1.0 years with incremental cost effectiveness ratios of 11,800 euros and 6600 euros per quality adjusted life year (QALY), respectively. Subgroup analyses by genotype, viral load, sex, and histology showed that peginterferon plus weight based ribavirin remained cost effective compared with other well accepted medical treatments. Peginterferon alpha-2b plus ribavirin should reduce the incidence of liver complications, prolong life, improve quality of life, and be cost effective for the initial treatment of chronic hepatitis C.
Physics-based coastal current tomographic tracking using a Kalman filter.
Wang, Tongchen; Zhang, Ying; Yang, T C; Chen, Huifang; Xu, Wen
2018-05-01
Ocean acoustic tomography can be used, based on measurements of two-way travel-time differences between nodes deployed on the perimeter of the surveyed area, to invert/map the ocean current inside the area. Data at different times can be related using a Kalman filter, and given an ocean circulation model, one can in principle nowcast and even forecast the current distribution given an initial distribution and/or the travel-time difference data on the boundary. However, an ocean circulation model requires many inputs (many of them often not available) and is impractical for estimating the current field. A simplified form of the discretized Navier-Stokes equation is used to show that the future velocity state is just a weighted spatial average of the current state. These weights could be obtained from an ocean circulation model, but here, in a data-driven approach, auto-regressive methods are used to obtain the time- and space-dependent weights from the data. It is shown, based on simulated data, that the current field tracked using a Kalman filter (with an arbitrary initial condition) is more accurate than that estimated by the standard methods in which data at different times are treated independently.
The economics of treatment for infants with respiratory distress syndrome.
Neil, N; Sullivan, S D; Lessler, D S
1998-01-01
To define clinical outcomes and prevailing patterns of care for the initial hospitalization of infants at greatest risk for respiratory distress syndrome (RDS); to estimate direct medical care costs associated with the initial hospitalization; and to introduce and demonstrate a simulation technique for the economic evaluation of health care technologies. Clinical outcomes and usual-care algorithms were determined for infants with RDS in three birthweight categories (500-1,000g; >1,000-1,500g; and >1,500g) using literature- and expert-panel-based data. The experts were practitioners from major U.S. hospitals who were directly involved in the clinical care of such infants. Using the framework derived from the usual care patterns and outcomes, the authors developed an itemized "micro-costing" economic model to simulate the costs associated with the initial hospitalization of a hypothetical RDS patient. The model is computerized and dynamic; unit costs, frequencies, number of days, probabilities and population multipliers are all variable and can be modified on the basis of new information or local conditions. Aggregated unit costs are used to estimate the expected medical costs of treatment per patient. Expected costs of initial hospitalization per uncomplicated surviving infant with RDS were estimated to be $101,867 for 500-1,000g infants; $64,524 for >1,000-1,500g infants; and $27,224 for >1,500g infants. Incremental costs of complications among survivors were estimated to be $22,155 (500-1,000g); $11,041 (>1,000-1,500g); and $2,448 (>1,500 g). Expected costs of initial hospitalization per case (including non-survivors) were $100,603; $72,353; and $28,756, respectively. An itemized model such as the one developed here serves as a benchmark for the economic assessment of treatment costs and utilization. Moreover, it offers a powerful tool for the prospective evaluation of new technologies or procedures designed to reduce the incidence of, severity of, and/or total hospital resource use ascribed to RDS.
A hydroclimatological approach to predicting regional landslide probability using Landlab
NASA Astrophysics Data System (ADS)
Strauch, Ronda; Istanbulluoglu, Erkan; Nudurupati, Sai Siddhartha; Bandaragoda, Christina; Gasparini, Nicole M.; Tucker, Gregory E.
2018-02-01
We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates in a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used for parameter estimation of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km2. The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.
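The core of the Monte Carlo scheme can be sketched with the standard infinite-slope factor-of-safety equation; the parameter distributions below are illustrative assumptions, not the calibrated inputs used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                                     # Monte Carlo iterations per cell

theta = np.deg2rad(35.0)                       # slope angle of this cell
h     = rng.triangular(0.5, 1.0, 2.0, N)       # soil depth [m]
C     = rng.uniform(2e3, 8e3, N)               # combined root+soil cohesion [Pa]
phi   = np.deg2rad(rng.uniform(28, 38, N))     # internal friction angle
m     = rng.uniform(0.2, 1.0, N)               # relative wetness (recharge-driven)
g_s, g_w = 17e3, 9.81e3                        # soil and water unit weights [N/m^3]

# Infinite-slope factor of safety; FS < 1 marks landslide initiation.
FS = C / (g_s * h * np.sin(theta) * np.cos(theta)) \
   + (1 - m * g_w / g_s) * np.tan(phi) / np.tan(theta)
print(f"Annual probability of initiation ~ {np.mean(FS < 1.0):.2f}")
```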
Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones
Wang, Zhen; Jin, Bingwen; Geng, Weidong
2017-01-01
The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765
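The frame transformation at the heart of the pipeline can be illustrated as follows; the sketch assumes the antenna boresight is its local +z axis and that the IMU supplies a camera-to-ENU rotation, both simplifying assumptions rather than the paper's calibration model:

```python
import numpy as np

def downtilt_azimuth(R_earth_cam, R_cam_ant):
    # Compose rotations: antenna orientation expressed in the Earth (ENU) frame.
    R_earth_ant = R_earth_cam @ R_cam_ant
    b = R_earth_ant[:, 2]                    # assumed boresight axis, in ENU
    # Azimuth measured clockwise from north; downtilt positive below horizontal.
    azimuth  = np.degrees(np.arctan2(b[0], b[1])) % 360.0
    downtilt = np.degrees(np.arcsin(-b[2] / np.linalg.norm(b)))
    return downtilt, azimuth
```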
Automatic Camera Calibration for Cultural Heritage Applications Using Unstructured Planar Objects
NASA Astrophysics Data System (ADS)
Adam, K.; Kalisperakis, I.; Grammatikopoulos, L.; Karras, G.; Petsa, E.
2013-07-01
As a rule, image-based documentation of cultural heritage today relies on ordinary digital cameras and commercial software. As such projects often involve researchers not familiar with photogrammetry, the question of camera calibration is important. Freely available, open-source, user-friendly software for automatic camera calibration, often based on simple 2D chess-board patterns, is an answer to the demand for simplicity and automation. However, such tools cannot respond to all requirements met in cultural heritage conservation regarding possible imaging distances and focal lengths. Here we investigate the practical possibility of camera calibration from unknown planar objects, i.e. any planar surface with adequate texture; we have focused on the example of urban walls covered with graffiti. Images are connected pair-wise with inter-image homographies, which are estimated automatically through a RANSAC-based approach after extracting and matching interest points with the SIFT operator. All valid points are identified on all images on which they appear. Provided that the image set includes a "fronto-parallel" view, inter-image homographies with this image are regarded as emulations of image-to-world homographies and allow computing initial estimates for the interior and exterior orientation elements. Following this initialization step, the estimates are introduced into a final self-calibrating bundle adjustment. Measures are taken to discard unsuitable images and verify object planarity. Results from practical experimentation indicate that this method may produce satisfactory results. The authors intend to incorporate the described approach into their freely available user-friendly software tool, which relies on chess-boards, to assist non-experts in their projects with image-based approaches.
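The pair-wise homography step can be sketched with standard OpenCV calls (a generic illustration, not the authors' implementation; OpenCV >= 4.4 ships SIFT in the main package):

```python
import cv2
import numpy as np

def pairwise_homography(img1, img2):
    # Extract and match SIFT interest points between the two images.
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Robust inter-image homography via RANSAC; mask flags the inlier points.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, mask
```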
Angeletti, C; Pezzotti, P; Antinori, A; Mammone, A; Navarra, A; Orchi, N; Lorenzini, P; Mecozzi, A; Ammassari, A; Murachelli, S; Ippolito, G; Girardi, E
2014-03-01
Combination antiretroviral therapy (cART) has become the main driver of total costs of caring for persons living with HIV (PLHIV). The present study estimated the short/medium-term cost trends in response to the recent evolution of national guidelines and regional therapeutic protocols for cART in Italy. We developed a deterministic mathematical model that was calibrated using epidemic data for Lazio, a region located in central Italy with about six million inhabitants. In the Base Case Scenario, the estimated number of PLHIV in the Lazio region increased over the period 2012-2016 from 14 414 to 17 179. Over the same period, the average projected annual cost for treating the HIV-infected population was €147.0 million. An earlier cART initiation resulted in a rise of 2.3% in the average estimated annual cost, whereas an increase from 27% to 50% in the proportion of naïve subjects starting cART with a nonnucleoside reverse transcriptase inhibitor (NNRTI)-based regimen resulted in a reduction of 0.3%. Simplification strategies based on NNRTIs co-formulated in a single tablet regimen and protease inhibitor/ritonavir-boosted monotherapy produced an overall reduction in average annual costs of 1.5%. A further average saving of 3.3% resulted from the introduction of generic antiretroviral drugs. In the medium term, cost saving interventions could finance the increase in costs resulting from the inertial growth in the number of patients requiring treatment and from the earlier treatment initiation recommended in recent guidelines. © 2013 British HIV Association.
Optimizing Spectral Wave Estimates with Adjoint-Based Sensitivity Maps
2014-02-18
Sensitivity maps are generally constructed for a selected system indicator (e.g., vorticity) by computing the differential of the spectral action balance equation, generally initialized at the offshore boundary with spectral wave and other outputs from regional models such as SWAN. [Reference fragment: Orzech MD, Ngodock HE (2013) Validation of a wave data assimilation system based on SWAN. Geophys Res Abst, (15), EGU2013-5951-1, EGU General Assembly.]
Cost Estimate for Molybdenum and Tantalum Refractory Metal Alloy Flow Circuit Concepts
NASA Technical Reports Server (NTRS)
Hickman, Robert R.; Martin, James J.; Schmidt, George R.; Godfroy, Thomas J.; Bryhan, A.J.
2010-01-01
The Early Flight Fission-Test Facilities (EFF-TF) team at NASA Marshall Space Flight Center (MSFC) has been tasked by the Naval Reactors Prime Contract Team (NRPCT) to provide a cost and delivery rough order of magnitude estimate for a refractory metal-based lithium (Li) flow circuit. The design is based on the stainless steel Li flow circuit that is currently being assembled for an NRPCT task underway at the EFF-TF. While geometrically the flow circuit is not representative of a final flight prototype, knowledge has been gained to quantify (time and cost) the materials, manufacturing, fabrication, assembly, and operations to produce a testable configuration. This Technical Memorandum (TM) also identifies the following key issues that need to be addressed by the fabrication process: Alloy selection and forming, cost and availability, welding, bending, machining, assembly, and instrumentation. Several candidate materials were identified by NRPCT including molybdenum (Mo) alloy (Mo-47.5%Re), tantalum (Ta) alloys (T-111, ASTAR-811C), and niobium (Nb) alloy (Nb-1%Zr). This TM is focused only on the Mo and Ta alloys, since they are of higher concern to the ongoing effort. The initial estimate to complete a Mo-47%Re system ready for testing is ≈$9,000k over a period of 30 mo. The initial estimate to complete a T-111 or ASTAR-811C system ready for testing is ≈$12,000k over a period of 36 mo.
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight-updating algorithm, providing minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
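A heavily simplified stand-in for the weight-updating idea is sketched below: a logistic "network" is refit on the retraining set while an L2 proximity term limits degradation of the previous weights. The update rule and all constants are assumptions of this sketch, not the paper's algorithm.

```python
import numpy as np

def retrain_weights(w_prev, X, y, lam=1.0, lr=0.1, steps=200):
    # Fit the current retraining set (X, y) while penalizing departure from
    # the previous knowledge via lam * ||w - w_prev||^2.
    w = w_prev.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))              # classifier output
        grad = X.T @ (p - y) / len(y) + 2 * lam * (w - w_prev)
        w -= lr * grad
    return w
```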
Fracture Surface Analysis of Clinically Failed Fixed Partial Dentures
Taskonak, B.; Mecholsky, J.J.; Anusavice, K.J.
2008-01-01
Ceramic systems have limited long-term fracture resistance, especially when they are used in posterior areas or for fixed partial dentures. The objective of this study was to determine the site of crack initiation and the causes of fracture of clinically failed ceramic fixed partial dentures. Six Empress 2® lithia-disilicate (Li2O·2SiO2)-based veneered bridges and 7 experimental lithia-disilicate-based non-veneered ceramic bridges were retrieved and analyzed. Fractography and fracture mechanics methods were used to estimate the stresses at failure in 6 bridges (50%) whose fracture initiated from the occlusal surface of the connectors. Fracture of 1 non-veneered bridge (8%) initiated within the gingival surface of the connector. Three veneered bridges fractured within the veneer layers. Failure stresses of the all-core fixed partial dentures ranged from 107 to 161 MPa. Failure stresses of the veneered fixed partial dentures ranged from 19 to 68 MPa. We conclude that fracture initiation sites are controlled primarily by contact damage. PMID:16498078
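The fractographic stress estimate rests on the standard fracture mechanics relation sigma_f = K_c / (Y * sqrt(c)). A minimal sketch, with an assumed geometry factor and illustrative toughness and flaw-size values (not measurements from this study):

```python
import math

def failure_stress_from_flaw(K_c, c, Y=1.24):
    # sigma_f = K_c / (Y * sqrt(c)); Y ~ 1.24 is an assumed geometry factor
    # for a semicircular surface flaw. K_c in MPa*m^0.5, c in m.
    return K_c / (Y * math.sqrt(c))

# Example: lithia-disilicate core, K_c ~ 3.0 MPa*m^0.5, flaw depth c = 200 um.
print(failure_stress_from_flaw(3.0, 200e-6))   # ~170 MPa
```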
Initial planetary base construction techniques and machine implementation
NASA Technical Reports Server (NTRS)
Crockford, William W.
1987-01-01
Conceptual designs of (1) initial planetary base structures, and (2) an unmanned machine to perform the construction of these structures using materials local to the planet are presented. Rock melting is suggested as a possible technique to be used by the machine in fabricating roads, platforms, and interlocking bricks. Identification of problem areas in machine design and materials processing is accomplished. The feasibility of the designs is contingent upon favorable results of an analysis of the engineering behavior of the product materials. The analysis requires knowledge of several parameters for solution of the constitutive equations of the theory of elasticity. An initial collection of these parameters is presented which helps to define research needed to perform a realistic feasibility study. A qualitative approach to estimating power and mass lift requirements for the proposed machine is used which employs specifications of currently available equipment. An initial, unmanned mission scenario is discussed with emphasis on identifying uncompleted tasks and suggesting design considerations for vehicles and primitive structures which use the products of the machine processing.
Tracer-Test Planning Using the Efficient Hydrologic Tracer ...
Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass.Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to be
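The mass root-finding step can be sketched against the instantaneous-release solution of the one-dimensional ADE; all field values below are assumptions chosen for illustration:

```python
import numpy as np
from scipy.optimize import brentq

def ade_conc(M, x, t, v, D, A):
    # 1D advection-dispersion solution for an instantaneous release of mass M.
    return M / (A * np.sqrt(4 * np.pi * D * t)) * np.exp(-(x - v * t) ** 2 / (4 * D * t))

# Assumed initial estimates: v from time of travel, D from the CSTR analog.
x, v, D, A = 500.0, 0.05, 0.5, 2.0      # m, m/s, m^2/s, m^2
t_peak = x / v                           # approximate arrival time of the peak
C_target = 5e-3                          # preset average concentration [kg/m^3]

# Root-find the tracer mass whose peak concentration matches the preset value.
M = brentq(lambda M: ade_conc(M, x, t_peak, v, D, A) - C_target, 1e-6, 1e6)
print(f"Estimated tracer mass: {M:.2f} kg")
```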
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarlata, C.; Mosey, G.
2013-05-01
The U.S. Environmental Protection Agency (EPA), in accordance with the RE-Powering America's Land initiative, selected the Former Chanute Air Force Base site in Rantoul, Illinois, for a feasibility study of renewable energy production. The National Renewable Energy Laboratory (NREL) was contacted to provide technical assistance for this project. The purpose of this study was to assess the site for a possible biopower system installation and estimate the cost, performance, and impacts of different biopower options.
Analysing Twitter and web queries for flu trend prediction.
Santos, José Carlos; Matos, Sérgio
2014-05-07
Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health-related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza-like illness in Portugal. Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p<0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the English language. Our results further validate those studies and show that by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results.
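A minimal scikit-learn sketch of the two-stage pipeline follows; the texts, labels, and incidence numbers are placeholders, not the study's data:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LinearRegression

# Stage 1: classify tweets as flu-related (placeholder texts and labels).
texts  = ["estou com febre e tosse", "bom dia Lisboa", "gripe outra vez"]
labels = [1, 0, 1]
vec = CountVectorizer(max_features=650)     # ~650 textual features, as in the study
clf = MultinomialNB().fit(vec.fit_transform(texts), labels)

# Stage 2: regress weekly ILI incidence on the relative frequencies of
# flu-related tweets and of flu-related queries (illustrative numbers).
X = np.array([[0.012, 0.034], [0.020, 0.051], [0.008, 0.027]])
y = np.array([23.1, 41.0, 15.7])            # ILI rate per 100k inhabitants
model = LinearRegression().fit(X, y)
print(model.predict([[0.015, 0.040]]))
```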
Disaster debris estimation using high-resolution polarimetric stereo-SAR
NASA Astrophysics Data System (ADS)
Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki
2016-10-01
This paper addresses the problem of debris estimation which is one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtain this information are far from being optimal as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from the illumination from opposite directions and in different polarizations. By applying model based decomposition of the coherency matrix, only the odd bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris taken on a temporary debris management site in the tsunami affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of derived pile heights allows for a voxel-based estimation of debris volumes with a RMSE of 1099 m3. Advantages of the proposed method are fast computation time, and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information like DEM, topographic maps or GCPs.
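Once pile heights relative to the local ground reference are available, the voxel-based volume reduces to a sum over cells, as in this hedged sketch (cell size and height values are illustrative):

```python
import numpy as np

def debris_volume(height_map, cell_area, base_height=0.0):
    # Voxel-style volume: sum of pile height above the reference base over
    # all debris-covered cells; heights come from the stereo-SAR estimates.
    dh = np.clip(height_map - base_height, 0.0, None)
    return float(dh.sum() * cell_area)

# Example: 0.5 m x 0.5 m cells from a high-resolution height raster.
h = np.array([[1.2, 2.0], [0.0, 3.1]])
print(debris_volume(h, cell_area=0.25))     # 1.575 m^3
```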
Bunnell, Rebecca; O'Neil, Dara; Soler, Robin; Payne, Rebecca; Giles, Wayne H; Collins, Janet; Bauer, Ursula
2012-10-01
The burden of preventable chronic diseases is straining our nation's health and economy. Diseases caused by obesity and tobacco use account for the largest portions of this preventable burden. CDC funded 50 communities in 2010 to implement policy, systems, and environmental (PSE) interventions in a 2-year initiative. Funded communities developed PSE plans to reduce obesity, tobacco use, and second-hand smoke exposure for their combined 55 million residents. Community outcome objectives and milestones were categorized by PSE interventions as they related to media, access, promotion, pricing, and social support. Communities estimated population reach based on their jurisdiction's census data and target populations. The average proportion of each community's population that was reached was calculated for each intervention category. Outcome objectives that were achieved within 12 months of program initiation were identified from routine program records. The average proportion of a community's jurisdictional population reached by a specific intervention varied across interventions. Mean population reach for obesity-prevention interventions was estimated at 35%, with 14 (26%) interventions covering over 50% of the jurisdictional populations. For tobacco prevention, mean population reach was estimated at 67%, with 16 (84%) interventions covering more than 50% of the jurisdictional populations. Within 12 months, communities advanced over one-third of their obesity and tobacco-use prevention strategies. Tobacco interventions appeared to have higher potential population reach than obesity interventions within this initiative. Findings on the progress and potential reach of this major initiative may help inform future chronic disease prevention efforts.
Characterization of Impact Initiation of Aluminum-Based Powder Compacts
NASA Astrophysics Data System (ADS)
Tucker, Michael; Dixon, Sean; Thadhani, Naresh
2011-06-01
Impact initiation of reactions in quasi-statically pressed compacts of Al-Ni, Al-Ta, and Al-W powders is investigated in an effort to characterize the differences in the energy threshold as a function of materials system, volumetric distribution, and environment. The powder compacts were mounted in front of a copper projectile and impacted onto a steel anvil using a 7.62 mm gas gun at velocities up to 500 m/s. The experiments were conducted in an ambient environment, as well as under a 50 millitorr vacuum. An IMACON 200 framing camera was used to observe the transient powder compact densification and deformation states, as well as a signature of reaction based on light emission. Evidence of reaction was also confirmed by post-mortem XRD analysis of the recovered residue. The effective kinetic energy dissipated in processes leading to reaction initiation was estimated and correlated with the reactivity of the various compacts as a function of composition and environment.
Reliability based design including future tests and multiagent approaches
NASA Astrophysics Data System (ADS)
Villanueva, Diane
The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off of the effect of a test and post-test redesign on reliability and cost and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with redesign rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously optimizing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of the design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing solely on locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima. The efficiency of this method was studied, and the method was compared to other surrogate-based optimization methods that aim to locate the global optimum using two two-dimensional test functions, a six-dimensional test function, and a five-dimensional engineering example.
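The first area can be illustrated with a toy Monte Carlo of a single future test and redesign; the error distributions and the redesign rule below are assumptions of this sketch, not the dissertation's models:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Assumed error models: the computed capacity carries a computational error,
# a single future test reveals a noisy measurement, and the design is
# corrected whenever the test misses the target by more than 5%.
true_err = rng.normal(0.0, 0.10, N)            # computational error (fraction)
test_err = rng.normal(0.0, 0.03, N)            # experimental error (fraction)
capacity = 1.0 * (1 + true_err)                # realized design capacity
measured = capacity * (1 + test_err)           # simulated future test outcome

redesign = np.abs(measured - 1.0) > 0.05       # assumed redesign rule
capacity = np.where(redesign, capacity - (measured - 1.0), capacity)

load = rng.normal(0.85, 0.05, N)
print("P(failure) after test/redesign:", np.mean(capacity < load))
print("Redesign fraction (cost driver):", np.mean(redesign))
```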
Jastram, John D.; Moyer, Douglas; Hyer, Kenneth
2009-01-01
Fluvial transport of sediment into the Chesapeake Bay estuary is a persistent water-quality issue with major implications for the overall health of the bay ecosystem. Accurately and precisely estimating the suspended-sediment concentrations (SSC) and loads that are delivered to the bay, however, remains challenging. Although manual sampling of SSC produces an accurate series of point-in-time measurements, robust extrapolation to unmeasured periods (especially highflow periods) has proven to be difficult. Sediment concentrations typically have been estimated using regression relations between individual SSC values and associated streamflow values; however, suspended-sediment transport during storm events is extremely variable, and it is often difficult to relate a unique SSC to a given streamflow. With this limitation for estimating SSC, innovative approaches for generating detailed records of suspended-sediment transport are needed. One effective method for improved suspended-sediment determination involves the continuous monitoring of turbidity as a surrogate for SSC. Turbidity measurements are theoretically well correlated to SSC because turbidity represents a measure of water clarity that is directly influenced by suspended sediments; thus, turbidity-based estimation models typically are effective tools for generating SSC data. The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency Chesapeake Bay Program and Virginia Department of Environmental Quality, initiated continuous turbidity monitoring on three major tributaries of the bay - the James, Rappahannock, and North Fork Shenandoah Rivers - to evaluate the use of turbidity as a sediment surrogate in rivers that deliver sediment to the bay. Results of this surrogate approach were compared to the traditionally applied streamflow-based approach for estimating SSC. Additionally, evaluation and comparison of these two approaches were conducted for nutrient estimations. Results demonstrate that the application of turbidity-based estimation models provides an improved method for generating a continuous record of SSC, relative to the classical approach that uses streamflow as a surrogate for SSC. Turbidity-based estimates of SSC were found to be more accurate and precise than SSC estimates from streamflow-based approaches. The turbidity-based SSC estimation models explained 92 to 98 percent of the variability in SSC, while streamflow-based models explained 74 to 88 percent of the variability in SSC. Furthermore, the mean absolute error of turbidity-based SSC estimates was 50 to 87 percent less than the corresponding values from the streamflow-based models. Statistically significant differences were detected between the distributions of residual errors and estimates from the two approaches, indicating that the turbidity-based approach yields estimates of SSC with greater precision than the streamflow-based approach. Similar improvements were identified for turbidity-based estimates of total phosphorus, which is strongly related to turbidity because total phosphorus occurs predominantly in particulate form. Total nitrogen estimation models based on turbidity and streamflow generated estimates of similar quality, with the turbidity-based models providing slight improvements in the quality of estimations. This result is attributed to the understanding that nitrogen transport is dominated by dissolved forms that relate less directly to streamflow and turbidity. 
Improvements in concentration estimation resulted in improved estimates of load. Turbidity-based suspended-sediment loads estimated for the James River at Cartersville, VA, monitoring station exhibited tighter confidence interval bounds and a coefficient of variation of 12 percent, compared with a coefficient of variation of 38 percent for the streamflow-based load.
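A turbidity-based SSC estimation model of the kind evaluated here is commonly fit as a log-log regression with a retransformation bias correction; the sketch below uses Duan's smearing estimator and illustrative data, not the monitoring records themselves:

```python
import numpy as np

def fit_ssc_model(turbidity, ssc):
    # log10(SSC) = b0 + b1 * log10(turbidity), with Duan's smearing
    # estimator to correct the bias introduced by retransformation.
    X = np.column_stack([np.ones_like(turbidity), np.log10(turbidity)])
    y = np.log10(ssc)
    (b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
    smear = np.mean(10.0 ** (y - X @ np.array([b0, b1])))
    return lambda turb: smear * 10.0 ** (b0 + b1 * np.log10(turb))

turb = np.array([5.0, 20.0, 80.0, 300.0])   # turbidity [NTU], illustrative
ssc  = np.array([8.0, 35.0, 150.0, 600.0])  # SSC [mg/L], illustrative
predict = fit_ssc_model(turb, ssc)
print(predict(np.array([50.0])))            # estimated SSC at 50 NTU
```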
Geodesic regression for image time-series.
Niethammer, Marc; Huang, Yang; Vialard, François-Xavier
2011-01-01
Registration of image-time series has so far been accomplished (i) by concatenating registrations between image pairs, (ii) by solving a joint estimation problem resulting in piecewise geodesic paths between image pairs, (iii) by kernel based local averaging or (iv) by augmenting the joint estimation with additional temporal irregularity penalties. Here, we propose a generative model extending least squares linear regression to the space of images by using a second-order dynamic formulation for image registration. Unlike previous approaches, the formulation allows for a compact representation of an approximation to the full spatio-temporal trajectory through its initial values. The method also opens up possibilities to design image-based approximation algorithms. The resulting optimization problem is solved using an adjoint method.
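In hedged LaTeX notation (a sketch of the stated formulation, not a transcription of the paper's equations), the least-squares geodesic regression energy over the initial values can be written as:

```latex
% The trajectory I(t) is encoded by its initial image I_0 and initial
% momentum p_0; measured images J_i at times t_i enter via squared error.
E(I_0, p_0) = \langle p_0 \nabla I_0,\; K\,(p_0 \nabla I_0) \rangle
  + \frac{1}{\sigma^2} \sum_{i=1}^{N} \bigl\| I(t_i) - J_i \bigr\|^2,
\qquad \text{s.t. } I(t) \text{ solves the second-order geodesic
equations determined by } (I_0, p_0).
```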
Audio-visual speech cue combination.
Arnold, Derek H; Tear, Morgan; Schindel, Ryan; Roseboom, Warrick
2010-04-16
Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or a probability summation. Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
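The Bayesian maximum likelihood benchmark against which these improvements are compared takes the familiar inverse-variance-weighting form, sketched here with illustrative numbers:

```python
def mle_combine(est_a, var_a, est_v, var_v):
    # Weighted summation of two initially independent estimates, with
    # weights inversely proportional to their variances; the combined
    # variance falls below either single-cue variance.
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    combined = w_a * est_a + (1 - w_a) * est_v
    var_c = (var_a * var_v) / (var_a + var_v)
    return combined, var_c

# Example: auditory and visual speech estimates with unequal reliability.
print(mle_combine(est_a=0.8, var_a=0.04, est_v=0.6, var_v=0.09))
```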
Risk and the physics of clinical prediction.
McEvoy, John W; Diamond, George A; Detrano, Robert C; Kaul, Sanjay; Blaha, Michael J; Blumenthal, Roger S; Jones, Steven R
2014-04-15
The current paradigm of primary prevention in cardiology uses traditional risk factors to estimate future cardiovascular risk. These risk estimates are based on prediction models derived from prospective cohort studies and are incorporated into guideline-based initiation algorithms for commonly used preventive pharmacologic treatments, such as aspirin and statins. However, risk estimates are more accurate for populations of similar patients than they are for any individual patient. It may be hazardous to presume that the point estimate of risk derived from a population model represents the most accurate estimate for a given patient. In this review, we exploit principles derived from physics as a metaphor for the distinction between predictions regarding populations versus patients. We identify the following: (1) predictions of risk are accurate at the level of populations but do not translate directly to patients, (2) perfect accuracy of individual risk estimation is unobtainable even with the addition of multiple novel risk factors, and (3) direct measurement of subclinical disease (screening) affords far greater certainty regarding the personalized treatment of patients, whereas risk estimates often remain uncertain for patients. In conclusion, shifting our focus from prediction of events to detection of disease could improve personalized decision-making and outcomes. We also discuss innovative future strategies for risk estimation and treatment allocation in preventive cardiology. Copyright © 2014 Elsevier Inc. All rights reserved.
Adaptive and Personalized Plasma Insulin Concentration Estimation for Artificial Pancreas Systems.
Hajizadeh, Iman; Rashid, Mudassir; Samadi, Sediqeh; Feng, Jianyuan; Sevil, Mert; Hobbs, Nicole; Lazaro, Caterina; Maloney, Zacharie; Brandt, Rachel; Yu, Xia; Turksoy, Kamuran; Littlejohn, Elizabeth; Cengiz, Eda; Cinar, Ali
2018-05-01
The artificial pancreas (AP) system, a technology that automatically administers exogenous insulin in people with type 1 diabetes mellitus (T1DM) to regulate their blood glucose concentrations, necessitates the estimation of the amount of active insulin already present in the body to avoid overdosing. An adaptive and personalized plasma insulin concentration (PIC) estimator is designed in this work to accurately quantify the insulin present in the bloodstream. The proposed PIC estimation approach incorporates Hovorka's glucose-insulin model with the unscented Kalman filtering algorithm. Methods for the personalized initialization of the time-varying model parameters to individual patients for improved estimator convergence are developed. Data from 20 three-day-long closed-loop clinical experiments involving subjects with T1DM are used to evaluate the proposed PIC estimation approach. The proposed methods are applied to the clinical data containing significant disturbances, such as unannounced meals and exercise, and the results demonstrate the accurate real-time estimation of the PIC with the root mean square error of 7.15 and 9.25 mU/L for the optimization-based fitted parameters and partial least squares regression-based testing parameters, respectively. The accurate real-time estimation of PIC will benefit the AP systems by preventing overdelivery of insulin when significant insulin is present in the bloodstream.
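A toy unscented-Kalman-filter sketch of PIC estimation is given below using the filterpy library; the two-compartment insulin chain and the linear glucose response are simplifying stand-ins for Hovorka's full model, and every parameter value is an assumption of this sketch:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

t_max, V_I, k_e = 55.0, 0.12, 0.138     # absorption time, volume, clearance
dt, u = 5.0, 0.05                        # sample time [min], basal insulin [U/min]
G_b, s_I = 160.0, 0.5                    # basal glucose, toy insulin sensitivity

def fx(x, dt):
    # State: two subcutaneous insulin compartments and plasma insulin I.
    s1, s2, i = x
    s1 += dt * (u - s1 / t_max)
    s2 += dt * (s1 / t_max - s2 / t_max)
    i  += dt * (s2 / (t_max * V_I) - k_e * i)
    return np.array([s1, s2, i])

def hx(x):
    # Toy CGM glucose observation as a linear function of plasma insulin.
    return np.array([G_b - s_I * x[2]])

points = MerweScaledSigmaPoints(n=3, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.5, 0.5, 10.0])       # personalized initialization (assumed)
ukf.P *= 10.0; ukf.Q = np.eye(3) * 0.01; ukf.R = np.array([[25.0]])

for z in [150.0, 148.0, 151.0]:          # CGM readings [mg/dL]
    ukf.predict(); ukf.update(np.array([z]))
print("Estimated PIC [mU/L]:", ukf.x[2])
```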
3D motion and strain estimation of the heart: initial clinical findings
NASA Astrophysics Data System (ADS)
Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan
2010-03-01
The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing therefore the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with the ones estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to the reference methods, the relationship between the different patient groups was similar.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newsom, R. K.; Sivaraman, C.; Shippert, T. R.
Accurate height-resolved measurements of higher-order statistical moments of vertical velocity fluctuations are crucial for improved understanding of turbulent mixing and diffusion, convective initiation, and cloud life cycles. The Atmospheric Radiation Measurement (ARM) Climate Research Facility operates coherent Doppler lidar systems at several sites around the globe. These instruments provide measurements of clear-air vertical velocity profiles in the lower troposphere with a nominal temporal resolution of 1 sec and height resolution of 30 m. The purpose of the Doppler lidar vertical velocity statistics (DLWSTATS) value-added product (VAP) is to produce height- and time-resolved estimates of vertical velocity variance, skewness, and kurtosis from these raw measurements. The VAP also produces estimates of cloud properties, including cloud-base height (CBH), cloud frequency, cloud-base vertical velocity, and cloud-base updraft fraction.
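A minimal sketch of the moment computation for one range gate follows (the window length and data are illustrative; this is not the VAP code):

```python
import numpy as np
from scipy import stats

def velocity_moments(w, win=360):
    # w: 1-s vertical velocity samples at one range gate. Compute variance,
    # skewness, and kurtosis over non-overlapping averaging windows
    # (e.g., 360 s), one set of moments per height/time bin.
    n = len(w) // win
    segs = w[: n * win].reshape(n, win)
    return (segs.var(axis=1, ddof=1),
            stats.skew(segs, axis=1),
            stats.kurtosis(segs, axis=1, fisher=False))

w = np.random.default_rng(2).normal(0.0, 0.8, 3600)   # one hour of 1-s data
var, skew, kurt = velocity_moments(w)
```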
Object recognition and localization from 3D point clouds by maximum-likelihood estimation
NASA Astrophysics Data System (ADS)
Dantanarayana, Harshana G.; Huntley, Jonathan M.
2017-08-01
We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike 'interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
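The maximum-likelihood formulation can be illustrated in a planar toy setting; the Gaussian dispersion, the nearest-point likelihood, and the optimizer below are assumptions of this sketch, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def neg_log_likelihood(pose, scene_pts, model_pts, sigma=1.0):
    # pose = [tx, ty, theta]: planar translation plus rotation. Each scene
    # point is assumed Gaussian-distributed about the nearest transformed
    # model point with dispersion sigma.
    tx, ty, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    moved = model_pts @ R.T + [tx, ty]
    d, _ = cKDTree(moved).query(scene_pts)
    return np.sum(d ** 2) / (2 * sigma ** 2)

model = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
scene = model @ np.array([[0.98, -0.17], [0.17, 0.98]]).T + [0.5, -0.2]
res = minimize(neg_log_likelihood, x0=[0, 0, 0], args=(scene, model))
print(res.x)   # recovered [tx, ty, theta]
```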
Market-driven emissions from recovery of carbon dioxide gas.
Supekar, Sarang D; Skerlos, Steven J
2014-12-16
This article uses a market-based allocation method in a consequential life cycle assessment (LCA) framework to estimate the environmental emissions created by recovering carbon dioxide (CO2). We find that 1 ton of CO2 recovered as a coproduct of chemicals manufacturing leads to additional greenhouse gas emissions of 147-210 kg CO2 eq, while consuming 160-248 kWh of electricity, 254-480 MJ of heat, and 1836-4027 kg of water. The ranges depend on the initial and final purity of the CO2, particularly because higher purity grades require additional processing steps such as distillation, as well as higher temperature and flow rate of regeneration as needed for activated carbon treatment and desiccant beds. Higher purity also reduces process efficiency due to increased yield losses from regeneration gas and distillation reflux. Mass- and revenue-based allocation methods used in attributional LCA estimate that recovering CO2 leads to 19 and 11 times the global warming impact estimated from a market-based allocation used in consequential LCA.
Cesar, Carina; Shepherd, Bryan E.; Krolewiecki, Alejandro J.; Fink, Valeria I.; Schechter, Mauro; Tuboi, Suely H.; Wolff, Marcelo; Pape, Jean W.; Leger, Paul; Padgett, Denis; Madero, Juan Sierra; Gotuzzo, Eduardo; Sued, Omar; McGowan, Catherine C.; Masys, Daniel R.; Cahn, Pedro E.
2010-01-01
Background HAART rollout in Latin America and the Caribbean has increased from approximately 210,000 in 2003 to 390,000 patients in 2007, covering 62% (51%–70%) of eligible patients, with considerable variation among countries. No multi-cohort study has examined rates of and reasons for change of initial HAART in this region. Methodology Antiretroviral-naïve patients ≥18 years who started HAART between 1996 and 2007 and had at least one follow-up visit from sites in Argentina, Brazil, Chile, Haiti, Honduras, Mexico and Peru were included. Time from HAART initiation to change (stopping or switching any antiretrovirals) was estimated using Kaplan-Meier techniques. Cox proportional hazards modeled the associations between change and demographics, initial regimen, baseline CD4 count, and clinical stage. Principal Findings Of 5026 HIV-infected patients, 35% were female, median age at HAART initiation was 37 years (interquartile range [IQR], 31–44), and median CD4 count was 105 cells/uL (IQR, 38–200). Estimated probabilities of changing within 3 months and one year of HAART initiation were 16% (95% confidence interval (CI) 15–17%) and 28% (95% CI 27–29%), respectively. Efavirenz-based regimens and no clinical AIDS at HAART initiation were associated with lower risk of change (hazard ratio (HR) = 1.7 (95% CI 1.1–2.6) and 2.1 (95% CI 1.7–2.5) comparing nevirapine-based regimens and other regimens to efavirenz, respectively; HR = 1.3 (95% CI 1.1–1.5) for clinical AIDS at HAART initiation). The primary reasons for change among HAART initiators were adverse events (14%), death (5.7%) and failure (1.3%), with specific toxicities varying among sites. After change, most patients remained in first line regimens. Conclusions Adverse events were the leading cause for changing initial HAART. Predictors for change due to any reason were AIDS at baseline and the use of a non-efavirenz containing regimen. Differences between participant sites were observed and require further investigation. PMID:20531956
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu
2018-01-01
Model parameters in the suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of the sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, the satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China, including settling velocity, resuspension rate, inflow open boundary conditions and initial conditions. After data assimilation, the model performance is significantly improved. Through several sensitivity experiments, the spatial and temporal variation tendencies of the estimated model parameters are verified to be robust and not affected by model settings. The pattern for the variations of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained using the combination of flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which are related to the grain size of the seabed sediments under different current velocities. In addition, the estimated inflow open boundary conditions reach the local maximum values near the low water slack conditions and the estimated initial conditions are negatively correlated with water depth, which is consistent with the general understanding. The relationships between the estimated parameters and the hydrodynamic fields can inform improvements to the parameterization in cohesive sediment transport models.
NASA Astrophysics Data System (ADS)
Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart
2015-02-01
This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92) to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
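A compact sketch of the STAPLE expectation-maximization iteration used to fuse the expert contours is given below (a binary, voxelwise version; the initialization constants are assumptions):

```python
import numpy as np

def staple(D, n_iter=50):
    # D: (n_raters, n_voxels) binary expert segmentations. Returns W, the
    # voxelwise probability that the true label is foreground, plus each
    # rater's (sensitivity p, specificity q).
    R, N = D.shape
    W = D.mean(axis=0)                       # initial ground-truth estimate
    p = np.full(R, 0.9); q = np.full(R, 0.9) # assumed starting performance
    prior = W.mean()
    for _ in range(n_iter):
        # E-step: posterior probability of foreground at each voxel.
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: update each rater's sensitivity and specificity.
        p = (D @ W) / (W.sum() + 1e-12)
        q = ((1 - D) @ (1 - W)) / ((1 - W).sum() + 1e-12)
    return W, p, q
```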
CH-47F Improved Cargo Helicopter (CH-47F)
2015-12-01
Confidence Level of cost estimate for current APB: 50%. [Extraction residue from the SAR cost tables: changes from the Initial Development Estimate to the Current Production Estimate in PAUC and APUC (TY $M), broken out by Econ, Qty, Sch, Eng, Est, Oth, and Spt categories; the PAUC row includes the values 10.316, -0.491, 3.003, -0.164, 2.273, and 7.378.]
Zhao, Tong; Liu, Kai; Takei, Masahiro
2016-01-01
The inertial migration of neutrally buoyant spherical particles in high-particle-concentration (αpi > 3%) suspension flow in a square microchannel was investigated by means of a multi-electrode sensing method, which overcomes the limitation that interference from large particle numbers imposes on conventional optical measurement techniques in high-concentration suspensions. Based on the measured particle concentrations near the wall and at the corner of the square microchannel, particle cross-sectional migration ratios are calculated to quantitatively estimate the degree of migration. As a result, particle migration to four stable equilibrium positions near the centre of each face of the square microchannel is found only in the cases of low initial particle concentration up to 5.0 v/v%, while the migration phenomenon becomes partial as the initial particle concentration reaches 10.0 v/v% and disappears in the cases of initial particle concentration αpi ≥ 15%. To clarify the mechanism by which particle-particle interactions influence migration, an Eulerian-Lagrangian numerical model was proposed that employs the Lennard-Jones potential as the inter-particle potential, while the inertial lift coefficient is calculated by a pre-processed semi-analytical simulation. Moreover, based on the experimental and simulation results, a dimensionless number named the migration index was proposed to evaluate the influence of the initial particle concentration on the particle migration phenomenon. A migration index of less than 0.1 is found to denote obvious particle inertial migration, while a larger migration index denotes its absence. This index is helpful for estimating the maximum initial particle concentration in the design of inertial microfluidic devices. PMID:27158288
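The inter-particle interaction term of such an Eulerian-Lagrangian model can be sketched from the standard Lennard-Jones force law; the epsilon and sigma values below are illustrative, not the paper's calibrated constants:

```python
import numpy as np

def lj_force(r_vec, epsilon=1e-12, sigma=10e-6):
    # Pairwise Lennard-Jones force on a particle from a neighbor at
    # separation vector r_vec: F = 24*eps*(2*(s/r)^12 - (s/r)^6)/r along
    # r_hat, i.e. repulsive for r < 2^(1/6)*sigma, weakly attractive beyond.
    r = np.linalg.norm(r_vec)
    mag = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
    return mag * (r_vec / r)
```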
NASA Astrophysics Data System (ADS)
Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.
2018-05-01
We synthesize the quasi-likelihood, maximum-likelihood, and quasioptimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The characteristics of the synthesized-algorithm operation efficiency are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The accuracy losses of the estimates of the radio-signal arrival time and duration because of the a priori ignorance of the amplitude and initial phase are determined.
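For a concrete picture of the joint estimate, the sketch below grid-searches arrival time and duration by maximizing an envelope statistic that eliminates the unknown amplitude and initial phase; it illustrates the principle, not the synthesized algorithms themselves:

```python
import numpy as np

def ml_time_duration(x, fs, f0, taus, durs):
    # Grid-search estimate of arrival time tau and duration T for a pulse
    # of carrier f0: maximize (I^2 + Q^2)/T, where I and Q are in-phase and
    # quadrature correlations, so amplitude and phase drop out.
    t = np.arange(len(x)) / fs
    best, stat_best = (None, None), -np.inf
    for tau in taus:
        for T in durs:
            sel = (t >= tau) & (t < tau + T)
            if not sel.any():
                continue
            I = np.sum(x[sel] * np.cos(2 * np.pi * f0 * t[sel]))
            Q = np.sum(x[sel] * np.sin(2 * np.pi * f0 * t[sel]))
            stat = (I ** 2 + Q ** 2) / np.count_nonzero(sel)
            if stat > stat_best:
                stat_best, best = stat, (tau, T)
    return best
```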
2012-04-25
Virginia Tech VAL. Because of the excellent performance of the Trimble-based systems that were tested in the past, the Trimble subsidiary Applanix was...initially contacted for available systems. The lowest-cost turnkey Trimble/Applanix system, the POS LV 210, far exceeded the performance requirements of the
Initial evaluation of floor cooling on lactating sows under severe acute heat stress
USDA-ARS?s Scientific Manuscript database
The objectives were to evaluate an acute heat stress protocol for lactating sows and evaluate preliminary estimates of water flow rates required to cool sows. Twelve multiparous sows were provided with a cooling pad built with an aluminum plate surface, high-density polyethylene base and copper pipe...
A UAV and S2A data-based estimation of the initial biomass of green algae in the South Yellow Sea.
Xu, Fuxiang; Gao, Zhiqiang; Jiang, Xiaopeng; Shang, Weitao; Ning, Jicai; Song, Debin; Ai, Jinquan
2018-03-01
Previous studies have shown that the initial biomass of the green tide was the green algae attaching to Pyropia aquaculture rafts in the Southern Yellow Sea. In this study, the green algae were identified with an unmanned aerial vehicle (UAV), and a biomass estimation model was proposed for green algae biomass in the radial sand ridge area based on a Sentinel-2A image (S2A) and UAV images. The results showed that the green algae were detected highly accurately with the normalized green-red difference index (NGRDI); approximately 1340 tons and 700 tons of green algae were attached to rafts and raft ropes, respectively, and the lower biomass might be the main cause of the smaller scale of the green tide in 2017. In addition, UAVs play an important role in monitoring raft-attaching green algae, and long-term research on its biomass would provide a scientific basis for the control and forecast of green tides in the Yellow Sea. Copyright © 2018 Elsevier Ltd. All rights reserved.
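The NGRDI used for detection is simply (G - R) / (G + R) per pixel. A minimal sketch follows; the channel ordering and threshold are assumptions of this illustration:

```python
import numpy as np

def ngrdi(rgb):
    # Normalized green-red difference index from an RGB array (H, W, 3),
    # assuming channel order R, G, B; green vegetation yields positive values.
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    return (g - r) / (g + r + 1e-9)

img = np.random.default_rng(3).integers(0, 255, (100, 100, 3))
algae_mask = ngrdi(img) > 0.1    # illustrative detection threshold
```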
NASA Astrophysics Data System (ADS)
Taroni, Paola; Paganoni, Anna Maria; Ieva, Francesca; Pifferi, Antonio; Quarto, Giovanna; Abbate, Francesca; Cassano, Enrico; Cubeddu, Rinaldo
2017-01-01
Several techniques are being investigated as a complement to screening mammography, to reduce its false-positive rate, but results are still insufficient to draw conclusions. This initial study explores time domain diffuse optical imaging as an adjunct method to classify non-invasively malignant vs benign breast lesions. We estimated differences in tissue composition (oxy- and deoxyhemoglobin, lipid, water, collagen) and absorption properties between lesion and average healthy tissue in the same breast, applying a perturbative approach to optical images collected at 7 red-near-infrared wavelengths (635-1060 nm) from subjects bearing breast lesions. The Discrete AdaBoost procedure, a machine-learning algorithm, was then exploited to classify lesions based on optically derived information (either tissue composition or absorption) and risk factors obtained from the patient's anamnesis (age, body mass index, familiarity, parity, use of oral contraceptives, and use of Tamoxifen). Collagen content, in particular, turned out to be the most important parameter for discrimination. Based on the initial results of this study, the proposed method deserves further investigation.
NASA Astrophysics Data System (ADS)
Gajek, Andrzej
2016-09-01
The article presents a diagnostic monitor for assessing brake efficiency under various road conditions in cars equipped with a brake-pressure sensor in the ESP system. Currently, vehicle brake efficiency is estimated periodically under test-stand conditions, based on brake force measurements, or in road conditions, based on braking deceleration. The presented method allows the periodic stand tests of the brakes to be complemented by the current on-board diagnostics (OBD) system. The first part of the article presents theoretical dependences between vehicle deceleration and brake pressure. The influence of vehicle mass, initial braking speed, brake temperature, aerodynamic drag, rolling resistance, engine resistance, road surface state, and road slope on the deceleration is analysed, along with the manner of determining these parameters. Results of the initial investigation are presented. The article concludes with a strategy for estimating and signalling irregular deceleration values.
NASA Astrophysics Data System (ADS)
Zhang, Z.; Lundstrom, C.; Panno, S.; Hackley, K. C.; Fouke, B. W.; Curry, B.
2009-12-01
The recurrence interval of large New Madrid Seismic Zone (NMSZ) earthquakes is uncertain because of the limited number and likely incomplete nature of the record of dated seismic events. Data on paleoseismicity in this area are necessary for refining estimates of a recurrence interval for these earthquakes and for characterizing the geophysical nature of the NMSZ. Studies of the paleoseismic history of the NMSZ have previously used liquefaction features and flood plain deposits along the Mississippi River to estimate recurrence interval with considerable uncertainties. More precise estimates of the number and ages of paleoseismic events would enhance the ability of federal, state, and local agencies to make critical preparedness decisions. Initiation of new speleothems (cave deposits) has been shown in several localities to record large earthquake events. Our ongoing work in caves of southwestern Illinois, Missouri, Indiana and Arkansas has used both U/Th age dating techniques and growth laminae counting of actively growing stalagmites to determine the age of initiation of stalagmites in caves across the Midwestern U.S. These age initiations cluster around two known events, the great NMSZ earthquakes of 1811-1812 and the Missouri earthquake of 1917, suggesting that cave deposits in this region constitute a unique record of the paleoseismic history of the NMSZ. Furthermore, the U-Th disequilibria growth laminae ages of young, white stalagmites and of older stalagmites on which they grew, plus published Holocene stalagmite ages of initiation and regrowth from Missouri caves, are all coincident with suspected NMSZ earthquakes based on liquefaction and other paleoseismic techniques. We hypothesize that these speleothems were initiated by earthquake-induced opening/closing of fracture-controlled flowpaths in the ceilings of cave passages.
NASA Astrophysics Data System (ADS)
Newman, Andrew B.; Smith, Russell J.; Conroy, Charlie; Villaume, Alexa; van Dokkum, Pieter
2017-08-01
We present new observations of the three nearest early-type galaxy (ETG) strong lenses discovered in the SINFONI Nearby Elliptical Lens Locator Survey (SNELLS). Based on their lensing masses, these ETGs were inferred to have a stellar initial mass function (IMF) consistent with that of the Milky Way, not the bottom-heavy IMF that has been reported as typical for high-σ ETGs based on lensing, dynamical, and stellar population synthesis techniques. We use these unique systems to test the consistency of IMF estimates derived from different methods. We first estimate the stellar M*/L using lensing and stellar dynamics. We then fit high-quality optical spectra of the lenses using an updated version of the stellar population synthesis models developed by Conroy & van Dokkum. When examined individually, we find good agreement among these methods for one galaxy. The other two galaxies show 2-3σ tension with lensing estimates, depending on the dark matter contribution, when considering IMFs that extend to 0.08 M⊙. Allowing a variable low-mass cutoff or a nonparametric form of the IMF reduces the tension among the IMF estimates to <2σ. There is moderate evidence for a reduced number of low-mass stars in the SNELLS spectra, but no such evidence in a composite spectrum of matched-σ ETGs drawn from the SDSS. Such variation in the form of the IMF at low stellar masses (m ≲ 0.3 M⊙), if present, could reconcile lensing/dynamical and spectroscopic IMF estimates for the SNELLS lenses and account for their lighter M*/L relative to the mean matched-σ ETG. We provide the spectra used in this study to facilitate future comparisons.
NASA Astrophysics Data System (ADS)
Clark, E.; Wood, A.; Nijssen, B.; Newman, A. J.; Mendoza, P. A.
2016-12-01
The System for Hydrometeorological Applications, Research and Prediction (SHARP), developed at the National Center for Atmospheric Research (NCAR), University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation, is a fully automated ensemble prediction system for short-term to seasonal applications. It incorporates uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 plausible temperature and precipitation time series through the Sacramento/Snow-17 model. The forcing ensemble explicitly accounts for measurement and interpolation uncertainties in the development of gridded meteorological forcing time series. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. To select the IHCs that are most consistent with the observations, we employ a particle filter (PF) that weights IHC ensemble members based on observations of streamflow and SWE. These particles are then used to initialize ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS), generating a streamflow forecast ensemble. We test this method in two basins in the Pacific Northwest that are important for water resources management: 1) the Green River upstream of Howard Hanson Dam, and 2) the South Fork Flathead River upstream of Hungry Horse Dam. The first of these is characterized by mixed snow and rain, while the second is snow-dominated. The PF-based forecasts are compared to forecasts based on 1) a single IHC (corresponding to median streamflow) paired with the full GEFS ensemble, and 2) the full IHC ensemble, without filtering, paired with the full GEFS ensemble. In addition to assessing improvements in the spread of IHCs, we perform a hindcast experiment to evaluate the utility of PF-based data assimilation on streamflow forecasts at 1- to 7-day lead times.
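A minimal sketch of the weighting-and-resampling step described above, assuming independent Gaussian observation errors for streamflow and SWE; the variable names and error scales are ours, not SHARP's.

```python
import numpy as np

def weight_and_resample(states, sim_flow, sim_swe, obs_flow, obs_swe,
                        sigma_flow, sigma_swe,
                        rng=np.random.default_rng(0)):
    """Particle-filter weighting of IHC ensemble members: each member's
    simulated streamflow and SWE are compared with observations under
    independent Gaussian likelihoods, then members are resampled in
    proportion to their weights (multinomial resampling)."""
    log_w = (-0.5 * ((sim_flow - obs_flow) / sigma_flow) ** 2
             - 0.5 * ((sim_swe - obs_swe) / sigma_swe) ** 2)
    w = np.exp(log_w - log_w.max())   # stabilise before normalising
    w /= w.sum()
    idx = rng.choice(len(states), size=len(states), p=w)
    return [states[i] for i in idx], w
```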
AGM-88E Advanced Anti-Radiation Guided Missile (AGM-88E AARGM)
2015-12-01
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one obtains a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, measurement and system noises, and inaccessible system states. Besides, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested based on the innovation process error estimated by the Kalman filter, and a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter, is utilized to achieve parameter estimation for faulty-system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
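A simplified sketch of the generic pattern behind innovation-based fault detection with covariance resetting (not the paper's exact criterion or weighting-matrix update): test the normalised Kalman innovation against a chi-square threshold, and inflate the parameter-estimate covariance when a fault is flagged so the filter re-learns the parameters.

```python
import numpy as np

def innovation_fault_check(nu, S, threshold=9.21):
    """Chi-square test on the Kalman-filter innovation nu with covariance S.
    threshold = 9.21 is the 99th percentile of chi-square with 2 dof."""
    nis = float(nu.T @ np.linalg.inv(S) @ nu)   # normalised innovation squared
    return nis > threshold

def reset_covariance(P, scale=100.0):
    """Covariance resetting: inflating the parameter-estimate covariance
    lets the estimator adapt quickly after a detected fault."""
    return P * scale
```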
Multitarget mixture reduction algorithm with incorporated target existence recursions
NASA Astrophysics Data System (ADS)
Ristic, Branko; Arulampalam, Sanjeev
2000-07-01
The paper derives a deferred logic data association algorithm based on the mixture reduction approach originally due to Salmond [SPIE vol. 1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than its corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier-based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear time-varying differential system models.
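The key property, insensitivity to unknown initial conditions, follows from integration by parts with modulating functions that vanish at both ends of the record. The sketch below demonstrates it for a first-order system y' + a*y = b*u, using simple sin^2 windows rather than the paper's Fourier family; the system, noise level, and window choice are ours.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# True system y' + a*y = b*u with unknown (a, b); the initial condition
# y(0) is deliberately nonzero and never used by the estimator.
a_true, b_true, T = 2.0, 3.0, 10.0
t = np.linspace(0.0, T, 2001)
u = np.sin(t)
sol = solve_ivp(lambda s, y: -a_true * y + b_true * np.sin(s),
                (0.0, T), [1.0], t_eval=t, rtol=1e-9, atol=1e-12)
y = sol.y[0] + 0.005 * np.random.randn(t.size)   # noisy output samples

# phi_k vanishes at 0 and T, so integrating phi_k * (y' + a*y - b*u) = 0
# by parts eliminates y' and the boundary values:
#   a * int(phi_k * y) - b * int(phi_k * u) = int(phi_k' * y)
rows, rhs = [], []
for k in range(1, 9):
    phi = np.sin(np.pi * k * t / T) ** 2
    dphi = (np.pi * k / T) * np.sin(2 * np.pi * k * t / T)
    rows.append([trapezoid(phi * y, t), -trapezoid(phi * u, t)])
    rhs.append(trapezoid(dphi * y, t))
a_hat, b_hat = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
print(a_hat, b_hat)   # close to (2.0, 3.0)
```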
Hyper-X Post-Flight Trajectory Reconstruction
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Tartabini, Paul V.; Blanchard, RobertC.; Kirsch, Michael; Toniolo, Matthew D.
2004-01-01
This paper discusses the formulation and development of a trajectory reconstruction tool for the NASA X-43A/Hyper-X high speed research vehicle, and its implementation for the reconstruction and analysis of flight test data. Extended Kalman filtering techniques are employed to reconstruct the trajectory of the vehicle, based upon numerical integration of inertial measurement data along with redundant measurements of the vehicle state. The equations of motion are formulated in order to include the effects of several systematic error sources, whose values may also be estimated by the filtering routines. Additionally, smoothing algorithms have been implemented in which the final value of the state (or an augmented state that includes other systematic error parameters to be estimated) and covariance are propagated back to the initial time to generate the best-estimated trajectory, based upon all available data. The methods are applied to the problem of reconstructing the trajectory of the Hyper-X vehicle from flight data.
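A toy illustration of the filter-then-smooth structure (forward Kalman filter, backward propagation to the initial time) on a 1D constant-velocity model; it mirrors the pattern, not the vehicle's actual equations of motion or noise levels.

```python
import numpy as np

def kf_rts(zs, dt=1.0, q=0.01, r=1.0):
    """Forward Kalman filter plus Rauch-Tung-Striebel backward smoother
    for state [position, velocity] observed through noisy positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.zeros(2), np.eye(2) * 10.0
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    for z in zs:                                   # forward pass
        xp, Pp = F @ x, F @ P @ F.T + Q
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + K @ (np.atleast_1d(z) - H @ xp)
        P = (np.eye(2) - K @ H) @ Pp
        xs_p.append(xp); Ps_p.append(Pp); xs_f.append(x); Ps_f.append(P)
    xs_s, Ps_s = [xs_f[-1]], [Ps_f[-1]]
    for k in range(len(zs) - 2, -1, -1):           # backward smoothing pass
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s.insert(0, xs_f[k] + C @ (xs_s[0] - xs_p[k + 1]))
        Ps_s.insert(0, Ps_f[k] + C @ (Ps_s[0] - Ps_p[k + 1]) @ C.T)
    return np.array(xs_s)

print(kf_rts(np.array([0.1, 1.2, 1.9, 3.1, 4.0]))[:, 0])  # smoothed positions
```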
A new data assimilation engine for physics-based thermospheric density models
NASA Astrophysics Data System (ADS)
Sutton, E. K.; Henney, C. J.; Hock-Mysliwiec, R.
2017-12-01
The successful assimilation of data into physics-based coupled Ionosphere-Thermosphere models requires rethinking the filtering techniques currently employed in fields such as tropospheric weather modeling. In the realm of Ionospheric-Thermospheric modeling, the estimation of system drivers is a critical component of any reliable data assimilation technique. How to best estimate and apply these drivers, however, remains an open question and active area of research. The recently developed method of Iterative Re-Initialization, Driver Estimation and Assimilation (IRIDEA) accounts for the driver/response time-delay characteristics of the Ionosphere-Thermosphere system relative to satellite accelerometer observations. Results from two near year-long simulations are shown: (1) from a period of elevated solar and geomagnetic activity during 2003, and (2) from a solar minimum period during 2007. This talk will highlight the challenges and successes of implementing a technique suited for both solar min and max, as well as expectations for improving neutral density forecasts.
Adaptive AOA-aided TOA self-positioning for mobile wireless sensor networks.
Wen, Chih-Yu; Chan, Fu-Kai
2010-01-01
Location-awareness is crucial and becoming increasingly important to many applications in wireless sensor networks. This paper presents a network-based positioning system and outlines recent work in which we have developed an efficient, principled approach to localize a mobile sensor using time of arrival (TOA) and angle of arrival (AOA) information employing multiple seeds in the line-of-sight scenario. By receiving the periodic broadcasts from the seeds, the mobile target sensors can obtain adequate observations and localize themselves automatically. The proposed positioning scheme performs location estimation in three phases: (I) AOA-aided TOA measurement, (II) geometrical positioning with a particle filter, and (III) adaptive fuzzy control. Based on the distance measurements and the initial position estimate, an adaptive fuzzy control scheme is applied to solve the localization adjustment problem. The simulations show that the proposed approach provides adaptive flexibility and robust improvement in position estimation.
An expert system for diagnostics and estimation of steam turbine components condition
NASA Astrophysics Data System (ADS)
Murmansky, B. E.; Aronson, K. E.; Brodov, Yu. M.
2017-11-01
The report describes a probabilistic expert system for diagnostics and state estimation of the components of steam turbine technological subsystems. The expert system is based on Bayes' theorem and permits troubleshooting of equipment components using expert experience when there is a lack of baseline information on the indicators of turbine operation. Within a unified approach the expert system solves the problems of diagnosing the steam flow path of the turbine, bearings, thermal expansion system, regulatory system, condensing unit, and the systems of regenerative feed-water and hot-water heating. The knowledge base of the expert system for turbine unit rotors and bearings contains a description of 34 defects and 104 related diagnostic features that cause a change in its vibration state. The knowledge base for the condensing unit contains 12 hypotheses and 15 pieces of evidence (indications); procedures are also designated for estimating 20 state parameters. Similar knowledge bases containing the diagnostic features and fault hypotheses are formulated for the other technological subsystems of the turbine unit. With the necessary initial information available, a number of problems can be solved within the expert system for the various technological subsystems of a steam turbine unit: for the steam flow path, correlation and regression analysis of the multifactor relationship between variations in vibration parameters and the regime parameters; for the thermal expansion system, evaluation of the force acting on the longitudinal keys depending on the temperature state of the turbine cylinder; for the condensing unit, evaluation of the separate effects of heat-exchange surface contamination and of the presence of air in the condenser steam space on condenser thermal efficiency, as well as estimation of the timing of condenser cleaning and tube system replacement, and so forth. With a lack of initial information, the expert system makes it possible to formulate a diagnosis by calculating the probabilities of fault hypotheses, given the degree of the expert's confidence in the estimates of turbine component operation parameters.
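A minimal sketch of how a Bayes-theorem fault ranking of this kind can work, assuming conditionally independent symptoms (a naive-Bayes simplification); the fault names, priors, and likelihoods below are illustrative, not from the turbine knowledge base.

```python
def bayes_diagnosis(prior, likelihoods, observed):
    """Rank fault hypotheses by posterior probability given observed
    symptoms: P(h | s1..sn) proportional to P(h) * prod P(si | h)."""
    post = {}
    for h, p in prior.items():
        for s in observed:
            p *= likelihoods[h].get(s, 0.01)  # small default for unmodelled symptoms
        post[h] = p
    total = sum(post.values())
    return {h: p / total for h, p in post.items()}

prior = {"rotor imbalance": 0.05, "bearing wear": 0.03, "misalignment": 0.02}
likelihoods = {
    "rotor imbalance": {"1x vibration": 0.9, "phase shift": 0.6},
    "bearing wear":    {"1x vibration": 0.2, "high-freq noise": 0.8},
    "misalignment":    {"1x vibration": 0.5, "2x vibration": 0.7},
}
print(bayes_diagnosis(prior, likelihoods, ["1x vibration", "phase shift"]))
```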
NASA Astrophysics Data System (ADS)
González-Carrasco, J. F.; Gonzalez, G.; Aránguiz, R.; Catalan, P. A.; Cienfuegos, R.; Urrutia, A.; Shrivastava, M. N.; Yagi, Y.; Moreno, M.
2015-12-01
Tsunami inundation maps are a powerful tool to design evacuation plans for coastal communities, and can additionally be used as a guide for territorial planning and assessment of structural damage in port facilities and critical infrastructure (Borrero et al., 2003; Barberopoulou et al., 2011; Power et al., 2012; Mueller et al., 2015). The accuracy of inundation estimation is highly correlated with the tsunami initial conditions, e.g. seafloor vertical deformation, displaced water volume and potential energy (Bolshakova et al., 2011). Usually, the initial conditions are estimated using homogeneous rupture models based on a historical worst-case scenario. However, tsunamigenic events that occurred along the central Chilean continental margin showed a heterogeneous slip distribution of the source, with patches of high slip correlated with fully coupled interseismic zones (Moreno et al., 2012). The main objective of this work is to evaluate the predictive capacity of interseismic coupling models based on geodetic data, comparing them with a homogeneous fault slip model constructed using scaling laws (Blaser et al., 2010), to estimate inundation and runup in coastal areas. To test our hypothesis we select the Maule seismic gap, where the last large tsunamigenic earthquake in the Chilean subduction zone occurred, using the interseismic coupling (ISC) models proposed by Moreno et al., 2011 and Métois et al., 2013. We generate a slip deficit distribution to build a tsunami source supported by geological information such as slab depth (Hayes et al., 2012), strike, rake and dip (Dziewonski et al., 1981; Ekström et al., 2012) to model tsunami generation, propagation and shoreline impact using Neowave 2D (Yamazaki et al., 2009). We compare the Mw 8.8 Maule tsunami scenario based on the coseismic slip distribution proposed by Moreno et al., 2012 with homogeneous and heterogeneous models to assess the accuracy of our results against sea level time series and regional runup data (Figure 1). The estimation of the tsunami source using ISC models can be useful to improve the analysis of tsunami threat, based on a more realistic slip distribution.
Guidance, steering, load relief and control of an asymmetric launch vehicle. M.S. Thesis - MIT
NASA Technical Reports Server (NTRS)
Boelitz, Frederick W.
1989-01-01
A new guidance, steering, and control concept is described and evaluated for the Third Phase of an asymmetrical configuration of the Advanced Launch System (ALS). The study also includes the consideration of trajectory shaping issues and trajectory design as well as the development of angular rate, angular acceleration, angle of attack, and dynamic pressure estimators. The Third Phase guidance, steering and control system is based on controlling the acceleration-direction of the vehicle after an initial launch maneuver. Unlike traditional concepts, the alignment of the estimated and commanded acceleration-directions is unimpaired by an add-on load relief. Instead, the acceleration-direction steering-control system features a control override that limits the product of estimated dynamic pressure and estimated angle of attack. When this product is not being limited, control is based exclusively on the commanded acceleration-direction without load relief. During limiting, control is based on nulling the error between the limited angle of attack and the estimated angle of attack. This limiting feature provides full freedom to the acceleration-direction steering and control to shape the trajectory within the limit, and also gives full priority to the limiting of angle of attack when necessary. The flight software concepts were analyzed on the basis of their effects on pitch plane motion.
Very High Cycle Fatigue Behavior of a Directionally Solidified Ni-Base Superalloy DZ4
Nie, Baohua; Zhao, Zihua; Liu, Shu; Chen, Dongchu; Ouyang, Yongzhong; Hu, Zhudong; Fan, Touwen; Sun, Haibo
2018-01-01
The effect of casting pores on the very high cycle fatigue (VHCF) behavior of a directionally solidified (DS) Ni-base superalloy DZ4 is investigated. Casting and hot isostatic pressing (HIP) specimens were subjected to very high cycle fatigue loading in an ambient atmosphere. The results demonstrated that both the casting and HIP specimens exhibited continuously descending S-N curves. Due to the elimination of the casting pores, the HIP samples had better fatigue properties than the casting samples. Subsurface cracks initiated from casting pores in the casting specimens at low stress amplitudes, whereas fatigue cracks initiated from crystallographic facet decohesion in the HIP specimens. When the casting pores are considered as initial cracks, there exists a critical stress intensity threshold ranging from 1.1 to 1.3 MPa√m, below which fatigue cracks may not initiate from the casting pores. Furthermore, the effect of the casting pores on the fatigue limit is estimated based on a modified El Haddad model, which is in good agreement with the experimental results. Fatigue life for both the casting and HIP specimens is well predicted using the Fatigue Indicator Parameter (FIP) model. PMID:29320429
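For reference, the classical (unmodified) El Haddad relation underlying such defect-size corrections is sketched below; the smooth-specimen fatigue limit, geometry factor Y, and threshold value are placeholder assumptions, not DZ4 data.

```python
import numpy as np

def el_haddad_fatigue_limit(a, delta_sigma_w0, delta_k_th, Y=0.65):
    """Fatigue limit of a specimen containing a pore treated as an initial
    crack of size a, after El Haddad:
        delta_sigma_w(a) = delta_sigma_w0 * sqrt(a0 / (a + a0)),
        a0 = (1/pi) * (delta_K_th / (Y * delta_sigma_w0))**2
    Units: a in metres, stresses in MPa, delta_k_th in MPa*sqrt(m)."""
    a0 = (delta_k_th / (Y * delta_sigma_w0)) ** 2 / np.pi
    return delta_sigma_w0 * np.sqrt(a0 / (a + a0))

# Fatigue limit vs pore size for an assumed 400 MPa smooth-specimen limit
# and a 1.2 MPa*sqrt(m) threshold (mid-range of the cited 1.1-1.3)
for a in [10e-6, 50e-6, 200e-6]:
    print(a, el_haddad_fatigue_limit(a, 400.0, 1.2))
```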
Blow-up for a three dimensional Keller-Segel model with consumption of chemoattractant
NASA Astrophysics Data System (ADS)
Jiang, Jie; Wu, Hao; Zheng, Songmu
2018-04-01
We investigate blow-up properties for the initial-boundary value problem of a Keller-Segel model with consumption of chemoattractant when the spatial dimension is three. Through a kinetic reformulation of the Keller-Segel system, we first derive some higher-order estimates and obtain certain blow-up criteria for the local classical solutions. These blow-up criteria generalize the results in [4,5] from the whole space R^3 to the case of a bounded smooth domain Ω ⊂ R^3. A lower global blow-up estimate on ‖n‖_{L^∞(Ω)} is also obtained based on our higher-order estimates. Moreover, we prove local non-degeneracy for blow-up points.
Percutaneous Trigger Finger Release: A Cost-effectiveness Analysis.
Gancarczyk, Stephanie M; Jang, Eugene S; Swart, Eric P; Makhni, Eric C; Kadiyala, Rajendra Kumar
2016-07-01
Percutaneous trigger finger releases (TFRs) performed in the office setting are becoming more prevalent. This study compares the costs of in-hospital open TFRs, open TFRs performed in ambulatory surgical centers (ASCs), and in-office percutaneous releases. An expected-value decision-analysis model was constructed from the payer perspective to estimate total costs of the three competing treatment strategies for TFR. Model parameters were estimated based on the best available literature and were tested using multiway sensitivity analysis. Percutaneous TFR performed in the office and then, if needed, revised open TFR performed in the ASC, was the most cost-effective strategy, with an attributed cost of $603. The cost associated with an initial open TFR performed in the ASC was approximately 7% higher. Initial open TFR performed in the hospital was the least cost-effective, with an attributed cost nearly twice that of primary percutaneous TFR. An initial attempt at percutaneous TFR is more cost-effective than an open TFR. Currently, only about 5% of TFRs are performed in the office; therefore, a substantial opportunity exists for cost savings in the future. Decision model level II.
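To make the expected-value decision analysis concrete, here is a minimal two-branch sketch: every patient pays the initial cost, and failures additionally pay for a revision. All probabilities and dollar figures below are illustrative placeholders, not the study's model parameters.

```python
def expected_cost(initial_cost, failure_prob, revision_cost):
    """Expected total cost of a treatment strategy in a simple
    two-branch decision tree."""
    return initial_cost + failure_prob * revision_cost

# Hypothetical comparison: cheap office percutaneous release with some
# failures needing an open revision in the ASC, versus open release upfront.
percutaneous_then_open = expected_cost(300.0, 0.10, 900.0)
open_in_asc = expected_cost(650.0, 0.02, 900.0)
print(percutaneous_then_open, open_in_asc)
```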
NASA Astrophysics Data System (ADS)
Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining
2017-12-01
We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some errors with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. We then use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are regarded as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets randomly scattered within a confined, shallow subsurface volume. A field test was carried out to verify the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation
NASA Astrophysics Data System (ADS)
Fard, Mani B.; Bayazit, Ulug
2014-01-01
In this work, we propose a feasible 3D video generation method to enable high quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a 2-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting. This is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation, the extracted foreground is placed in front of the background image that was captured at the initial position. Thus the full view constructed at the initial position, combined with the view of the secondary (current) position, forms the complete binocular pairs during real-time video shooting. The subjective evaluation results demonstrate a competent depth perception quality for the proposed system.
Increased suicide risk and clinical correlates of suicide among patients with Parkinson's disease.
Lee, Taeyeop; Lee, Hochang Benjamin; Ahn, Myung Hee; Kim, Juyeon; Kim, Mi Sun; Chung, Sun Ju; Hong, Jin Pyo
2016-11-01
Parkinson's disease (PD) is a debilitating, neurodegenerative condition frequently complicated by psychiatric symptoms. Patients with PD may be at higher risk for suicide than the general population, but previous estimates are limited and conflicting. The aim of this study is to estimate the suicide rate based on the clinical case registry and to identify risk factors for suicide among patients diagnosed with PD. The target sample consisted of 4362 patients diagnosed with PD who were evaluated at a general hospital in Seoul, South Korea, from 1996 to 2012. The standardized mortality ratio for suicide among PD patients was estimated. In order to identify the clinical correlates of suicide, a case-control study was conducted based on retrospective chart review. The 29 suicide cases (age: 62.3 ± 13.7 years; females: 34.5%) were matched with 116 non-suicide controls (age: 63.5 ± 9.2 years; females 56.9%) by the year of initial PD evaluation. The SMR for suicide in PD patients was 1.99 (95% CI 1.33-2.85). Mean duration from time of initial diagnosis to suicide among cases was 6.1 ± 3.5 years. Case-control analysis revealed that male sex, initial extremity of motor symptom onset, history of depressive disorder, delusion, any psychiatric disorder, and higher L-dopa dosage were significantly associated with suicide among PD patients. Other PD-related variables such as UPDRS motor score were not significantly associated with death by suicide. Suicide risk in PD patients is approximately 2 times higher than that in the general population. Psychiatric disorders and L-dopa medication need further attention with respect to suicide. Copyright © 2016 Elsevier Ltd. All rights reserved.
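For readers unfamiliar with the standardized mortality ratio, a minimal sketch with an exact Poisson confidence interval follows; the expected-death count here is back-calculated from the reported SMR of 1.99, not taken from the paper.

```python
from scipy import stats

def smr(observed, expected):
    """Standardized mortality ratio with an exact Poisson 95% CI:
    SMR = observed / expected; the CI treats the observed count as
    Poisson and uses the chi-square link to its distribution."""
    lo = stats.chi2.ppf(0.025, 2 * observed) / (2 * expected)
    hi = stats.chi2.ppf(0.975, 2 * (observed + 1)) / (2 * expected)
    return observed / expected, (lo, hi)

# 29 observed suicides, roughly 14.6 expected -> SMR close to 1.99
print(smr(29, 14.6))
```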
Caro-Vega, Yanink; del Rio, Carlos; Lima, Viviane Dias; Lopez-Cervantes, Malaquias; Crabtree-Ramirez, Brenda; Bautista-Arredondo, Sergio; Colchero, M Arantxa; Sierra-Madero, Juan
2015-01-01
To estimate the impact of late ART initiation on HIV transmission among men who have sex with men (MSM) in Mexico. An HIV transmission model was built to estimate the number of infections transmitted by HIV-infected MSM (MSM-HIV+) in the short and long term. Sexual risk behavior data were estimated from a nationwide study of MSM. CD4+ counts at ART initiation from a representative national cohort were used to estimate time since infection. The numbers of MSM-HIV+ on treatment and virally suppressed were estimated from surveillance and government reports. A status quo scenario (SQ) and scenarios of early ART initiation and increased HIV testing were modeled. We estimated 14239 new HIV infections per year from MSM-HIV+ in Mexico. In SQ, MSM take an average of 7.4 years since infection to initiate treatment, with a median CD4+ count of 148 cells/mm3 (25th-75th percentiles 52-266). In SQ, 68% of MSM-HIV+ are not aware of their HIV status and transmit 78% of new infections. Increasing the CD4+ count at ART initiation to 350 cells/mm3 shortened the time since infection to 2.8 years. Increasing HIV testing to cover 80% of undiagnosed MSM resulted in a reduction of 70% in new infections over 20 years. With ART initiated at 500 cells/mm3 and increased HIV testing, the reduction would be 75% over 20 years. A substantial number of new HIV infections in Mexico are transmitted by undiagnosed and untreated MSM-HIV+. An aggressive increase in HIV testing coverage and initiating ART at a CD4 count of 500 cells/mm3 in this population would significantly benefit individuals and decrease the number of new HIV infections in Mexico.
Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation
NASA Astrophysics Data System (ADS)
Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.
2014-12-01
Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.
Cook, Troy A.
2013-01-01
Estimated ultimate recoveries (EURs) are a key component in determining productivity of wells in continuous-type oil and gas reservoirs. EURs form the foundation of a well-performance-based assessment methodology initially developed by the U.S. Geological Survey (USGS; Schmoker, 1999). This methodology was formally reviewed by the American Association of Petroleum Geologists Committee on Resource Evaluation (Curtis and others, 2001). The EUR estimation methodology described in this paper was used in the 2013 USGS assessment of continuous oil resources in the Bakken and Three Forks Formations and incorporates uncertainties that would not normally be included in a basic decline-curve calculation. These uncertainties relate to (1) the mean time before failure of the entire well-production system (excluding economics), (2) the uncertainty of when (and if) a stable hyperbolic-decline profile is revealed in the production data, (3) the particular formation involved, (4) relations between initial production rates and a stable hyperbolic-decline profile, and (5) the final behavior of the decline extrapolation as production becomes more dependent on matrix storage.
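The USGS methodology layers the uncertainties listed above on top of a basic decline-curve calculation; the sketch below shows only that underlying deterministic piece, an Arps hyperbolic decline integrated to an assumed rate cutoff. All parameter values are invented, not Bakken assessment inputs.

```python
import numpy as np

def arps_rate(t, qi, Di, b):
    """Arps hyperbolic decline: q(t) = qi / (1 + b*Di*t)**(1/b)."""
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def eur(qi, Di, b, t_end, q_limit, n=100000):
    """EUR sketch: integrate the decline curve until an economic rate
    limit or a well-life cap, whichever comes first (trapezoid rule)."""
    t = np.linspace(0.0, t_end, n)
    q = arps_rate(t, qi, Di, b)
    q = np.where(q > q_limit, q, 0.0)     # stop counting below the limit
    dt = t[1] - t[0]
    return float(np.sum(0.5 * (q[:-1] + q[1:])) * dt)

# e.g. 300 bbl/day initial rate, 1.5/yr initial decline, b = 1.2,
# 30-year life, 5 bbl/day economic limit (all rates in bbl/yr)
print(eur(300 * 365.25, 1.5, 1.2, 30.0, 5 * 365.25))   # barrels
```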
TRMM- and GPM-based precipitation analysis and modelling in the Tropical Andes
NASA Astrophysics Data System (ADS)
Manz, Bastian; Buytaert, Wouter; Zulkafli, Zed; Onof, Christian
2016-04-01
Despite widespread applications of satellite-based precipitation products (SPPs) throughout the TRMM era, the scarcity of ground-based in-situ data (high-density gauge networks, rainfall radar) in many hydro-meteorologically important regions, such as tropical mountain environments, has limited our ability to evaluate both SPPs and individual satellite-based sensors as well as accurately model or merge rainfall at high spatial resolutions, particularly with respect to extremes. This has restricted both the understanding of sensor behaviour and performance controls in such regions as well as the accuracy of precipitation estimates and respective hydrological applications ranging from water resources management to early warning systems. Here we report on our recent research into precipitation analysis and modelling using various TRMM and GPM products (2A25, 3B42 and IMERG) in the tropical Andes. In an initial study, 78 high-frequency (10-min) recording gauges in Colombia and Ecuador are used to generate a ground-based validation dataset for evaluation of instantaneous TRMM Precipitation Radar (TPR) overpasses from the 2A25 product. Detection ability, precipitation time-series, empirical distributions and statistical moments are evaluated with respect to regional climatological differences, seasonal behaviour, rainfall types and detection thresholds. Results confirmed previous findings from extra-tropical regions of over-estimation of low rainfall intensities and under-estimation of the highest 10% of rainfall intensities by the TPR. However, in spite of evident regionalised performance differences as a function of local climatological regimes, the TPR provides an accurate estimate of climatological annual and seasonal rainfall means. On this basis, high-resolution (5 km) climatological maps are derived for the entire tropical Andes. The second objective of this work is to improve the local precipitation estimation accuracy and representation of spatial patterns of extreme rainfall probabilities over the region. For this purpose, an ensemble of high-resolution rainfall fields is generated by stochastic simulation using space-time averaged, coarse-scale (daily, 0.25°) satellite-based rainfall inputs (TRMM 3B42/-RT) and the high-resolution climatological information derived from the TPR as spatial disaggregation proxies. For evaluation and merging, gridded ground-based rainfall fields are generated from gauge data using sequential simulation. Satellite and ground-based ensembles are subsequently merged using an inverse error weighting scheme. The model was tested over a case study in the Colombian Andes with optional coarse-scale bias correction prior to disaggregation and merging. The resulting outputs were assessed in the context of Generalized Extreme Value theory and showed improved estimation of extreme rainfall probabilities compared to the original TMPA inputs. Initial findings using GPM-IMERG inputs are also presented.
Creating targeted initial populations for genetic product searches in heterogeneous markets
NASA Astrophysics Data System (ADS)
Foster, Garrett; Turner, Callaway; Ferguson, Scott; Donndelinger, Joseph
2014-12-01
Genetic searches often use randomly generated initial populations to maximize diversity and enable a thorough sampling of the design space. While many of these initial configurations perform poorly, the trade-off between population diversity and solution quality is typically acceptable for small-scale problems. Navigating complex design spaces, however, often requires computationally intelligent approaches that improve solution quality. This article draws on research advances in market-based product design and heuristic optimization to strategically construct 'targeted' initial populations. Targeted initial designs are created using respondent-level part-worths estimated from discrete choice models. These designs are then integrated into a traditional genetic search. Two case study problems of differing complexity are presented to illustrate the benefits of this approach. In both problems, targeted populations lead to computational savings and product configurations with improved market share of preferences. Future research efforts to tailor this approach and extend it towards multiple objectives are also discussed.
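A minimal sketch of one way to seed such a targeted initial population: score candidate product configurations by their share of preference under respondent-level part-worths (a logit choice rule), then sample the population in proportion to that score. The attribute coding and sizes are made up, and the real method's construction is richer than this.

```python
import numpy as np

rng = np.random.default_rng(0)

def targeted_population(partworths, candidates, pop_size):
    """Sample an initial GA population, biased toward candidates with
    high mean share of preference across respondents."""
    utilities = partworths @ candidates.T           # respondents x candidates
    shares = np.exp(utilities)
    shares /= shares.sum(axis=1, keepdims=True)     # logit shares per respondent
    score = shares.mean(axis=0)                     # mean share per candidate
    idx = rng.choice(len(candidates), size=pop_size, replace=False,
                     p=score / score.sum())
    return candidates[idx]

# 200 respondents, 50 random binary-coded candidate designs, population of 10
partworths = rng.normal(size=(200, 8))
candidates = rng.integers(0, 2, size=(50, 8)).astype(float)
print(targeted_population(partworths, candidates, 10))
```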
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
Improved battery parameter estimation method considering operating scenarios for HEV/EV applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jufeng; Xia, Bing; Shang, Yunlong
This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
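To see why the RC network's initial voltage matters, here is a generic first-order Thevenin battery model whose response includes that term explicitly; neglecting u1_0 is the kind of "unsaturated" error the paper analyzes. All parameter values in the example are made up.

```python
import numpy as np

def thevenin_voltage(t, i_load, ocv, r0, r1, c1, u1_0=0.0):
    """Terminal voltage of a first-order Thevenin model under a constant
    discharge current i_load, including the RC initial voltage u1_0:
      v(t) = OCV - i*R0 - [i*R1*(1 - exp(-t/tau)) + u1_0*exp(-t/tau)],
    with tau = R1*C1."""
    tau = r1 * c1
    u1 = i_load * r1 * (1.0 - np.exp(-t / tau)) + u1_0 * np.exp(-t / tau)
    return ocv - i_load * r0 - u1

t = np.linspace(0.0, 600.0, 7)
print(thevenin_voltage(t, i_load=2.0, ocv=3.7, r0=0.05, r1=0.03,
                       c1=2000.0, u1_0=0.02))
```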
Reef fish communities are spooked by scuba surveys and may take hours to recover
Cheal, Alistair J.; Miller, Ian R.
2018-01-01
Ecological monitoring programs typically aim to detect changes in the abundance of species of conservation concern or which reflect system status. Coral reef fish assemblages are functionally important for reef health, and these are most commonly monitored using underwater visual surveys (UVS) by divers. In addition to estimating numbers, most programs also collect estimates of fish lengths to allow calculation of biomass, an important determinant of a fish's functional impact. However, diver surveys may be biased because fishes may either avoid or be attracted to divers, and the process of estimating fish length could result in fish counts that differ from those made without length estimations. Here we investigated whether (1) general diver disturbance and (2) the additional task of estimating fish lengths affected estimates of reef fish abundance and species richness during UVS, and for how long. Initial estimates of abundance and species richness were significantly higher than those made on the same section of reef after diver disturbance. However, there was no evidence that estimating fish lengths at the same time as abundance resulted in counts different from those made when estimating abundance alone. Similarly, there was little consistent bias among observers. Estimates of the time for fish taxa that avoided divers after initial contact to return to initial levels of abundance varied from three to 17 h, with one group of exploited fishes showing initial attraction to divers that declined over the study period. Our finding that many reef fishes may disperse for such long periods after initial contact with divers suggests that monitoring programs should take great care to minimise diver disturbance prior to surveys. PMID:29844998
Microfluidics for simultaneous quantification of platelet adhesion and blood viscosity
Yeom, Eunseop; Park, Jun Hong; Kang, Yang Jun; Lee, Sang Joon
2016-01-01
Platelet functions, including adhesion, activation, and aggregation, have an influence on thrombosis and the progression of atherosclerosis. In the present study, a new microfluidic-based method is proposed to estimate platelet adhesion and blood viscosity simultaneously. The blood sample flows into an H-shaped microfluidic device driven by a peristaltic pump. Since platelet aggregation may be initiated by compression by the rotors inside the peristaltic pump, platelet aggregates may adhere to the H-shaped channel. Through correlation mapping, which visualizes decorrelation of the streaming blood flow, the area of adhered platelets (APlatelet) can be estimated without labeling platelets. The platelet function is estimated by determining the representative index IA·T based on APlatelet and contact time. Blood viscosity is measured by monitoring the flow conditions in one side channel of the H-shaped device. Based on the relation between the interfacial width (W) and the pressure ratio of the sample flow to the reference flow, the blood sample viscosity (μ) can be estimated by measuring W. Biophysical parameters (IA·T, μ) are compared for normal and diabetic rats using an ex vivo extracorporeal model. This microfluidic-based method can be used for evaluating variations in the platelet adhesion and blood viscosity of animal models with cardiovascular diseases under ex vivo conditions. PMID:27118101
Cota-Ruiz, Juan; Rosiles, Jose-Gerardo; Sifuentes, Ernesto; Rivas-Perea, Pablo
2012-01-01
This research presents a distributed and formula-based bilateration algorithm that can be used to provide an initial set of locations. In this scheme each node uses distance estimates to anchors to solve a set of circle-circle intersection (CCI) problems through a purely geometric formulation. The resulting CCIs are processed to pick those that cluster together, and their average is taken to produce an initial node location. The algorithm is compared in terms of accuracy and computational complexity with a least-squares localization algorithm based on the Levenberg-Marquardt methodology. Results on accuracy versus computational performance show that the bilateration algorithm is competitive with well-known optimized localization algorithms.
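The circle-circle intersection step has a standard closed-form solution, sketched below for the 2D case; the anchor positions and ranges in the example are invented.

```python
import numpy as np

def circle_circle_intersection(c0, r0, c1, r1):
    """Intersection points of two circles (anchor position, estimated
    range). Returns two points (identical when the circles are tangent)
    or None when they do not intersect -- the degenerate CCI case."""
    c0, c1 = np.asarray(c0, float), np.asarray(c1, float)
    d = np.linalg.norm(c1 - c0)
    if d > r0 + r1 or d < abs(r0 - r1) or d == 0.0:
        return None
    a = (r0**2 - r1**2 + d**2) / (2.0 * d)     # along-axis offset from c0
    h = np.sqrt(max(r0**2 - a**2, 0.0))        # off-axis offset
    mid = c0 + a * (c1 - c0) / d
    perp = np.array([-(c1 - c0)[1], (c1 - c0)[0]]) / d
    return mid + h * perp, mid - h * perp

# Two anchors at (0,0) and (4,0) with noisy range estimates to the node
print(circle_circle_intersection((0.0, 0.0), 3.0, (4.0, 0.0), 2.2))
```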
Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod
Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.
2008-01-01
To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate that units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and practically implemented than the others. © 2008 The Society of Population Ecology and Springer.
Attenuating Stereo Pixel-Locking via Affine Window Adaptation
NASA Technical Reports Server (NTRS)
Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.
2006-01-01
For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially-peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it has the ability to correct much larger initial disparity errors than previous approaches and is more general as it applies not only to the ground plane.
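For reference, the baseline parabola refinement whose bias produces pixel-locking is sketched below (this is the standard method being criticized, not the paper's proposed one); the toy cost values are invented.

```python
import numpy as np

def subpixel_disparity(cost, d_int):
    """Parabola refinement around the integer-disparity minimum of a
    matching-cost curve:
      d_sub = d - 0.5*(C[d+1] - C[d-1]) / (C[d+1] - 2*C[d] + C[d-1])"""
    c_m, c_0, c_p = cost[d_int - 1], cost[d_int], cost[d_int + 1]
    denom = c_p - 2.0 * c_0 + c_m
    if denom <= 0.0:        # not a well-formed minimum; keep integer value
        return float(d_int)
    return d_int - 0.5 * (c_p - c_m) / denom

cost = np.array([9.0, 4.0, 1.2, 2.5, 7.0])   # toy SAD/SSD costs per disparity
print(subpixel_disparity(cost, int(np.argmin(cost))))
```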
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhang, X.; Xiao, W.
2018-04-01
As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. Firstly, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. A sifting algorithm is used to filter the initial value of the iteration, to ensure that the initial error is as small as possible. The experimental results show that this method does not need additional equipment and devices, can continuously update the calibration parameters, and performs better than the two-step estimation method, compensating the geomagnetic sensor error well.
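A common nine-parameter magnetometer error model pairs a symmetric scale/soft-iron matrix (6 parameters) with a hard-iron offset (3 parameters); the sketch below fits it by Gauss-Newton via scipy's least_squares rather than the paper's Newton-plus-HHT pipeline, on synthetic distorted measurements.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, m, field=1.0):
    """Calibrated readings A @ (m - b) should lie on a sphere of radius
    `field`; A is symmetric (p[0..5]), b is the offset (p[6..8])."""
    A = np.array([[p[0], p[1], p[2]],
                  [p[1], p[3], p[4]],
                  [p[2], p[4], p[5]]])
    b = p[6:9]
    cal = (m - b) @ A.T
    return np.linalg.norm(cal, axis=1) - field

# Synthetic measurements: unit-sphere field, per-axis scaling plus offset
rng = np.random.default_rng(1)
v = rng.normal(size=(500, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
m = v * np.array([1.2, 0.9, 1.05]) + np.array([0.3, -0.1, 0.2])
m += 0.005 * rng.normal(size=m.shape)

p0 = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # identity guess
fit = least_squares(residuals, p0, args=(m,))
print(fit.x[6:9])   # recovered hard-iron offset, close to (0.3, -0.1, 0.2)
```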
Simplified data reduction methods for the ECT test for mode 3 interlaminar fracture toughness
NASA Technical Reports Server (NTRS)
Li, Jian; Obrien, T. Kevin
1995-01-01
Simplified expressions for the parameter controlling the load point compliance and strain energy release rate were obtained for the Edge Crack Torsion (ECT) specimen for mode 3 interlaminar fracture toughness. Data reduction methods for mode 3 toughness based on the present analysis are proposed. The effect of the transverse shear modulus, G_23, on mode 3 interlaminar fracture toughness characterization was evaluated. Parameters influenced by the transverse shear modulus were identified. Analytical results indicate that a higher value of G_23 results in a lower load point compliance and lower mode 3 toughness estimation. The effect of G_23 on the mode 3 toughness using the ECT specimen is negligible when an appropriate initial delamination length is chosen. A conservative estimation of mode 3 toughness can be obtained by assuming G_23 = G_12 for any initial delamination length.
Yiannoutsos, Constantin Theodore; Johnson, Leigh Francis; Boulle, Andrew; Musick, Beverly Sue; Gsponer, Thomas; Balestre, Eric; Law, Matthew; Shepherd, Bryan E; Egger, Matthias
2012-01-01
Objective To provide estimates of mortality among HIV-infected patients starting combination antiretroviral therapy. Methods We report on the death rates from 122 925 adult HIV-infected patients aged 15 years or older from East, Southern and West Africa, Asia Pacific and Latin America. We use two methods to adjust for biases in mortality estimation resulting from loss from follow-up, based on double-sampling methods applied to patient outreach (Kenya) and linkage with vital registries (South Africa), and apply these to mortality estimates in the other three regions. Age, gender and CD4 count at the initiation of therapy were the factors considered as predictors of mortality at 6, 12, 24 and >24 months after the start of treatment. Results Patient mortality was high during the first 6 months after therapy for all patient subgroups and exceeded 40 per 100 patient years among patients who started treatment at low CD4 count. This trend was seen regardless of region, demographic or disease-related risk factor. Mortality was under-reported by up to or exceeding 100% when comparing estimates obtained from passive monitoring of patient vital status. Conclusions Despite advances in antiretroviral treatment coverage many patients start treatment at very low CD4 counts and experience significant mortality during the first 6 months after treatment initiation. Active patient tracing and linkage with vital registries are critical in adjusting estimates of mortality, particularly in low- and middle-income settings. PMID:23172344
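To make the loss-to-follow-up correction concrete, here is a minimal sketch of a double-sampling adjustment: deaths among patients lost to follow-up, estimated from an outreach or vital-registry subsample, are added back before computing the rate. All numbers are invented; the cohort analyses apply far more careful weighting than this.

```python
def adjusted_mortality(deaths, person_years, lost, p_dead_if_lost):
    """Mortality per 100 person-years after adding back the estimated
    deaths among patients lost to follow-up."""
    corrected = deaths + p_dead_if_lost * lost
    return 100.0 * corrected / person_years

# e.g. 300 recorded deaths, 5000 person-years, 800 lost, 40% of lost died
print(adjusted_mortality(300, 5000.0, 800, 0.40))
```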
A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.
Rodrigo, Marianito R
2016-01-01
The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes only a series of temperature readings as input. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, yielding reasonable estimates in both cases. The proposed procedure requires knowledge of neither the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. © 2015 American Academy of Forensic Sciences.
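To make the fitting step concrete, here is a minimal sketch in Python, assuming the Marshall-Hoare double exponential with two unknown cooling rates and an unknown time-since-death offset at the first reading. For brevity the ambient temperature and a 37.2 degC temperature at death are taken as known (the paper also recovers the death temperature), and the fixed initial guess stands in for the paper's systematic initialization procedure.

```python
import numpy as np
from scipy.optimize import least_squares

T_AMB, T_DEATH = 20.0, 37.2   # assumed known here (deg C)

def model(params, t):
    p, k, t0 = params                  # cooling rates; t0 = hours since death at t = 0
    s = t + t0
    return T_AMB + (T_DEATH - T_AMB) * (p*np.exp(-k*s) - k*np.exp(-p*s)) / (p - k)

def estimate_tod(t, temps):
    """t: hours since the first reading; temps: measured temperatures (deg C)."""
    init = np.array([0.4, 0.1, 5.0])   # a data-driven initial guess would go here
    sol = least_squares(lambda x: model(x, t) - temps, init,
                        bounds=([1e-3, 1e-3, 0.0], [5.0, 5.0, 48.0]))
    p, k, t0 = sol.x
    return -t0                         # TOD in hours relative to the first reading
```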
A 2D eye gaze estimation system with low-resolution webcam images
NASA Astrophysics Data System (ADS)
Ince, Ibrahim Furkan; Kim, Jin Woo
2011-12-01
In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose: one for eyeball detection with a stable approximate pupil center, and the other for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for making stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right and left eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass center of eyeball border vertices, which is used for initial deformable template alignment. DTBGE starts with this initial alignment and updates the template alignment with the resulting eye movements and eyeball size frame by frame. The horizontal and vertical deviation of eye movements relative to eyeball size is treated as directly proportional to the deviation of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil center as a reference point for gaze estimation, which makes it more robust to corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is presented and experimental results are shown.
Schomaker, Michael; Davies, Mary-Ann; Malateste, Karen; Renner, Lorna; Sawry, Shobna; N’Gbeche, Sylvie; Technau, Karl-Günter; Eboua, François; Tanser, Frank; Sygnaté-Sy, Haby; Phiri, Sam; Amorissani-Folquet, Madeleine; Cox, Vivian; Koueta, Fla; Chimbete, Cleophas; Lawson-Evi, Annette; Giddy, Janet; Amani-Bosse, Clarisse; Wood, Robin; Egger, Matthias; Leroy, Valeriane
2017-01-01
Background There is limited evidence regarding the optimal timing of initiating antiretroviral therapy (ART) in children. We conducted a causal modelling analysis in children aged 1-5 years from the International Epidemiologic Databases to Evaluate AIDS West/Southern-Africa collaboration to determine growth and mortality differences related to different CD4-based treatment initiation criteria, age groups and regions. Methods ART-naïve children aged 12-59 months at enrollment with at least one visit before ART initiation and one follow-up visit were included. We estimated 3-year growth and cumulative mortality from the start of follow-up for different CD4 criteria using g-computation. Results About one quarter of the 5826 included children were from West Africa (24.6%). The median (first; third quartile) CD4% at the first visit was 16% (11%; 23%), and the median weight-for-age and height-for-age z-scores were −1.5 (−2.7; −0.6) and −2.5 (−3.5; −1.5), respectively. Estimated cumulative mortality was higher overall, and growth was slower, when initiating ART at lower CD4 thresholds. After 3 years of follow-up, the estimated mortality difference between starting ART routinely irrespective of CD4 count and starting ART if either CD4 count <750 cells/mm3 or CD4% <25% was 0.2% (95% CI: −0.2%; 0.3%), and the difference in the mean height-for-age z-scores of those who survived was −0.02 (95% CI: −0.04; 0.01). Younger children aged 1-2 years and children in West Africa had worse outcomes. Conclusions Our results demonstrate that earlier treatment initiation yields overall better growth and mortality outcomes, though we could not show any differences in outcomes between immediate ART and delaying until the CD4 count falls below 750 cells/mm3 or CD4% falls below 25%. PMID:26479876
Non-Rigid Structure Estimation in Trajectory Space from Monocular Vision
Wang, Yaming; Tong, Lingling; Jiang, Mingfeng; Zheng, Junbao
2015-01-01
In this paper, the problem of non-rigid structure estimation in trajectory space from monocular vision is investigated. Similar to the Point Trajectory Approach (PTA), the structure matrix is calculated by a factorization method, with the characteristic points' trajectories described by a predefined Discrete Cosine Transform (DCT) basis. To further optimize the non-rigid structure estimation from monocular vision, a rank minimization problem on the structure matrix is formulated by introducing a low-rank constraint. Moreover, the Accelerated Proximal Gradient (APG) algorithm is used to solve the rank minimization problem, and the initial structure matrix calculated by the PTA method is optimized. The APG algorithm converges to efficient solutions quickly and noticeably reduces the reconstruction error. The reconstruction results on real image sequences indicate that the proposed approach runs reliably and effectively improves the accuracy of non-rigid structure estimation from monocular vision. PMID:26473863
Satellite Angular Rate Estimation From Vector Measurements
NASA Technical Reports Server (NTRS)
Azor, Ruth; Bar-Itzhack, Itzhack Y.; Harman, Richard R.
1996-01-01
This paper presents an algorithm for estimating the angular rate vector of a satellite based on the time derivatives of vector measurements expressed in reference and body coordinates. The computed derivatives are fed into a special Kalman filter which yields an estimate of the spacecraft angular velocity. The filter, named the Extended Interlaced Kalman Filter (EIKF), is an extension of the Kalman filter which, although linear, estimates the state of a nonlinear dynamic system. It consists of two or three parallel Kalman filters whose individual estimates are fed to one another and are considered as known inputs by the other parallel filter(s). The nonlinear dynamics stem from the nonlinear differential equation that describes the rotation of a three-dimensional body. Initial results, using simulated data and real Rossi X-ray Timing Explorer (RXTE) data, indicate that the algorithm is efficient and robust.
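The abstract does not spell out the filter equations, but the underlying measurement relation is easy to sketch: for a vector that is constant in the reference frame, its body-frame image b satisfies db/dt = b x omega, so stacking the skew-symmetric matrices of two or more non-parallel measured vectors gives a linear least-squares problem for the angular rate. The hedged Python sketch below shows only this deterministic core, which the EIKF would refine statistically; names are illustrative.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix so that skew(b) @ w == b x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def angular_rate(vectors, derivatives):
    """vectors: body-frame measurement vectors; derivatives: their time derivatives.
    Solves the stacked system skew(b_i) @ omega = bdot_i in the least-squares sense;
    at least two non-parallel vectors are needed for a unique solution."""
    A = np.vstack([skew(b) for b in vectors])
    y = np.hstack(derivatives)
    omega, *_ = np.linalg.lstsq(A, y, rcond=None)
    return omega
```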
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
Huang, Xingyue; Beresford, Eric; Lodise, Thomas; Friedland, H David
2013-06-15
The budgetary impact of adding ceftaroline fosamil to a hospital formulary for the treatment of acute bacterial skin and skin structure infections (ABSSSIs) was evaluated. A three-year hospital budget impact model was constructed with three initial treatment options for ABSSSIs: ceftaroline fosamil, vancomycin plus aztreonam, and other vancomycin-containing regimens. The target population was hospitalized adult patients with an ABSSSI. Clinical cure rates with initial treatment were assumed to be similar to those from ceftaroline fosamil clinical trials. Patients who did not respond to initial treatment were assumed to be treated successfully with second-line antimicrobial therapy. Length of stay and cost per hospital day (by success or failure with initial treatment) were estimated based on a large database from more than 100 U.S. hospitals. Other model inputs included the annual number of ABSSSI admissions, projected annual case growth rate, proportion of ABSSSI target population receiving vancomycin-containing regimen, expected proportion of ABSSSI target population to be treated with ceftaroline fosamil, drug acquisition cost, cost of antibiotic administration, and cost of vancomycin monitoring. Sensitivity analysis using 95% confidence limits of clinical cure rates was also performed. The estimated total cost of care for treating a patient with an ABSSSI was $395 lower with ceftaroline fosamil ($15,087 versus $15,482) compared with vancomycin plus aztreonam and $72 lower ($15,087 versus $15,159) compared with other vancomycin-containing regimens. Model estimates indicated that adding ceftaroline fosamil to the hospital formulary would not have a negative effect on a hospital's budget for ABSSSI treatment.
Cerebrospinal fluid neopterin decay characteristics after initiation of antiretroviral therapy.
Yilmaz, Aylin; Yiannoutsos, Constantin T; Fuchs, Dietmar; Price, Richard W; Crozier, Kathryn; Hagberg, Lars; Spudich, Serena; Gisslén, Magnus
2013-05-10
Neopterin, a biomarker of macrophage activation, is elevated in the cerebrospinal fluid (CSF) of most HIV-infected individuals and decreases after initiation of antiretroviral therapy (ART). We studied the decay characteristics of neopterin in CSF and blood after commencement of ART in HIV-infected subjects and estimated the set-point levels of CSF neopterin after ART-mediated viral suppression. CSF and blood neopterin were longitudinally measured in 102 neurologically asymptomatic HIV-infected subjects who were treatment-naïve or had been off ART for ≥ 6 months. We used a non-linear model to estimate neopterin decay in response to ART and the stable neopterin set-point attained after prolonged ART. Seven subjects with HIV-associated dementia (HAD) who initiated ART were studied for comparison. Non-HAD patients were followed for a median 84.7 months. Though CSF neopterin concentrations decreased rapidly after ART initiation, it was estimated that set-point levels would be below normal CSF neopterin levels (<5.8 nmol/L) in only 60/102 (59%) of these patients. Pre-ART CSF neopterin was the primary predictor of the set-point (P <0.001). HAD subjects had higher baseline median CSF neopterin levels than non-HAD subjects (P <0.0001). Based on the non-HAD model, only 14% of HAD patients were predicted to reach normal levels. After virologically suppressive ART, abnormal CSF neopterin levels persisted in 41% of non-HAD patients and the majority of HAD patients. ART is thus not fully effective in ameliorating macrophage activation in the CNS and in blood, especially in subjects with higher pre-ART levels of immune activation.
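A hedged sketch of the set-point idea: assuming a single-exponential decay from the pre-ART level to a stable set-point (the paper's exact non-linear model is not given in the abstract), the set-point can be estimated with an ordinary curve fit; the data values below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, n_set, n0, rate):
    """Exponential decay from pre-ART level n0 toward a stable set-point n_set."""
    return n_set + (n0 - n_set) * np.exp(-rate * t)

# t in months since ART initiation; CSF neopterin in nmol/L (hypothetical data)
t = np.array([0, 2, 6, 12, 24, 48], dtype=float)
csf_neopterin = np.array([21.0, 12.5, 8.0, 6.4, 5.9, 5.7])
(n_set, n0, rate), _ = curve_fit(decay, t, csf_neopterin, p0=[5.0, 20.0, 0.5])
print(f"estimated set-point: {n_set:.1f} nmol/L")  # compare with the <5.8 nmol/L norm
```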
2012-09-30
...in order to understand its role in transporting moisture into the upper troposphere and its effect on the initiation and propagation phases of the Madden-Julian Oscillation. [Figure caption fragment: estimates of cloud base from a ceilometer; gray lines are composited insolation measurements indicating day versus night conditions.]
Katherine Sinacore; Jefferson Scott Hall; Catherine Potvin; Alejandro A. Royo; Mark J. Ducey; Mark S. Ashton; Shijo Joseph
2017-01-01
The potential benefits of planting trees have generated significant interest with respect to sequestering carbon and restoring other forest-based ecosystem services. Reliable estimates of carbon stocks are pivotal for understanding the global carbon balance and for promoting initiatives to mitigate CO2 emissions through forest management. There...
Costs of fire suppression forces based on cost-aggregation approach
Armando González-Cabán; Charles W. McKetta; Thomas J. Mills
1984-01-01
A cost-aggregation approach has been developed for determining the cost of Fire Management Inputs (FMIs), the direct fireline production units (personnel and equipment) used in initial attack and large-fire suppression activities. All components contributing to an FMI are identified, computed, and summed to estimate hourly costs. This approach can be applied to any FMI...
Comparison of different stomatal conductance algorithms for ozone flux modelling [Proceedings
P. Buker; L. D. Emberson; M. R. Ashmore; G. Gerosa; C. Jacobs; W. J. Massman; J. Muller; N. Nikolov; K. Novak; E. Oksanen; D. De La Torre; J. -P. Tuovinen
2006-01-01
The ozone deposition model (DO3SE) that has been developed and applied within the EMEP photooxidant model (Emberson et al., 2000; Simpson et al., 2003) currently estimates stomatal ozone flux using a stomatal conductance (gs) model based on the multiplicative algorithm initially developed by Jarvis (1976). This model links gs to environmental and phenological parameters...
NASA Astrophysics Data System (ADS)
Rakotomanga, Prisca; Soussen, Charles; Blondel, Walter C. P. M.
2017-03-01
Diffuse reflectance spectroscopy (DRS) has been acknowledged as a valuable optical biopsy tool for in vivo characterization of pathological modifications in epithelial tissues such as cancer. In spatially resolved DRS, accurate and robust estimation of the optical parameters (OP) of biological tissues is a major challenge due to the complexity of the physical models. Solving this inverse problem requires considering three components: the forward model, the cost function, and the optimization algorithm. This paper presents a comparative numerical study of the performance in estimating OP depending on the choice made for each of these components. Mono- and bi-layer tissue models are considered. Monowavelength (scalar) absorption and scattering coefficients are estimated. As a forward model, diffusion approximation analytical solutions with and without noise are implemented. Several cost functions are evaluated, possibly including normalized data terms. Two local optimization methods, Levenberg-Marquardt and Trust-Region-Reflective, are considered. Because they may be sensitive to the initial setting, a global optimization approach is proposed to improve the estimation accuracy. This algorithm is based on repeated calls to the above-mentioned local methods, with initial parameters randomly sampled. Two global optimization methods, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are also implemented. Estimation performance is evaluated in terms of relative errors between the ground truth and the estimated values for each set of unknown OP. The combinations of the number of variables to be estimated, the nature of the forward model, the cost function to be minimized and the optimization method are discussed.
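The proposed global strategy, repeated local solves from randomly sampled initial parameters, is easy to sketch in Python; the residual function standing in for the diffusion-approximation forward model, the bounds, and the start count are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

def multistart_fit(residual_fn, bounds, n_starts=30):
    """residual_fn(x) -> residual vector; bounds: (lower, upper) arrays for the
    optical parameters (e.g. absorption and reduced scattering coefficients)."""
    lo, hi = map(np.asarray, bounds)
    best = None
    for _ in range(n_starts):
        x0 = lo + rng.random(lo.size) * (hi - lo)   # random initial parameters
        sol = least_squares(residual_fn, x0, bounds=(lo, hi), method="trf")
        if best is None or sol.cost < best.cost:
            best = sol                              # keep the lowest-cost local fit
    return best.x
```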
Automatic characterization of sleep need dissipation dynamics using a single EEG signal.
Garcia-Molina, Gary; Bellesi, Michele; Riedner, Brady; Pastoor, Sander; Pfundtner, Stefan; Tononi, Giulio
2015-01-01
In the two-process model of sleep regulation, slow-wave activity (SWA, i.e. the EEG power in the 0.5-4 Hz frequency band) is considered a direct indicator of sleep need. SWA builds up during non-rapid eye movement (NREM) sleep, declines before the onset of rapid-eye-movement (REM) sleep, remains low during REM, and the level of increase in successive NREM episodes gets progressively lower. Sleep need dissipates at a speed proportional to SWA and can be characterized in terms of the initial sleep need and the decay rate. The goal in this paper is to automatically characterize sleep need from a single EEG signal acquired at a frontal location. To achieve this, a highly specific and reasonably sensitive NREM detection algorithm is proposed that leverages the concept of a single-class kernel-based classifier. Using automatic NREM detection, we propose a method to estimate the decay rate and the initial sleep need. This method was tested on experimental data from 8 subjects who recorded EEG during three nights at home. We found that on average the estimates of the decay rate and the initial sleep need have higher values when automatic NREM detection was used as compared to manual NREM annotation. However, the average variability of these estimates across multiple nights of the same subject was lower when the automatic NREM detection classifier was used. While this method slightly overestimates the sleep need parameters, the reduced variability makes it more effective for within-subject statistical comparisons of a given sleep intervention.
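A minimal sketch of the parameter extraction, assuming the classical form in which sleep need decays exponentially, S(t) = S0 exp(-t/tau), with SWA tracking S: a log-linear fit of per-episode NREM SWA against episode time yields the initial sleep need and the decay rate. The exact estimator used in the paper may differ.

```python
import numpy as np

def sleep_need_parameters(episode_times, episode_swa):
    """episode_times: midpoints (hours from sleep onset) of detected NREM episodes;
    episode_swa: mean 0.5-4 Hz EEG power in each episode.
    Fits log(SWA) = log(S0) - t/tau by ordinary least squares."""
    slope, intercept = np.polyfit(episode_times, np.log(episode_swa), 1)
    decay_rate = -slope            # 1/tau, per hour
    initial_need = np.exp(intercept)
    return initial_need, decay_rate
```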
Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I
2009-01-01
Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods, respectively.
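A 2D sketch of the multi-scale LoG search follows (the paper operates on 3D CT volumes and additionally prunes candidates by response and location before selecting the largest; here only the strongest scale-normalized response near the seed is kept for brevity). The bright-blob assumption and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def estimate_nodule(image, seed, sigmas=np.linspace(1.0, 10.0, 19), search=5):
    """Scale-normalized LoG blob search near a seed point (bright nodule assumed).
    Returns (location, radius) of the strongest response in a small window."""
    best_loc, best_r, best_val = None, 0.0, -np.inf
    r0, c0 = max(0, seed[0] - search), max(0, seed[1] - search)
    for sigma in sigmas:
        # negate so bright blobs give positive responses; sigma**2 normalizes scale
        resp = -sigma**2 * gaussian_laplace(image.astype(float), sigma)
        win = resp[r0:seed[0] + search + 1, c0:seed[1] + search + 1]
        i, j = np.unravel_index(np.argmax(win), win.shape)
        if win[i, j] > best_val:
            best_loc, best_val = (r0 + i, c0 + j), win[i, j]
            best_r = np.sqrt(2.0) * sigma   # peak response at r = sqrt(2)*sigma in 2D
    return best_loc, best_r
```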
Soares Magalhães, Ricardo J; Pfeiffer, Dirk U; Otte, Joachim
2010-06-05
Currently, the highly pathogenic avian influenza virus (HPAIV) of the subtype H5N1 is believed to have reached an endemic cycle in Vietnam. We used routine surveillance data on HPAIV H5N1 poultry outbreaks in Vietnam to estimate and compare the within-flock reproductive number of infection (R0) for periods before (second epidemic wave, 2004-5; depopulation-based disease control) and during (fourth epidemic wave, beginning 2007; vaccination-based disease control) vaccination. Our results show that infected premises (IPs) in the initial (exponential) phases of outbreak periods have the highest R0 estimates. The IPs reported during the outbreak period when depopulation-based disease control was implemented had higher R0 estimates than IPs reported during the outbreak period when vaccination-based disease control was used. In the latter period, in some flocks of a defined size and species composition, within-flock transmission estimates were not significantly below the threshold for transmission (R0 < 1). Our results indicate that the current control policy based on depopulation plus vaccination has protected the majority of poultry flocks against infection. However, in some flocks the determinants associated with suboptimal protection need to be further investigated as these may explain the current pattern of infection in animal and human populations.
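The abstract does not state the estimator; one standard within-flock approach, shown here as a hedged stand-in rather than the authors' method, inverts the final-size relation ln(s_inf) = R0 (s_inf - 1), where s_inf is the fraction of the flock still susceptible when the outbreak ends.

```python
import numpy as np

def r0_final_size(flock_size, n_infected):
    """Solve the final-size relation ln(s_inf) = R0 * (s_inf - 1) for R0."""
    s_inf = 1.0 - n_infected / flock_size      # susceptible fraction at the end
    if s_inf <= 0.0:
        raise ValueError("entire flock infected; final-size relation degenerate")
    return np.log(s_inf) / (s_inf - 1.0)

print(r0_final_size(1000, 800))   # e.g. an 80% attack rate gives R0 of about 2.0
```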
Williams, Paige L; Seage, George R; Van Dyke, Russell B; Siberry, George K; Griner, Raymond; Tassiopoulos, Katherine; Yildirim, Cenk; Read, Jennifer S; Huo, Yanling; Hazra, Rohan; Jacobson, Denise L; Mofenson, Lynne M; Rich, Kenneth
2012-05-01
The Pediatric HIV/AIDS Cohort Study's Surveillance Monitoring of ART Toxicities Study is a prospective cohort study conducted at 22 US sites between 2007 and 2011 that was designed to evaluate the safety of in utero antiretroviral drug exposure in children not infected with human immunodeficiency virus who were born to mothers who were infected. This ongoing study uses a "trigger-based" design; that is, initial assessments are conducted on all children, and only those meeting certain thresholds or "triggers" undergo more intensive evaluations to determine whether they have had an adverse event (AE). The authors present the estimated rates of AEs for each domain of interest in the Surveillance Monitoring of ART Toxicities Study. They also evaluated the efficiency of this trigger-based design for estimating AE rates and for testing associations between in utero exposures to antiretroviral drugs and AEs. The authors demonstrate that estimated AE rates from the trigger-based design are unbiased after correction for the sensitivity of the trigger for identifying AEs. Even without correcting for bias based on trigger sensitivity, the trigger approach is generally more efficient for estimating AE rates than is evaluating a random sample of the same size. Minor losses in efficiency when comparing AE rates between persons exposed and unexposed in utero to particular antiretroviral drugs or drug classes were observed under most scenarios.
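The bias correction described above reduces to simple arithmetic: if the trigger flags true AEs with known sensitivity, dividing the observed triggered AE rate by that sensitivity gives an unbiased rate estimate. A tiny sketch with hypothetical numbers, not the study's actual values:

```python
def corrected_ae_rate(n_ae_triggered, person_time, sensitivity):
    """AE rate among triggered children, scaled up by trigger sensitivity."""
    return n_ae_triggered / person_time / sensitivity

# hypothetical: 30 AEs found via triggers over 2000 child-years, sensitivity 0.8
print(corrected_ae_rate(30, 2000.0, 0.8))  # 0.01875 AEs per child-year
```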
Sheahan, Anna; Feinstein, Lydia; Dube, Queen; Edmonds, Andrew; Chirambo, Chawanangwa Mahebere; Smith, Emily; Behets, Frieda; Heyderman, Robert; Van Rie, Annelies
2017-07-01
Based on clinical trial results, the World Health Organization recommends infant HIV testing at age 4-6 weeks and immediate antiretroviral therapy (ART) initiation in all HIV-infected infants. Little is known about the outcomes of HIV-infected infants diagnosed with HIV in the first weeks of life in resource-limited settings. We assessed ART initiation and mortality in the first year of life among infants diagnosed with HIV by 12 weeks of age. A cohort of HIV-infected infants in Kinshasa and Blantyre diagnosed before 12 weeks of age was followed to estimate 12-month cumulative incidences of ART initiation and mortality, accounting for competing risks. Multivariate models were used to estimate associations between infant characteristics and the timing of ART initiation. One hundred and twenty-one infants were diagnosed at a median age of 7 weeks (interquartile range, 6-8). The cumulative incidence of ART initiation was 46% [95% confidence interval (CI) 36%, 55%] at 6 months and 70% (95% CI 60%, 78%) at 12 months. Only age at HIV diagnosis was associated with ART initiation by age 6 months, with a subdistribution hazard ratio of 0.70 (95% CI 0.52, 0.91) for each week increase in age at the DNA polymerase chain reaction test. The 12-month cumulative incidence of mortality was 20% (95% CI 13%, 28%). Despite early diagnosis of HIV, ART initiation was slow and mortality remained high, underscoring the complexity of translating clinical trial findings and World Health Organization guidance into real-life practice. Novel and creative health system interventions will be required to ensure that all HIV-infected infants achieve optimal treatment outcomes under routine care settings.
Mesoscopic modeling and parameter estimation of a lithium-ion battery based on LiFePO4/graphite
NASA Astrophysics Data System (ADS)
Jokar, Ali; Désilets, Martin; Lacroix, Marcel; Zaghib, Karim
2018-03-01
A novel numerical model for simulating the behavior of lithium-ion batteries based on LiFePO4 (LFP)/graphite is presented. The model is based on the modified Single Particle Model (SPM) coupled to a mesoscopic approach for the LFP electrode. The model comprises one representative spherical particle as the graphite electrode, and N LFP units as the positive electrode. All the SPM equations are retained to model the negative electrode performance. The mesoscopic model rests on non-equilibrium thermodynamic conditions and uses a non-monotonic open circuit potential for each unit. A parameter estimation study is also carried out to identify all the parameters needed for the model. The unknown parameters are the solid diffusion coefficient of the negative electrode (Ds,n), the reaction-rate constant of the negative electrode (Kn), the negative and positive electrode porosities (εn and εp), the initial State-Of-Charge of the negative electrode (SOCn,0), the initial partial composition of the LFP units (yk,0), the minimum and maximum resistance of the LFP units (Rmin and Rmax), and the solution resistance (Rcell). The results show that the mesoscopic model can successfully simulate the electrochemical behavior of lithium-ion batteries at low and high charge/discharge rates. The model also adequately describes the lithiation/delithiation of the LFP particles; however, it is computationally expensive compared to macro-based models.
Human papillomavirus vaccination in Auckland: reducing ethnic and socioeconomic inequities.
Poole, Tracey; Goodyear-Smith, Felicity; Petousis-Harris, Helen; Desmond, Natalie; Exeter, Daniel; Pointon, Leah; Jayasinha, Ranmalie
2012-12-17
The New Zealand publicly funded HPV immunisation programme commenced in September 2008. Delivery through a school-based programme was anticipated to result in higher coverage rates and reduced inequalities compared to vaccination delivered through other settings. The programme provided for on-going vaccination of girls in year 8, with an initial catch-up programme through general practices, until the end of 2010, for young women born after 1 January 1990. The aim was to assess the uptake of the funded HPV vaccine through school-based vaccination programmes in secondary schools and through general practices in 2009, and the factors associated with coverage, by database matching. A retrospective quantitative analysis was performed on secondary anonymised data from the School-Based Vaccination Service and National Immunisation Register databases for female students from secondary schools in the Auckland District Health Board catchment area. Data included student and school demographic and other variables. Binary logistic regression was used to estimate odds ratios and significance for univariables. Multivariable logistic regression estimated the strength of association between individual factors and initiation and completion, adjusted for all other factors. The programme achieved overall coverage of 71.5%, with Pacific girls highest at 88% and Maori at 78%. Girls of higher socioeconomic status were more likely to be vaccinated in general practice. The school-based vaccination service, targeted at ethnic sub-populations, provided equity for Maori and Pacific students, who achieved high levels of vaccination. Copyright © 2012 Elsevier Ltd. All rights reserved.
Increasing the automation of a 2D-3D registration system.
Varnavas, Andreas; Carrell, Tom; Penney, Graeme
2013-02-01
Routine clinical use of 2D-3D registration algorithms for Image Guided Surgery remains limited. A key aspect for routine clinical use of this technology is its degree of automation, i.e., the amount of necessary knowledgeable interaction between the clinicians and the registration system. Current image-based registration approaches usually require knowledgeable manual interaction during two stages: for initial pose estimation and for verification of produced results. We propose four novel techniques, particularly suited to vertebra-based registration systems, which can significantly automate both of the above stages. Two of these techniques are based upon the intraoperative "insertion" of a virtual fiducial marker into the preoperative data. The remaining two techniques use the final registration similarity value between multiple CT vertebrae and a single fluoroscopy vertebra. The proposed methods were evaluated with data from 31 operations (31 CT scans, 419 fluoroscopy images). Results show these methods can remove the need for manual vertebra identification during initial pose estimation, and were also very effective for result verification, producing a combined true positive rate of 100% and false positive rate equal to zero. This large decrease in required knowledgeable interaction is an important contribution aiming to enable more widespread use of 2D-3D registration technology.
Empirical Approach to Understanding the Fatigue Behavior of Metals Made Using Additive Manufacturing
NASA Astrophysics Data System (ADS)
Witkin, David B.; Albright, Thomas V.; Patel, Dhruv N.
2016-08-01
High-cycle fatigue measurements were performed on alloys prepared by powder-bed fusion additive manufacturing techniques. Selective laser melted (SLM) nickel-based superalloy 625 and electron beam melted (EBM) Ti-6Al-4V specimens were prepared as round fatigue specimens and tested with as-built surfaces at stress ratios of -1, 0.1, and 0.5. Data collected at R = -1 were used to construct Goodman diagrams that correspond closely to measured experimental data collected at R > 0. A second way to interpret the HCF data is based on the influence of surface roughness on fatigue, approximating the surface feature size as a notch. On this basis, the data were interpreted using the fatigue notch factor kf and average stress models relating kf to the stress concentration factor Kt. The depth and root radius of surface features associated with fatigue crack initiation were used to estimate a Kt of 2.8 for SLM 625. For Ti-6Al-4V, a direct estimate of Kt from the HCF data was not possible, but approximate values of kf based on HCF data, and of Kt from crack initiation site geometry, are found to be consistent with other published EBM Ti-6Al-4V results.
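One common average-stress-type relation, shown below as an illustrative stand-in rather than the authors' exact model, is Peterson's kf = 1 + (Kt - 1)/(1 + a/r), with a material constant a and notch root radius r, combined with the shallow-notch approximation Kt = 1 + 2*sqrt(depth/r); the constant a is a placeholder value.

```python
import math

def k_t_surface_notch(depth_um, root_radius_um):
    """Elastic stress concentration of a shallow surface notch (approximation)."""
    return 1.0 + 2.0 * math.sqrt(depth_um / root_radius_um)

def k_f_peterson(k_t, root_radius_um, a_um=100.0):
    """Peterson average-stress estimate of the fatigue notch factor;
    a_um is a material constant and is a placeholder here."""
    return 1.0 + (k_t - 1.0) / (1.0 + a_um / root_radius_um)

kt = k_t_surface_notch(depth_um=80.0, root_radius_um=100.0)  # hypothetical feature
print(kt, k_f_peterson(kt, root_radius_um=100.0))
```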
Ahmed, M Razu; Rahaman, Khan Rubayet; Kok, Aaron; Hassan, Quazi K
2017-10-14
The northeastern region of Bangladesh often experiences flash flooding during the pre-harvesting period of the boro rice crop, which is the major cereal crop in the country. In this study, our objective was to delineate the impact of the 2017 flash flood (that initiated on 27 March 2017) on boro rice using multi-temporal Landsat-8 OLI and MODIS data. Initially, we opted to use Landsat-8 OLI data for mapping the damages; however, during and after the flooding event the acquisition of cloud-free images was challenging. Thus, we used these data to map the cultivated boro rice acreage considering the planting to mature stages of the crop. Also, in order to map the extent of the damaged boro area, we utilized MODIS data as their 16-day composites provided cloud-free information. Our results indicated that both the cultivated and damaged boro area estimates based on satellite data had strong relationships with the ground-based estimates (i.e., r² values of approximately 0.92 for both cases, and RMSE of 18,374 and 9380 ha for cultivated and damaged areas, respectively). Finally, we believe that our study would be critical for planning and ensuring food security for the country. PMID:29036896
Runoff simulation sensitivity to remotely sensed initial soil water content
NASA Astrophysics Data System (ADS)
Goodrich, D. C.; Schmugge, T. J.; Jackson, T. J.; Unkrich, C. L.; Keefer, T. O.; Parry, R.; Bach, L. B.; Amer, S. A.
1994-05-01
A variety of aircraft remotely sensed and conventional ground-based measurements of volumetric soil water content (SW) were made over two subwatersheds (4.4 and 631 ha) of the U.S. Department of Agriculture's Agricultural Research Service Walnut Gulch experimental watershed during the 1990 monsoon season. Spatially distributed soil water contents estimated remotely from the NASA push broom microwave radiometer (PBMR), an Institute of Radioengineering and Electronics (IRE) multifrequency radiometer, and three ground-based point methods were used to define prestorm initial SW for a distributed rainfall-runoff model (KINEROS; Woolhiser et al., 1990) at a small catchment scale (4.4 ha). At a medium catchment scale (631 ha or 6.31 km2) spatially distributed PBMR SW data were aggregated via stream order reduction. The impacts of the various spatial averages of SW on runoff simulations are discussed and are compared to runoff simulations using SW estimates derived from a simple daily water balance model. It was found that at the small catchment scale the SW data obtained from any of the measurement methods could be used to obtain reasonable runoff predictions. At the medium catchment scale, a basin-wide remotely sensed average of initial water content was sufficient for runoff simulations. This has important implications for the possible use of satellite-based microwave soil moisture data to define prestorm SW because the low spatial resolutions of such sensors may not seriously impact runoff simulations under the conditions examined. However, at both the small and medium basin scale, adequate resources must be devoted to proper definition of the input rainfall to achieve reasonable runoff simulations.
Independent tasks scheduling in cloud computing via improved estimation of distribution algorithm
NASA Astrophysics Data System (ADS)
Sun, Haisheng; Xu, Rui; Chen, Huaping
2018-04-01
To minimize makespan for scheduling independent tasks in cloud computing, an improved estimation of distribution algorithm (IEDA) is proposed to tackle the investigated problem in this paper. Considering that the problem is a multi-dimensional discrete problem, an improved population-based incremental learning (PBIL) algorithm is applied, in which the parameter for each component is independent of the other components. To improve the performance of PBIL, on the one hand, an integer encoding scheme is used and the probability calculation of PBIL is improved by using the task average processing time; on the other hand, an effective adaptive learning rate function related to the number of iterations is constructed to trade off the exploration and exploitation of IEDA. In addition, enhanced Max-Min and Min-Min algorithms are introduced to form two initial individuals. In the proposed IEDA, an improved genetic algorithm (IGA) is applied to generate part of the initial population by evolving the two initial individuals, and the rest of the initial individuals are generated at random. Finally, the sampling process is divided into two parts: sampling by the probabilistic model and by the IGA, respectively. The experimental results show that the proposed IEDA not only obtains better solutions, but also has a faster convergence speed.
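A compact sketch of the PBIL core with integer encoding and an iteration-dependent learning rate, as described above; the Max-Min/Min-Min seeding and the IGA sampling branch are omitted, and all numeric settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pbil_schedule(proc_time, n_iter=200, pop=50):
    """proc_time[i, j]: processing time of task i on machine j.
    One independent categorical distribution is kept per task (per component)."""
    n_tasks, n_machines = proc_time.shape
    P = np.full((n_tasks, n_machines), 1.0 / n_machines)   # probability model
    best, best_makespan = None, np.inf
    for it in range(n_iter):
        lr = 0.05 + 0.25 * it / n_iter        # adaptive learning rate (illustrative)
        for _ in range(pop):
            assign = np.array([rng.choice(n_machines, p=P[i]) for i in range(n_tasks)])
            loads = np.zeros(n_machines)
            for i, m in enumerate(assign):
                loads[m] += proc_time[i, m]
            if loads.max() < best_makespan:   # makespan = heaviest machine load
                best, best_makespan = assign, loads.max()
        onehot = np.eye(n_machines)[best]     # shift the model toward the best individual
        P = (1 - lr) * P + lr * onehot
        P /= P.sum(axis=1, keepdims=True)
    return best, best_makespan
```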
Dodani, Sunjay S; Lu, Charles W; Aldridge, J Wayne; Chou, Kelvin L; Patil, Parag G
2018-06-01
Accurate electrode placement is critical to the success of deep brain stimulation (DBS) surgery. Suboptimal targeting may arise from poor initial target localization, frame-based targeting error, or intraoperative brain shift. These uncertainties can make DBS surgery challenging. To develop a computerized system to guide subthalamic nucleus (STN) DBS electrode localization and to estimate the trajectory of intraoperative microelectrode recording (MER) on magnetic resonance (MR) images algorithmically during DBS surgery. Our method is based upon the relationship between the high-frequency band (HFB; 500-2000 Hz) signal from MER and voxel intensity on MR images. The HFB profile along an MER trajectory recorded during surgery is compared to voxel intensity profiles along many potential trajectories in the region of the surgically planned trajectory. From these comparisons of HFB recordings and potential trajectories, an estimate of the MER trajectory is calculated. This calculated trajectory is then compared to actual trajectory, as estimated by postoperative high-resolution computed tomography. We compared 20 planned, calculated, and actual trajectories in 13 patients who underwent STN DBS surgery. Targeting errors for our calculated trajectories (2.33 mm ± 0.2 mm) were significantly less than errors for surgically planned trajectories (2.83 mm ± 0.2 mm; P = .01), improving targeting prediction in 70% of individual cases (14/20). Moreover, in 4 of 4 initial MER trajectories that missed the STN, our method correctly indicated the required direction of targeting adjustment for the DBS lead to intersect the STN. A computer-based algorithm simultaneously utilizing MER and MR information potentially eases electrode localization during STN DBS surgery.
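The profile-matching step can be sketched as a normalized correlation between the recorded HFB depth profile and the MR voxel-intensity profile along each candidate trajectory; trajectory generation and MR sampling are abstracted away, and the scoring choice is an assumption consistent with, but not necessarily identical to, the authors' comparison.

```python
import numpy as np

def normalized(x):
    """Zero-mean, unit-variance version of a profile."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (x.std() + 1e-12)

def best_trajectory(hfb_profile, candidate_profiles):
    """candidate_profiles: dict mapping trajectory id -> voxel-intensity profile
    sampled at the same depths as hfb_profile. Returns the best-matching id."""
    h = normalized(hfb_profile)
    scores = {tid: float(np.dot(h, normalized(p)) / len(h))
              for tid, p in candidate_profiles.items()}
    return max(scores, key=scores.get), scores
```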
Nelson, Richard E; Stevens, Vanessa W; Khader, Karim; Jones, Makoto; Samore, Matthew H; Evans, Martin E; Douglas Scott, R; Slayton, Rachel B; Schweizer, Marin L; Perencevich, Eli L; Rubin, Michael A
2016-05-01
In an effort to reduce methicillin-resistant Staphylococcus aureus (MRSA) transmission through universal screening and isolation, the Department of Veterans Affairs (VA) launched the National MRSA Prevention Initiative in October 2007. The objective of this analysis was to quantify the budget impact and cost effectiveness of this initiative. An economic model was developed using published data on MRSA hospital-acquired infection (HAI) rates in the VA from October 2007 to September 2010; estimates of the costs of MRSA HAIs in the VA; and estimates of the intervention costs, including salaries of staff members hired to support the initiative at each VA facility. To estimate the rate of MRSA HAIs that would have occurred if the initiative had not been implemented, two different assumptions were made: no change and a downward temporal trend. Effectiveness was measured in life-years gained. The initiative resulted in an estimated 1,466-2,176 fewer MRSA HAIs. The initiative itself was estimated to cost $207 million during this 3-year period, while the cost savings from prevented MRSA HAIs ranged from $27 million to $75 million. The incremental cost-effectiveness ratios ranged from $28,048 to $56,944 per life-year gained. The overall impact on the VA's budget was $131-$179 million. Wide-scale implementation of a national MRSA surveillance and prevention strategy in VA inpatient settings may have prevented a substantial number of MRSA HAIs. Although the savings associated with prevented infections helped offset some but not all of the cost of the initiative, this model indicated that the initiative would be considered cost effective. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
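The reported budget figures follow from simple arithmetic on the numbers in the abstract; a sketch is given below (the small discrepancies with the reported $131-$179 million range reflect rounding in the published inputs).

```python
intervention_cost = 207e6                    # reported 3-year initiative cost
savings_low, savings_high = 27e6, 75e6       # reported savings from prevented HAIs

net_low = intervention_cost - savings_high   # about $132M
net_high = intervention_cost - savings_low   # about $180M
print(f"net budget impact: ${net_low/1e6:.0f}M to ${net_high/1e6:.0f}M")

def icer(net_cost, life_years_gained):
    """Incremental cost-effectiveness ratio in dollars per life-year gained."""
    return net_cost / life_years_gained
```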
FracFit: A Robust Parameter Estimation Tool for Anomalous Transport Problems
NASA Astrophysics Data System (ADS)
Kelly, J. F.; Bolster, D.; Meerschaert, M. M.; Drummond, J. D.; Packman, A. I.
2016-12-01
Anomalous transport cannot be adequately described with classical Fickian advection-dispersion equations (ADE). Rather, fractional calculus models may be used, which capture non-Fickian behavior (e.g., skewness and power-law tails). FracFit is a robust parameter estimation tool based on space- and time-fractional models used to model anomalous transport. Currently, four fractional models are supported: 1) the space-fractional advection-dispersion equation (sFADE), 2) the time-fractional dispersion equation with drift (TFDE), 3) the fractional mobile-immobile equation (FMIE), and 4) the tempered fractional mobile-immobile equation (TFMIE); additional models may be added in the future. Model solutions using pulse initial conditions and continuous injections are evaluated using stable distribution PDFs and CDFs or subordination integrals. Parameter estimates are extracted from measured breakthrough curves (BTCs) using a weighted nonlinear least squares (WNLS) algorithm. Optimal weights for BTCs for pulse initial conditions and continuous injections are presented, facilitating the estimation of power-law tails. Two sample applications are analyzed: 1) continuous injection laboratory experiments using natural organic matter and 2) pulse injection BTCs in the Selke river. Model parameters are compared across models and goodness-of-fit metrics are presented, assisting model evaluation. The sFADE and time-fractional models are compared using space-time duality (Baeumer et al., 2009), which links the two paradigms.
NASA Astrophysics Data System (ADS)
Xu, Peiliang
2018-06-01
The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such Earth's gravitational products have found widest possible multidisciplinary applications in Earth Sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the conditions of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in the Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are incorrect mathematically and not permitted physically. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is groundless, mathematically and physically. Given the Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive global uniformly convergent solutions to the Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models. Since the solutions are global uniformly convergent, theoretically speaking, they are able to extract smallest possible gravitational signals from modern and future satellite tracking measurements, leading to the production of global high-precision, high-resolution gravitational models. By directly turning the nonlinear differential equations of satellite motion into the nonlinear integral equations, and recognizing the fact that satellite orbits are measured with random errors, we further reformulate the links between satellite tracking measurements and the global uniformly convergent solutions to the Newton's governing differential equations as a condition adjustment model with unknown parameters, or equivalently, the weighted least squares estimation of unknown differential equation parameters with equality constraints, for the reconstruction of global high-precision, high-resolution gravitational models from modern (and future) satellite tracking measurements.
Statistical Bayesian method for reliability evaluation based on ADT data
NASA Astrophysics Data System (ADS)
Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong
2018-05-01
Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, the latter being the more popular. However, limitations remain, such as an imprecise solution process and inaccurate estimation of the degradation rate, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the existing Bayesian solution to this problem loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian framework is proposed to handle degradation data and address the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions at each stress level are calculated by updating and iterating the estimates; third, lifetime and reliability are estimated on the basis of these parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
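A sketch of the Wiener-process building block, assuming increments over dt are Gaussian with mean mu*dt and variance sigma^2*dt: closed-form estimates from one degradation path, plus the mean first-passage time to a failure threshold D (equal to D/mu for positive drift). The Bayesian updating across stress levels described in the paper is omitted here.

```python
import numpy as np

def fit_wiener(times, x):
    """Estimate drift and diffusion of a Wiener degradation path x(t)."""
    dt, dx = np.diff(times), np.diff(x)
    mu = dx.sum() / dt.sum()                    # MLE of the drift
    sigma2 = np.mean((dx - mu * dt) ** 2 / dt)  # MLE of the diffusion variance
    return mu, np.sqrt(sigma2)

def mean_lifetime(mu, threshold):
    """Mean first-passage time of the drifted Wiener process to the threshold."""
    return threshold / mu
```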
DOE R&D Accomplishments Database
Wilczek, Frank; Turner, Michael S.
1990-09-01
If Peccei-Quinn (PQ) symmetry is broken after inflation, the initial axion angle is a random variable on cosmological scales; based on this fact, estimates of the relic-axion mass density give too large a value if the axion mass is less than about 10^-6 eV. This bound can be evaded if the Universe underwent inflation after PQ symmetry breaking and if the observable Universe happens to be a region where the initial axion angle was atypically small, θ1 ≲ (m_a/10^-6 eV)^0.59. We show that consideration of fluctuations induced during inflation severely constrains the latter alternative.
A correlation to estimate the velocity of convective currents in boilover.
Ferrero, Fabio; Kozanoglu, Bulent; Arnaldos, Josep
2007-05-08
The mathematical model proposed by Kozanoglu et al. [B. Kozanoglu, F. Ferrero, M. Muñoz, J. Arnaldos, J. Casal, Velocity of the convective currents in boilover, Chem. Eng. Sci. 61 (8) (2006) 2550-2556] for simulating heat transfer in hydrocarbon mixtures in the process that leads to boilover requires the initial value of the convective current's velocity through the fuel layer as an adjustable parameter. Here, a correlation for predicting this parameter based on the properties of the fuel (average ebullition temperature) and the initial thickness of the fuel layer is proposed.
NASA Technical Reports Server (NTRS)
Helder, Dennis; Thome, Kurtis John; Aaron, Dave; Leigh, Larry; Czapla-Myers, Jeff; Leisso, Nathan; Biggar, Stuart; Anderson, Nik
2012-01-01
A significant problem facing the optical satellite calibration community is limited knowledge of the uncertainties associated with fundamental measurements, such as surface reflectance, used to derive satellite radiometric calibration estimates. In addition, it is difficult to compare the capabilities of calibration teams around the globe, which leads to differences in the estimated calibration of optical satellite sensors. This paper reports on two recent field campaigns that were designed to isolate common uncertainties within and across calibration groups, particularly with respect to ground-based surface reflectance measurements. Initial results from these efforts suggest the uncertainties can be as low as 1.5% to 2.5%. In addition, methods for improving the cross-comparison of calibration teams are suggested that can potentially reduce the differences in the calibration estimates of optical satellite sensors.
NASA Astrophysics Data System (ADS)
Fontaine, G.; Brassard, P.; Dufour, P.; Tremblay, P.-E.
2015-06-01
The accretion-diffusion picture is the model par excellence for describing the presence of planetary debris polluting the atmospheres of relatively cool white dwarfs. Some important insights into the process may be derived using an approximate approach which combines static stellar models with estimates of diffusion timescales at the base of the outer convection zone or, in its absence, at the photosphere. Until recently, and to our knowledge, values of diffusion timescales in white dwarfs have all been obtained on the basis of the same physics as that developed initially by Paquette et al., including their diffusion coefficients and thermal diffusion coefficients. In view of the recent exciting discoveries of a plethora of metals (including some never seen before) polluting the atmospheres of an increasing number of cool white dwarfs, we felt that a new look at the estimates of settling timescales would be worthwhile. We thus provide improved estimates of diffusion timescales for all 27 elements from Li to Cu in the periodic table in a wide range of the surface gravity-effective temperature domain and for both DA and non-DA stars.
Pulkkinen, Aki; Cox, Ben T; Arridge, Simon R; Goh, Hwan; Kaipio, Jari P; Tarvainen, Tanja
2016-11-01
Estimation of the optical absorption and scattering of a target is an inverse problem associated with quantitative photoacoustic tomography. Conventionally, the problem is treated in two stages. First, images of the initial pressure distribution created by absorption of a light pulse are formed based on acoustic boundary measurements. Then, the optical properties are determined based on these photoacoustic images. The optical stage of the inverse problem can thus suffer from, for example, artefacts caused by the acoustic stage. These could be caused by imperfections in the acoustic measurement setting, an example of which is a limited-view acoustic measurement geometry. In this work, the forward model of quantitative photoacoustic tomography is treated as a coupled acoustic and optical model and the inverse problem is solved by using a Bayesian approach. The spatial distribution of the optical properties of the imaged target is estimated directly from the photoacoustic time series in varying acoustic detection and optical illumination configurations. It is numerically demonstrated that estimation of the optical properties of the imaged target is feasible in a limited-view acoustic detection setting.
Detection of water vapor on Jupiter
NASA Technical Reports Server (NTRS)
Larson, H. P.; Fink, U.; Treffers, R.; Gautier, T. N., III
1975-01-01
High-altitude (12.4 km) spectroscopic observations of Jupiter at 5 microns from the NASA 91.5 cm airborne infrared telescope have revealed 14 absorptions assigned to the rotation-vibration spectrum of water vapor. Preliminary analysis indicates a mixing ratio of about one part per million for the vapor phase of water. Estimates of temperature (greater than about 300 K) and pressure (less than 20 atm) suggest that the observations probe water deep in Jupiter's hot spots, which are responsible for its 5 micron flux. Model-atmosphere calculations based on radiative-transfer theory may change these initial estimates and provide a better physical picture of Jupiter's atmosphere below the visible cloud tops.
Preliminary design of a mobile lunar power supply
NASA Technical Reports Server (NTRS)
Schmitz, Paul C.; Kenny, Barbara H.; Fulmer, Christopher R.
1991-01-01
A preliminary design for a Stirling isotope power system for use as a mobile lunar power supply is presented. The performance and mass of the components required for the system are estimated. These estimates are based on power requirements and the operating environment. Optimization routines are used to determine minimum-mass operating points. Shielding requirements for the isotope system are given as a function of the allowed dose, distance from the source, and the time spent near the source. The technologies used in the power conversion and radiator systems are taken from ongoing research in the Civil Space Technology Initiative (CSTI) program.
Neuromorphic Event-Based 3D Pose Estimation
Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.
2016-01-01
Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30-60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
Variation between last-menstrual-period and clinical estimates of gestational age in vital records.
Qin, Cheng; Hsia, Jason; Berg, Cynthia J
2008-03-15
An accurate assessment of gestational age is vital to population-based research and surveillance in maternal and infant health. However, the quality of gestational age measurements derived from birth certificates has been in question. Using the 2002 US public-use natality file, the authors examined the agreement between estimates of gestational age based on the last menstrual period (LMP) and clinical estimates in vital records across durations of gestation and US states and explored reasons for disagreement. Agreement between the LMP and the clinical estimate of gestational age varied substantially across gestations and among states. Preterm births were more likely than term births to have disagreement between the two estimates. Maternal age, maternal education, initiation of prenatal care, order of livebirth, and use of ultrasound had significant independent effects on the disagreement between the two measures, regardless of gestational age, but these factors made little difference in the magnitude of gestational age group differences. Information available on birth certificates was not sufficient to understand this disparity. The lowest agreement between the LMP and the clinical estimate was observed among preterm infants born at 28-36 weeks' gestation, who accounted for more than 90% of total preterm births. This finding deserves particular attention and further investigation.
NASA Astrophysics Data System (ADS)
Poudel, Joemini; Matthews, Thomas P.; Mitsuhashi, Kenji; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.
2017-03-01
Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to a time-domain inverse source problem, where the initial pressure distribution is recovered from the measurements recorded on an aperture outside the support of the source. A major challenge in transcranial PACT brain imaging is to compensate for aberrations in the measured data due to the propagation of the photoacoustic wavefields through the skull. To properly account for these effects, a wave equation-based inversion method should be employed that can model the heterogeneous elastic properties of the medium. In this study, an iterative image reconstruction method for 3D transcranial PACT is developed based on the elastic wave equation. To accomplish this, a forward model based on a finite-difference time-domain discretization of the elastic wave equation is established. Subsequently, gradient-based methods are employed for computing penalized least squares estimates of the initial source distribution that produced the measured photoacoustic data. The developed reconstruction algorithm is validated and investigated through computer-simulation studies.
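The reconstruction step, penalized least-squares recovery of the initial source by gradient methods, can be sketched with a stand-in linear forward operator. The paper's operator is an elastic-wave FDTD solver with adjoint-computed gradients, so everything below is an illustrative toy.

import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(40, 20))                # toy linear forward model
x_true = np.zeros(20)
x_true[5] = 1.0                              # sparse "initial pressure"
y = H @ x_true + 0.01 * rng.normal(size=40)  # simulated measurements

lam, x = 0.1, np.zeros(20)
for _ in range(500):
    # gradient of ||H x - y||^2 + lam * ||x||^2
    x -= 1e-3 * (2.0 * H.T @ (H @ x - y) + 2.0 * lam * x)
print(np.round(x[5], 2))                     # -> close to 1.0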
Testing for handling bias in survival estimation for black brant
Sedinger, J.S.; Lindberg, M.S.; Rexstad, E.A.; Chelgren, N.D.; Ward, D.H.
1997-01-01
We used an ultrastructure approach in program SURVIV to test for, and remove, bias in survival estimates for the year following mass banding of female black brant (Branta bernicla nigricans). We used relative banding-drive size as the independent variable to control for handling effects in our ultrastructure models, which took the form S = S0(1 - βD), where β was the handling effect and D was the ratio of banding-drive size to the largest banding drive. Brant were divided into 3 classes: goslings, initial captures, and recaptures, based on their state at the time of banding, because we anticipated the potential for heterogeneity in model parameters among classes of brant. Among models examined, for which β was not constrained, a model with β constant across classes of brant and years, constant survival rates among years for initially captured brant but year-specific survival rates for goslings and recaptures, and year- and class-specific detection probabilities had the lowest Akaike Information Criterion (AIC). The handling effect, β, was -0.47 ± 0.13 SE, -0.14 ± 0.057, and -0.12 ± 0.049 for goslings, initially released adults, and recaptured adults, respectively. Gosling annual survival in the first year ranged from 0.738 ± 0.072 for the 1986 cohort to 0.260 ± 0.025 for the 1991 cohort. Inclusion of winter observations increased estimates of first-year survival rates by an average of 30%, suggesting that permanent emigration had an important influence on apparent survival, especially for later cohorts. We estimated annual survival for initially captured brant as 0.782 ± 0.013, while that for recaptures varied from 0.726 ± 0.034 to 0.900 ± 0.062. Our analyses failed to detect a negative effect of handling on survival of brant, which is consistent with a hypothesis of substantial inherent heterogeneity in post-fledging survival rates, such that individuals most likely to die as a result of handling also have lower inherent survival probabilities.
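The ultrastructure model quoted above is simple enough to evaluate directly. The sketch below writes the handling-effect symbol as beta (the symbol is garbled in this copy of the abstract, so β is a reconstruction); numbers are illustrative except the reported gosling handling effect of -0.47.

def adjusted_survival(s0, beta, drive_size, max_drive_size):
    """Ultrastructure model S = S0 * (1 - beta * D), with D the ratio of
    banding-drive size to the largest banding drive."""
    d = drive_size / max_drive_size
    return s0 * (1 - beta * d)

# With the reported gosling estimate beta = -0.47, a full-size drive (D = 1)
# raises apparent survival relative to a hypothetical baseline S0 = 0.50:
print(adjusted_survival(0.50, -0.47, 800, 800))   # -> 0.735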
NASA Astrophysics Data System (ADS)
Vulpiani, Gianfranco; Ripepe, Maurizio
2017-04-01
The detection and quantitative retrieval of ash plumes is of significant interest due to the environmental, climatic, and socioeconomic effects of ash fallout, which can cause hardship and damage in areas surrounding volcanoes and represents a serious hazard to aircraft. Real-time monitoring of such phenomena is crucial for initializing ash dispersion models. Ground-based and space-borne remote sensing observations provide essential information for scientific and operational applications. Satellite visible-infrared radiometric observations from geostationary platforms are usually exploited for long-range trajectory tracking and for measuring low-level eruptions. Their imagery is available every 10-30 min and suffers from a relatively poor spatial resolution. Moreover, the field of view of geostationary radiometric measurements may be blocked by water and ice clouds at higher levels, and the observations' overall utility is reduced at night. Ground-based microwave weather radars may represent an important tool for detecting and, to a certain extent, mitigating the hazards presented by ash clouds. The possibility of monitoring in all weather conditions, at a fairly high spatial resolution (less than a few hundred meters), and every few minutes after the eruption is the major advantage of using ground-based microwave radar systems. Ground-based weather radar systems can also provide data for estimating the ash volume, total mass, and height of eruption clouds. Previous methodological studies have investigated the possibility of using ground-based single- and dual-polarization radar systems for the remote sensing of volcanic ash clouds. In the present work, the methodology was revised to overcome some limitations related to the assumed microphysics. New scattering simulations based on the T-matrix solution technique were used to set up the parametric algorithms adopted to estimate the mass concentration and ash mean diameter. Furthermore, because quantitative estimation of the erupted materials in the proximity of the volcano's vent is crucial for initializing transportation models, a novel methodology for estimating a volcanic eruption's mass discharge rate (MDR), based on the combination of radar and a thermal camera, was developed. We show how it is possible to calculate the mass flow using radar-derived ash concentration and particle diameter at the base of the eruption column, together with the exit velocity estimated by the thermal camera. The proposed procedure was tested on four Etna eruption episodes that occurred in December 2015, as observed by the available network of C- and X-band radar systems. The results are congruent with other independent methodologies and observations. The agreement between the total erupted mass derived from the retrieved MDR and the plume concentration can be considered a self-consistent methodological assessment. Interestingly, the analysis of the polarimetric radar observations allowed us to derive some features of the ash plume, including the size of the eruption column and the height of the gas thrust region.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the initial mixing distance, is estimated by: Cp=25(Wi)/(T0.7 Q) where Cp is the peak concentration... equation: Tp=9.25×106 Wi/(QCp) where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...
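The two formulas that survive in this excerpt can be evaluated directly. The helper below is a sketch under the assumption that Wi, T, and Q carry the units the (truncated) regulation text defines; the inputs are illustrative only.

def peak_concentration(wi, t, q):
    """Excerpted formula: Cp = 25*Wi / (T**0.7 * Q)."""
    return 25.0 * wi / (t ** 0.7 * q)

def time_to_peak(wi, q, cp):
    """Excerpted formula: Tp = 9.25e6 * Wi / (Q * Cp), in hours."""
    return 9.25e6 * wi / (q * cp)

cp = peak_concentration(wi=1000.0, t=24.0, q=5000.0)   # illustrative inputs
print(cp, time_to_peak(1000.0, 5000.0, cp))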
Time series sightability modeling of animal populations.
ArchMiller, Althea A; Dorazio, Robert M; St Clair, Katherine; Fieberg, John R
2018-01-01
Logistic regression models, or "sightability models," fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in estimates similar to the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
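For readers unfamiliar with the modified Horvitz-Thompson step mentioned above, the sketch below shows its basic form: each detected group is divided by its model-predicted detection probability. The group sizes and probabilities are made-up toy values; the real workflow first fits the sightability model to the detection/non-detection data.

import numpy as np

def mht_abundance(group_sizes, detection_probs):
    """Each detected group contributes size / Pr(detection)."""
    return np.sum(np.asarray(group_sizes) / np.asarray(detection_probs))

groups = [3, 1, 5, 2]            # sizes of detected moose groups (toy data)
p_hat = [0.8, 0.4, 0.9, 0.6]     # detection probabilities from the model
print(mht_abundance(groups, p_hat))   # -> ~15.1 animals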
Breast-feeding patterns, time to initiation, and mortality risk among newborns in southern Nepal.
Mullany, Luke C; Katz, Joanne; Li, Yue M; Khatry, Subarna K; LeClerq, Steven C; Darmstadt, Gary L; Tielsch, James M
2008-03-01
Initiation of breast-feeding within 1 h after birth has been associated with reduced neonatal mortality in a rural Ghanaian population. In South Asia, however, breast-feeding patterns and low birth weight rates differ and this relationship has not been quantified. Data were collected during a community-based randomized trial of the impact of topical chlorhexidine antisepsis interventions on neonatal mortality and morbidity in southern Nepal. In-home visits were conducted on d 1-4, 6, 8, 10, 12, 14, 21, and 28 to collect longitudinal information on timing of initiation and pattern of breast-feeding. Multivariable regression modeling was used to estimate the association between death and breast-feeding initiation time. Analysis was based on 22,838 breast-fed newborns surviving to 48 h. Within 1 h of birth, 3.4% of infants were breast-fed and 56.6% were breast-fed within 24 h of birth. Partially breast-fed infants (72.6%) were at higher mortality risk [relative risk (RR) = 1.77; 95% CI = 1.32-2.39] than those exclusively breast-fed. There was a trend (P = 0.03) toward higher mortality with increasing delay in breast-feeding initiation. Mortality was higher among late (> or = 24 h) compared with early (< 24 h) initiators (RR = 1.41; 95% CI = 1.08-1.86) after adjustment for low birth weight, preterm birth, and other covariates. Improvements in breast-feeding practices in this setting may reduce neonatal mortality substantially. Approximately 7.7 and 19.1% of all neonatal deaths may be avoided with universal initiation of breast-feeding within the first day or hour of life, respectively. Community-based breast-feeding promotion programs should remain a priority, with renewed emphasis on early initiation in addition to exclusiveness and duration of breast-feeding.
Shell Buckling Design Criteria Based on Manufacturing Imperfection Signatures
NASA Technical Reports Server (NTRS)
Hilburger, Mark W.; Nemeth, Michael P.; Starnes, James H., Jr.
2004-01-01
An analysis-based approach for developing shell-buckling design criteria for laminated-composite cylindrical shells that accurately accounts for the effects of initial geometric imperfections is presented. With this approach, measured initial geometric imperfection data from six graphite-epoxy shells are used to determine a manufacturing-process-specific imperfection signature for these shells. This imperfection signature is then used as input into nonlinear finite-element analyses. The imperfection signature represents a "first-approximation" mean imperfection shape that is suitable for developing preliminary-design data. Comparisons of test data and analytical results obtained by using several different imperfection shapes are presented for selected shells. Overall, the results indicate that the analysis-based approach presented for developing reliable preliminary-design criteria has the potential to provide improved, less conservative buckling-load estimates, and to reduce the weight and cost of developing buckling-resistant shell structures.
NASA Astrophysics Data System (ADS)
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
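The batch estimation idea, minimizing the discrepancy between measured and predicted response time histories over the whole record, can be sketched with a toy surrogate. The single-degree-of-freedom model, noise level, and starting value below are illustrative assumptions; the paper's forward model is a full nonlinear FE analysis with DDM-computed sensitivities.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 400)
m, k_true = 1.0, 40.0
measured = np.cos(np.sqrt(k_true / m) * t) + 0.01 * rng.standard_normal(t.size)

def residuals(theta):
    """Batch residual: predicted minus measured response over the record."""
    k = theta[0]
    return np.cos(np.sqrt(k / m) * t) - measured

fit = least_squares(residuals, x0=[35.0])   # perturbed initial estimate
print(fit.x)                                # -> close to k_true = 40.0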
Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira
2015-09-17
In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of contractile properties of MUs and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image processing-based approaches may be useful in S-EMG analysis for extracting different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV for individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies.
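As a point of comparison for the MLE and image-based estimators discussed above, the snippet below computes CV from the two-channel delay found at the cross-correlation peak, a common baseline approach (not the paper's method). The signal, delay, and electrode spacing are synthetic.

import numpy as np

def conduction_velocity(ch1, ch2, fs, electrode_dist_m):
    """CV = distance / delay, with the delay taken from the peak of the
    cross-correlation between the two channels."""
    xc = np.correlate(ch2, ch1, mode="full")
    lag = np.argmax(xc) - (len(ch1) - 1)     # samples by which ch2 lags ch1
    return electrode_dist_m / (lag / fs)

fs = 2048.0
t = np.arange(0.0, 0.25, 1.0 / fs)
sig = np.sin(2 * np.pi * 80 * t) * np.exp(-((t - 0.1) ** 2) / 1e-3)
delay = int(0.002 * fs)                      # ~2 ms propagation delay
ch1, ch2 = sig[delay:], sig[:-delay]         # ch2 is a delayed copy of ch1
print(conduction_velocity(ch1, ch2, fs, 0.01))   # -> ~5 m/s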
Tuning a physically-based model of the air-sea gas transfer velocity
NASA Astrophysics Data System (ADS)
Jeffery, C. D.; Robinson, I. S.; Woolf, D. K.
Air-sea gas transfer velocities are estimated for one year using a 1-D upper-ocean model (GOTM) and a modified version of the NOAA-COARE transfer velocity parameterization. Tuning parameters are evaluated with the aim of bringing the physically based NOAA-COARE parameterization in line with current estimates, based on simple wind-speed-dependent models derived from bomb-radiocarbon inventories and deliberate tracer release experiments. We suggest that A = 1.3 and B = 1.0, for the sub-layer scaling parameter and the bubble-mediated exchange, respectively, are consistent with the global average CO2 transfer velocity k. Using these parameters and a simple 2nd-order polynomial approximation with respect to wind speed, we estimate a global annual average k for CO2 of 16.4 ± 5.6 cm/h when using global mean winds of 6.89 m/s from the NCEP/NCAR Reanalysis 1 (1954-2000). The tuned model can be used to predict the transfer velocity of any gas, with appropriate treatment of the dependence on molecular properties, including the strong solubility dependence of bubble-mediated transfer. For example, an initial estimate of the global average transfer velocity of DMS (a relatively soluble gas) is only 11.9 cm/h, whilst for less soluble methane the estimate is 18.0 cm/h.
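As a sketch of how such a tuned quadratic wind-speed parameterization is used, the snippet below reproduces the quoted global-mean CO2 transfer velocity; the coefficient is back-calculated from the numbers in the abstract and is not taken from the paper.

def k_co2(u10, a=0.345):
    """2nd-order polynomial approximation k = a * U10**2 (cm/h from m/s);
    the coefficient a is an illustrative back-fit, not a published value."""
    return a * u10 ** 2

print(k_co2(6.89))   # -> ~16.4 cm/h at the global mean wind of 6.89 m/s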
Code of Federal Regulations, 2011 CFR
2011-04-01
... appropriate to the nature and phase of the work and sufficient to allow comparisons to the Indian tribe or... changes such as labor, material, and transportation costs. (c) The Secretary shall provide the initial... estimates based on changed or additional information such as the following: (1) Actual subcontract bids; (2...
Community-based agroforestry initiatives in Nicaragua and Costa Rica
David I. King; Richard B. Chandler; John H. Rappole; Raul Raudales; Rich. Turbey
2012-01-01
Curbing the loss of biodiversity is a primary challenge to conservationists. Estimates of current rates of species loss range from 14,000 - 40,000 species per year (Hughes et al., 2007), and although a variety of factors are implicated, habitat loss is repeatedly cited as an important cause (Sala et al., 2000). Most ecosystems are under some degree of threat, however...
Fitch, Kevin F; Doyle, James F
2005-09-01
In Elmhurst Memorial Healthcare's capital planning method, future replacement costs of assets are estimated by inflating their historical cost over their lives. A balanced model is created initially, based on the assumption that the rates of revenue growth, inflation, investment income, and interest expense are all equal. The numbers can then be adjusted to account for possible variations, such as excesses or shortages in investment or debt balances.
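A minimal sketch of the replacement-cost step described above, compounding an asset's historical cost over its life; the rate and numbers are illustrative only.

def future_replacement_cost(historical_cost, annual_inflation, years):
    """Inflate the historical cost over the asset's remaining life."""
    return historical_cost * (1.0 + annual_inflation) ** years

print(future_replacement_cost(250_000, 0.03, 10))   # -> ~335,979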
Holtschlag, David J.
2009-01-01
Two-dimensional hydrodynamic and transport models were applied to a 34-mile reach of the Ohio River from Cincinnati, Ohio, upstream to Meldahl Dam near Neville, Ohio. The hydrodynamic model was based on the generalized finite-element hydrodynamic code RMA2 to simulate depth-averaged velocities and flow depths. The generalized water-quality transport code RMA4 was applied to simulate the transport of vertically mixed, water-soluble constituents that have a density similar to that of water. Boundary conditions for hydrodynamic simulations included water levels at the U.S. Geological Survey water-level gaging station near Cincinnati, Ohio, and flow estimates based on a gate rating at Meldahl Dam. Flows estimated on the basis of the gate rating were adjusted with limited flow-measurement data to more nearly reflect current conditions. An initial calibration of the hydrodynamic model was based on data from acoustic Doppler current profiler surveys and water-level information. These data provided flows, horizontal water velocities, water levels, and flow depths needed to estimate hydrodynamic parameters related to channel resistance to flow and eddy viscosity. Similarly, dye concentration measurements from two dye-injection sites on each side of the river were used to develop initial estimates of transport parameters describing mixing and dye-decay characteristics needed for the transport model. A nonlinear regression-based approach was used to estimate parameters in the hydrodynamic and transport models. Parameters describing channel resistance to flow (Manning's "n") were estimated in areas of deep and shallow flows as 0.0234 and 0.0275, respectively. The estimated RMA2 Peclet number, which is used to dynamically compute eddy-viscosity coefficients, was 38.3, which is in the range of 15 to 40 that is typically considered appropriate. Resulting hydrodynamic simulations explained 98.8 percent of the variability in depth-averaged flows, 90.0 percent of the variability in water levels, 93.5 percent of the variability in flow depths, and 92.5 percent of the variability in velocities. Estimates of the water-quality-transport-model parameters describing turbulent mixing characteristics converged to different values for the two dye-injection reaches. For the Big Indian Creek dye-injection study, an RMA4 Peclet number of 37.2 was estimated, which was within the recommended range of 15 to 40 and similar to the RMA2 Peclet number. The estimated dye-decay coefficient was 0.323. Simulated dye concentrations explained 90.2 percent of the variations in measured dye concentrations for the Big Indian Creek injection study. For the dye-injection reach starting downstream from Twelvemile Creek, however, an RMA4 Peclet number of 173 was estimated, which is far outside the recommended range. Simulated dye concentrations were similar to measured concentration distributions at the first four transects downstream from the dye-injection site that were considered vertically mixed. Farther downstream, however, simulated concentrations did not match the attenuation of maximum concentrations or the cross-channel transport of dye that was measured. The difficulty of determining a consistent RMA4 Peclet number was related to the two-dimensional model assumption that velocity distributions are closely approximated by their depth-averaged values. Analysis of velocity data showed significant variations in velocity direction with depth in channel reaches with curvature.
Channel irregularities (including curvatures, depth irregularities, and shoreline variations) apparently produce transverse currents that affect the distribution of constituents but are not fully accounted for in a two-dimensional model. The two-dimensional flow model, using channel resistance-to-flow parameters of 0.0234 and 0.0275 for deep and shallow areas, respectively, and an RMA2 Peclet number of 38.3, together with the RMA4 transport model with a Peclet number of 37.2, may have utility for emergency-planning purposes. Emergency-response efforts would be enhanced by continuous streamgaging records downstream from Meldahl Dam, real-time water-quality monitoring, and three-dimensional modeling. Decay coefficients are constituent specific.
Spatial resolution in visual memory.
Ben-Shalom, Asaf; Ganel, Tzvi
2015-04-01
Representations in visual short-term memory are considered to contain relatively elaborated information on object structure. Conversely, representations in earlier stages of the visual hierarchy are thought to be dominated by a sensory-based, feed-forward buildup of information. In four experiments, we compared the spatial resolution of different object properties between two points in time along the processing hierarchy in visual short-term memory. Subjects were asked either to estimate the distance between objects or to estimate the size of one of the objects' features under two experimental conditions, of either a short or a long delay period between the presentation of the target stimulus and the probe. When different objects were referred to, similar spatial resolution was found for the two delay periods, suggesting that initial processing stages are sensitive to object-based properties. Conversely, superior resolution was found for the short, as compared with the long, delay when features were referred to. These findings suggest that initial representations in visual memory are hybrid in that they allow fine-grained resolution for object features alongside normal visual sensitivity to the segregation between objects. The findings are also discussed in reference to the distinction made in earlier studies between visual short-term memory and iconic memory.
NASA Astrophysics Data System (ADS)
Becker-Reshef, I.; Justice, C. O.; Vermote, E.
2012-12-01
Up-to-date, reliable, global information on crop production prospects is indispensable for informing and regulating grain markets and for instituting effective agricultural policies. The recent price surges in the global grain markets were in large part triggered by extreme weather events in primary grain-export countries. These events raise important questions about the accuracy of current production forecasts and their role in market fluctuations, and highlight the deficiencies in the state of global agricultural monitoring. Satellite-based earth observations are increasingly utilized as a tool for monitoring agricultural production, as they offer cost-effective, daily, global information on crop growth and extent, and their utility for crop production forecasting has long been demonstrated. Within this context, the Group on Earth Observations developed the Global Agricultural Monitoring (GEOGLAM) initiative, which was adopted by the G20 as part of the action plan on food price volatility and agriculture. The goal of GEOGLAM is to enhance agricultural production estimates through the use of Earth observations. This talk will explore the potential contribution of EO-based methods for improving the accuracy of early production estimates of main export countries within the framework of GEOGLAM.
Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J
2016-02-01
A novel reparametrization-based INLA approach is presented as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in the multivariate animal model. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlation between different traits, which significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of the genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of multivariate animal models using modified Cholesky decompositions. This reparametrization-based approach is used in the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of a multivariate animal model. Immediate benefits are: (1) avoiding the difficulty of finding good starting values for the analysis, which can be a problem, for example, in Restricted Maximum Likelihood (REML); (2) Bayesian estimation of (co)variance components using INLA executes faster than Markov Chain Monte Carlo (MCMC), especially when realized relationship matrices are dense. The slight drawback is that priors for covariance matrices are assigned to elements of the Cholesky factor rather than directly to the covariance matrix elements, as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional methods such as MCMC and REML. We also present results obtained from simulated data sets with replicates and from field data in rice.
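The reparametrization idea can be shown in a two-trait toy case: estimate unconstrained Cholesky-factor elements (with log-diagonals) instead of covariance entries, so any parameter vector maps to a valid positive-definite matrix. This is a generic sketch of the Cholesky trick, not the paper's full INLA machinery.

import numpy as np

def theta_to_cov(theta):
    """theta = (log d1, log d2, off-diagonal) -> 2x2 covariance matrix."""
    L = np.array([[np.exp(theta[0]), 0.0],
                  [theta[2], np.exp(theta[1])]])
    return L @ L.T   # symmetric positive definite by construction

print(theta_to_cov(np.array([0.1, -0.2, 0.5])))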
Inferring the source of evaporated waters using stable H and O isotopes
NASA Astrophysics Data System (ADS)
Bowen, G. J.; Putman, A.; Brooks, J. R.; Bowling, D. R.; Oerter, E.; Good, S. P.
2017-12-01
Stable isotope ratios of H and O are widely used to identify the source of water, e.g., in aquifers, river runoff, soils, plant xylem, and plant-based beverages. In situations where the sampled water is partially evaporated, its isotope values will have evolved along an evaporation line (EL) in δ2H/δ18O space, and back-correction along the EL to its intersection with a meteoric water line (MWL) has been used to estimate the source water's isotope ratios. Several challenges and potential pitfalls exist with traditional approaches to this problem, including the potential for bias from a commonly used regression-based approach for EL slope estimation and incomplete estimation of uncertainty in most studies. We suggest the value of a model-based approach to EL estimation, and introduce a mathematical framework that eliminates the need to explicitly estimate the EL-MWL intersection, simplifying analysis and facilitating more rigorous uncertainty estimation. We apply this analysis framework to data from 1,000 lakes sampled in EPA's 2007 National Lakes Assessment. We find that the data for most lakes are consistent with a water source similar to annual runoff, estimated from monthly precipitation and evaporation within the lake basin. Strong evidence for both summer- and winter-biased sources exists, however, with winter bias pervasive in most snow-prone regions. The new analytical framework should improve the rigor of source-water inference from evaporated samples in ecohydrology and related sciences, and our initial results from U.S. lakes suggest that previous interpretations of lakes as unbiased isotope integrators may only be valid in certain climate regimes.
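The classical back-correction this abstract improves on is easy to state in code: intersect the evaporation line through a sample with a meteoric water line. The sketch below uses the global MWL (δ2H = 8 δ18O + 10) and an assumed EL slope; the paper's contribution is precisely to avoid relying on this explicit intersection.

def source_water(d18o, d2h, el_slope=4.5, mwl_slope=8.0, mwl_intercept=10.0):
    """Intersect the EL through (d18o, d2h) with the MWL to estimate the
    unevaporated source composition (EL slope is an assumed value)."""
    d18o_src = (d2h - el_slope * d18o - mwl_intercept) / (mwl_slope - el_slope)
    return d18o_src, mwl_slope * d18o_src + mwl_intercept

print(source_water(d18o=-2.0, d2h=-30.0))   # -> (~ -8.9, ~ -60.9)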
NASA Astrophysics Data System (ADS)
Rödenbeck, Christian; Bakker, Dorothee; Gruber, Nicolas; Iida, Yosuke; Jacobson, Andy; Jones, Steve; Landschützer, Peter; Metzl, Nicolas; Nakaoka, Shin-ichiro; Olsen, Are; Park, Geun-Ha; Peylin, Philippe; Rodgers, Keith; Sasse, Tristan; Schuster, Ute; Shutler, James; Valsala, Vinu; Wanninkhof, Rik; Zeng, Jiye
2016-04-01
Using measurements of the surface-ocean CO2 partial pressure (pCO2) from the SOCAT and LDEO databases and 14 different pCO2 mapping methods recently collated by the Surface Ocean pCO2 Mapping intercomparison (SOCOM) initiative, variations in regional and global sea-air CO2 fluxes are investigated. Though the available mapping methods use widely different approaches, we find relatively consistent estimates of regional pCO2 seasonality, in line with previous estimates. In terms of interannual variability (IAV), all mapping methods estimate the largest variations to occur in the Eastern equatorial Pacific. Despite considerable spread in the detailed variations, mapping methods that fit the data more closely also tend to agree more closely with each other in regional averages. Encouragingly, this includes mapping methods belonging to complementary types, taking variability either directly from the pCO2 data or indirectly from driver data via regression. From a weighted ensemble average, we find an IAV amplitude of the global sea-air CO2 flux (the standard deviation over the analysis period) that is larger than simulated by biogeochemical process models. On a decadal perspective, the global ocean CO2 uptake is estimated to have gradually increased since about 2000, with little decadal change prior to that. The weighted mean net global ocean CO2 sink estimated by the SOCOM ensemble is about -1.75 PgC per year over the analysis period, consistent within uncertainties with estimates from ocean-interior carbon data or atmospheric oxygen trends. Using data-based sea-air CO2 fluxes in atmospheric CO2 inversions also helps to better constrain land-atmosphere CO2 fluxes.
Estimating the cost of epilepsy in Europe: a review with economic modeling.
Pugliatti, Maura; Beghi, Ettore; Forsgren, Lars; Ekman, Mattias; Sobocki, Patrik
2007-12-01
Based on available epidemiologic, health economic, and international population statistics literature, the cost of epilepsy in Europe was estimated. Europe was defined as the 25 European Union member countries, Iceland, Norway, and Switzerland. Guidelines for epidemiological studies on epilepsy were used for a case definition. A bottom-up prevalence-based cost-of-illness approach, the societal perspective for including the cost items, and the human capital approach as valuation principle for indirect costs were used. The cost estimates were based on selected studies with common methodology and valuation principles. The estimated prevalence of epilepsy in Europe in 2004 was 4.3-7.8 per 1,000. The estimated total cost of the disease in Europe was €15.5 billion in 2004, indirect cost being the single most dominant cost category (€8.6 billion). Direct health care costs were €2.8 billion, outpatient care comprising the largest part (€1.3 billion). Direct nonmedical cost was €4.2 billion. That of antiepileptic drugs was €400 million. The total cost per case was €2,000-11,500 and the estimated cost per European inhabitant was €33. Epilepsy is a relevant socioeconomic burden at individual, family, health services, and societal level in Europe. The greater proportion of such burden is outside the formal health care sector, antiepileptic drugs representing a smaller proportion. Lack of economic data from several European countries and other methodological limitations make this report an initial estimate of the cost of epilepsy in Europe. Prospective incidence cost-of-illness studies from well-defined populations and common methodology are encouraged.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geiger, J.; Lisell, L.; Mosey, G.
2013-10-01
The U.S. Environmental Protection Agency (EPA), in accordance with the RE-Powering America's Land initiative through the Region 6 contract, selected Ft. Hood Army Base in Killeen, Texas, for a feasibility study of renewable energy production. The National Renewable Energy Laboratory (NREL) provided technical assistance for this project. The purpose of this study is to assess the site for possible photovoltaic (PV) system installations and estimate the cost, performance, and site impacts of different PV options. In addition, the report recommends financing options that could assist in the implementation of a PV system at the site.
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data, and groundwater level data. This study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher-accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information for inferring the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
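A minimal EnKF analysis step of the kind this study builds on is sketched below: a textbook stochastic update of a parameter ensemble against perturbed observations. The observation operator and numbers are toy stand-ins for the unsaturated-flow model.

import numpy as np

def enkf_update(ensemble, h, obs, obs_err_std, rng):
    """ensemble: (n_members, n_params); h maps params -> predicted obs."""
    preds = np.array([h(m) for m in ensemble])           # (n_members, n_obs)
    perturbed = obs + obs_err_std * rng.standard_normal(preds.shape)
    a = ensemble - ensemble.mean(0)
    b = preds - preds.mean(0)
    cov_xy = a.T @ b / (len(ensemble) - 1)
    cov_yy = b.T @ b / (len(ensemble) - 1) + obs_err_std**2 * np.eye(preds.shape[1])
    gain = cov_xy @ np.linalg.inv(cov_yy)                # Kalman gain
    return ensemble + (perturbed - preds) @ gain.T

rng = np.random.default_rng(0)
ens = rng.normal(0.3, 0.1, size=(50, 1))                 # prior parameter guesses
h = lambda m: np.array([2.0 * m[0]])                     # toy observation operator
print(enkf_update(ens, h, np.array([1.0]), 0.05, rng).mean())  # -> ~0.5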
Stacey, Paul E.; Greening, Holly; Kremer, James N.; Peterson, David; Tomasko, David A.; Valigura, Richard A.; Alexander, Richard B.; Castro, Mark S.; Meyers, Tilden P.; Paerl, Hans W.; Stacey, Paul E.; Turner, R. Eugene
2001-01-01
A NOAA project was initiated in 1998, with support from the U.S. EPA, to develop state-of-the-art estimates of atmospheric N deposition to estuarine watersheds and water surfaces and its delivery to the estuaries. Work groups were formed to address N deposition rates, indirect (from the watershed) yields from atmospheric and other anthropogenic sources, and direct deposition on the estuarine waterbodies, and to evaluate the levels of uncertainty within the estimates. Watershed N yields were estimated using both a land-use-based process approach and a national (SPARROW) model, compared to each other, and compared to estimates of N yield from the literature. The total N yields predicted by the national model were similar to values found in the literature, and the land-use-derived estimates were consistently higher. Atmospheric N yield estimates were within a similar range for the two approaches, but tended to be higher in the land-use-based estimates and were not well correlated. Median atmospheric N yields were around 15% of the total N yield for both groups, but ranged as high as 60% when both direct and indirect deposition were considered. Although not the dominant source of anthropogenic N, atmospheric N is, and will undoubtedly continue to be, an important factor in culturally eutrophied estuarine systems, warranting additional research and management attention.
McClellan, Sean R; Panattoni, Laura; Chan, Albert S; Tai-Seale, Ming
2016-03-01
Few studies have examined the association between patient-initiated electronic messaging (e-messaging) and clinical outcomes in fee-for-service settings. The objective was to estimate the association between patient-initiated e-messages and quality of care among patients with diabetes and hypertension. This was a longitudinal observational study from 2009 to 2013 in a large multispecialty practice in California that compensated providers fee-for-service. In March 2011, the medical group eliminated a $60/year patient user fee for e-messaging and established a provider payment of $3-5 per patient-initiated e-message. Quality of care for patients initiating e-messages was compared before and after March 2011, relative to nonmessaging patients. Propensity score weighting accounted for differences between e-messaging and nonmessaging patients in generalized estimating equations. Participants were patients with diabetes (N=4232) or hypertension (N=15,463) who had activated their online portal but not e-messaged before e-messaging became free. Quality of care included HEDIS-based process measures for hemoglobin A1c (HbA1c), blood pressure, low-density lipoprotein (LDL), nephropathy, and retinopathy tests, and outcome measures for HbA1c, blood pressure, and LDL. E-messaging was measured as counts of patient-initiated e-message threads sent to providers. Patients were categorized into quartiles by e-messaging frequency. The probability of annually completing indicated tests increased by 1%-7% for e-messaging patients, depending on the outcome and e-messaging frequency. E-messaging was associated with small improvements in HbA1c and LDL for some patients with diabetes. Patient-initiated e-messaging may increase the likelihood of completing recommended tests, but may not be sufficient to improve clinical outcomes for most patients with diabetes or hypertension without additional interventions.
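The propensity-weighting step can be sketched generically: fit a model of who e-messages as a function of covariates, then weight comparison patients by p/(1-p) so their covariate mix matches the e-messaging group. The data and covariate set below are toys; the study's actual specification is richer.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 3))                        # toy patient covariates
emessaging = (x[:, 0] + rng.normal(size=200)) > 0    # toy "treatment" indicator

p = LogisticRegression().fit(x, emessaging).predict_proba(x)[:, 1]
weights = np.where(emessaging, 1.0, p / (1.0 - p))   # ATT-style weights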
Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.
Chen, Jing; Zhang, Yi; Xue, Wei
2018-04-28
In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons, and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. Firstly, compared with fingerprint-based methods, the UILoc system can build a fingerprint database automatically, without any site survey, and the database is applied in the fingerprint localization algorithm. Secondly, since the initial position is vital to the system, UILoc provides the basic location estimate through the pedestrian dead reckoning (PDR) method. To provide accurate initial localization, this paper proposes an initial localization module: a weighted fusion algorithm combining a k-nearest neighbors (KNN) algorithm and a least squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that UILoc can provide accurate positioning; the average localization error is about 1.1 m in the steady state, and the maximum error is 2.77 m.
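The KNN part of the initial-localization module can be sketched as weighted fingerprint matching. The RSS database, positions, and inverse-distance weighting below are toy assumptions; the real system fuses this with a least-squares estimate from iBeacons.

import numpy as np

def knn_locate(fingerprint_rss, fingerprint_xy, observed_rss, k=3):
    """Weighted average of the positions of the K closest RSS fingerprints."""
    d = np.linalg.norm(fingerprint_rss - observed_rss, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)                 # inverse-distance weights
    return (fingerprint_xy[nearest] * w[:, None]).sum(0) / w.sum()

rss_db = np.array([[-40, -60, -70], [-55, -50, -65], [-70, -62, -45]])
xy_db = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 8.0]])
print(knn_locate(rss_db, xy_db, np.array([-50, -55, -66]), k=2))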
Chen, Xinjian; Udupa, Jayaram K.; Alavi, Abass; Torigian, Drew A.
2013-01-01
Image segmentation methods may be classified into two categories: purely image based and model based. Each of these two classes has its own advantages and disadvantages. In this paper, we propose a novel synergistic combination of the image based graph-cut (GC) method with the model based ASM method to arrive at the GC-ASM method for medical image segmentation. A multi-object GC cost function is proposed which effectively integrates the ASM shape information into the GC framework. The proposed method consists of two phases: model building and segmentation. In the model building phase, the ASM model is built and the parameters of the GC are estimated. The segmentation phase consists of two main steps: initialization (recognition) and delineation. For initialization, an automatic method is proposed which estimates the pose (translation, orientation, and scale) of the model, and obtains a rough segmentation result which also provides the shape information for the GC method. For delineation, an iterative GC-ASM algorithm is proposed which performs finer delineation based on the initialization results. The proposed methods are implemented to operate on 2D images and evaluated on clinical chest CT, abdominal CT, and foot MRI data sets. The results show the following: (a) An overall delineation accuracy of TPVF > 96%, FPVF < 0.6% can be achieved via GC-ASM for different objects, modalities, and body regions. (b) GC-ASM improves over ASM in its accuracy and precision to search region. (c) GC-ASM requires far fewer landmarks (about 1/3 of ASM) than ASM. (d) GC-ASM achieves full automation in the segmentation step compared to GC which requires seed specification and improves on the accuracy of GC. (e) One disadvantage of GC-ASM is its increased computational expense owing to the iterative nature of the algorithm. PMID:23585712
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to stratum variances for wheat in the U.S. Great Plains and is evaluated on the basis of the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.
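Once stratum variances are in hand, the optimum (Neyman) allocation step the abstract refers to is a one-liner: sample sizes proportional to stratum size times stratum standard deviation. Sizes and standard deviations below are illustrative.

import numpy as np

def neyman_allocation(n_total, stratum_sizes, stratum_stds):
    """Neyman allocation: n_h proportional to N_h * S_h."""
    w = np.asarray(stratum_sizes) * np.asarray(stratum_stds)
    return np.round(n_total * w / w.sum()).astype(int)

print(neyman_allocation(100, [500, 300, 200], [12.0, 8.0, 20.0]))  # -> [48 19 32]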
NASA Astrophysics Data System (ADS)
Visacro, Silverio; Guimaraes, Miguel; Murta Vale, Maria Helena
2017-12-01
Original simultaneous records of currents, close electric field, and high-speed videos of natural negative cloud-to-ground lightning striking the tower of Morro do Cachimbo Station are used to reveal typical features of upward positive leaders before the attachment, including their initiation and mode of propagation. According to the results, upward positive leaders initiate some hundreds of microseconds prior to the return stroke, while a continuous uprising current of about 4 A and superimposed pulses of a few tens of amperes flow along the tower. Upon leader initiation, the electric field measured 50 m away from the tower at ground level is about 60 kV/m. The corresponding average field roughly estimated 0.5 m above the tower top is higher than 0.55 MV/m. As in laboratory experiments, the common propagation mode of upward positive leaders is developing continuously, without steps, from their initiation. Unlike downward negative leaders, upward positive leaders typically do not branch off, though they can bifurcate under the effect of a downward negative leader's secondary branch approaching their lateral surface. The upward positive leader's estimated average two-dimensional propagation speed, in the range of 0.06 × 10^6 to 0.16 × 10^6 m/s, has the same order of magnitude as that of downward negative leaders. Apparently, the speed tends to increase just before attachment.
Mortality along the continuum of HIV care in Rwanda: a model-based analysis.
Bendavid, Eran; Stauffer, David; Remera, Eric; Nsanzimana, Sabin; Kanters, Steve; Mills, Edward J
2016-12-01
HIV is the leading cause of death among adults in sub-Saharan Africa. However, mortality along the HIV care continuum is poorly described. We combine demographic, epidemiologic, and health services data to estimate where people with HIV are dying along Rwanda's care continuum. We calibrated an age-structured HIV disease and transmission stochastic simulation model to the epidemic in Rwanda. We estimate mortality among HIV-infected individuals in the following states: untested, tested without establishing care in an antiretroviral therapy (ART) program (unlinked), in care before initiating ART (pre-ART), lost to follow-up (LTFU) following ART initiation, and retained in active ART care. We estimated mortality among people living with HIV in Rwanda through 2025 under current conditions, and with improvements to the HIV care continuum. In 2014, the greatest portion of deaths occurred among those untested (35.4%), followed by those on ART (34.1%), reflecting the large increase in the population on ART. Deaths among those LTFU made up 11.8% of all deaths among HIV-infected individuals in 2014; in the base case this portion increased to 18.8% in 2025, while the contribution to mortality declined among those untested, unlinked, and in pre-ART. In our model, only combined improvements to multiple aspects of the HIV care continuum were projected to reduce the total number of deaths among those with HIV, estimated at 8,177 in 2014, rising to 10,659 in the base case, and declining to 5,691 with combined improvements in 2025. Mortality among those untested for HIV contributes a declining portion of deaths among HIV-infected individuals in Rwanda, but the portion of deaths among those LTFU is expected to increase the most over the next decade. Combined improvements to the HIV care continuum might be needed to reduce the number of deaths among those with HIV.
NASA Astrophysics Data System (ADS)
Khwaja, Tariq S.; Mazhar, Mohsin Ali; Niazi, Haris Khan; Reza, Syed Azer
2017-06-01
In this paper, we present the design of a proposed optical rangefinder to determine the distance of a semi-reflective target from the sensor module. The sensor module deploys a simple Tunable Focus Lens (TFL), a Laser Source (LS) with a Gaussian Beam profile, and a digital beam profiler/imager to achieve its desired operation. We show that, owing to the nature of existing measurement methodologies, prior attempts to use a simple TFL to estimate target distance mostly deliver "one-shot" distance measurement estimates instead of obtaining and using a larger dataset, which can significantly reduce the effect of some largely incorrect individual data points on the final distance estimate. Using a measurement dataset and calculating averages also helps smooth out measurement errors in individual data points by effectively low-pass filtering unexpectedly odd measurement offsets. In this paper, we show that a simple setup deploying an LS, a TFL, and a beam profiler or imager is capable of delivering an entire measurement dataset, thus effectively mitigating the effects on measurement accuracy associated with "one-shot" measurement techniques. In the proposed technique, a Gaussian Beam from an LS passes through the TFL. Tuning the focal length of the TFL alters the spot size of the beam at the beam imager plane. Recording these different spot radii at the plane of the beam profiler for each unique setting of the TFL provides a measurement dataset from which a significantly improved estimate of the target distance can be obtained, as opposed to relying on a single measurement. We show that an iterative least-squares curve fit on the recorded data allows us to estimate distances of remote objects very precisely. Using some basic ray-optics-based approximations, we also obtain an initial seed value for the distance estimate and subsequently use this value to obtain a more precise estimate through iterative residual reduction in the least-squares sense. In our experiments, we use a MEMS-based Digital Micromirror Device (DMD) as a beam imager/profiler, as it delivers an accurate estimate of a Gaussian Beam profile. The proposed method, its working, and the distance estimation methodology are discussed in detail. As a proof of concept, we back our claims with initial experimental results.
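The dataset-plus-curve-fit idea can be sketched with a simple ray-optics model: for a collimated input beam of radius r0, the beam radius at a target a distance z past the lens is roughly r0|1 - z/f|, so sweeping the TFL focal length f and fitting the recorded radii yields z. The model and every number below are illustrative assumptions, not the authors' exact formulation.

import numpy as np
from scipy.optimize import curve_fit

r0 = 2e-3                                    # 2 mm beam radius at the lens
def spot_radius(f, z):
    """Geometric beam radius at distance z for TFL focal length f."""
    return r0 * np.abs(1.0 - z / f)

rng = np.random.default_rng(0)
z_true = 1.5                                 # metres (unknown in practice)
f_settings = np.linspace(0.5, 3.0, 12)       # TFL focal-length sweep
measured = spot_radius(f_settings, z_true) + 2e-5 * rng.standard_normal(12)

(z_fit,), _ = curve_fit(spot_radius, f_settings, measured, p0=[1.0])
print(z_fit)   # -> ~1.5, the least-squares distance estimate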
Control of AUVs using differential flatness theory and the derivative-free nonlinear Kalman Filter
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Raffo, Guilerme
2015-12-01
The paper proposes nonlinear control and filtering for Autonomous Underwater Vessels (AUVs) based on differential flatness theory and on the use of the Derivative-free nonlinear Kalman Filter. First, it is shown that the 6-DOF dynamic model of the AUV is differentially flat. This enables its transformation into the linear canonical (Brunovsky) form and facilitates the design of a state feedback controller. A problem that has to be dealt with is the uncertainty about the parameters of the AUV's dynamic model, as well as the external perturbations which affect its motion. To cope with this, it is proposed to use a disturbance observer which is based on the Derivative-free nonlinear Kalman Filter. The considered filtering method consists of the standard Kalman Filter recursion applied to the linearized model of the vessel and of an inverse transformation based on differential flatness theory, which makes it possible to obtain estimates of the state variables of the initial nonlinear model of the vessel. The Kalman Filter-based disturbance observer performs simultaneous estimation of the non-measurable state variables of the AUV and of the perturbation terms that affect its dynamics. By estimating such disturbances, their compensation is also achieved through suitable modification of the feedback control input. The efficiency of the proposed AUV control and estimation scheme is confirmed through simulation experiments.
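A minimal Python sketch of the disturbance-observer idea follows, on a single flat-output channel: a standard Kalman filter runs on the Brunovsky (chain-of-integrators) model augmented with a constant-disturbance state, and the estimated disturbance is subtracted from the state-feedback input. The matrices, gains, and noise levels are illustrative stand-ins, not the paper's 6-DOF AUV model.

    import numpy as np

    dt = 0.01
    # state [x, v, d]: position, velocity, lumped disturbance entering as acceleration
    A = np.array([[1, dt, 0.5 * dt**2],
                  [0, 1,  dt         ],
                  [0, 0,  1          ]])
    B = np.array([0.5 * dt**2, dt, 0.0])
    H = np.array([[1.0, 0.0, 0.0]])           # only position is measured
    Q = np.diag([1e-8, 1e-6, 1e-4])           # process noise (illustrative)
    R = np.array([[1e-4]])                    # measurement noise (illustrative)

    x_true = np.array([1.0, 0.0, 0.5])        # true disturbance d = 0.5, unknown
    x_hat, P = np.zeros(3), np.eye(3)
    k1, k2 = 20.0, 9.0                        # stabilizing feedback gains

    rng = np.random.default_rng(1)
    for _ in range(2000):
        # feedback on the flat output, with estimated-disturbance compensation
        u = -k1 * x_hat[0] - k2 * x_hat[1] - x_hat[2]
        x_true = A @ x_true + B * u
        z = H @ x_true + rng.normal(0, 1e-2, 1)
        # Kalman predict / update
        x_hat = A @ x_hat + B * u
        P = A @ P @ A.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x_hat = x_hat + K @ (z - H @ x_hat)
        P = (np.eye(3) - K @ H) @ P
    print("estimated disturbance:", round(float(x_hat[2]), 2))   # ~0.5

Because the disturbance is carried as a state, its estimate converges alongside the velocity estimate and the control input cancels it, which is the compensation mechanism the abstract describes.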
Estimating the costs of human space exploration
NASA Technical Reports Server (NTRS)
Mandell, Humboldt C., Jr.
1994-01-01
The plan for NASA's new exploration initiative has the following strategic themes: (1) incremental, logical evolutionary development; (2) economic viability; and (3) excellence in management. The cost estimation process is involved in all of these themes, and each depends upon the engineering cost estimator for success. The purpose of this paper is to articulate the issues associated with beginning this major new government initiative, to show how NASA intends to resolve them, and finally to demonstrate the vital importance of a leadership role by the cost estimation community.
Chaudhury, Sumona; Arlington, Lauren; Brenan, Shelby; Kairuki, Allan Kaijunga; Meda, Amunga Robson; Isangula, Kahabi G; Mponzi, Victor; Bishanga, Dunstan; Thomas, Erica; Msemo, Georgina; Azayo, Mary; Molinier, Alice; Nelson, Brett D
2016-12-01
Helping Babies Breathe (HBB) has become the gold standard globally for training birth attendants in neonatal resuscitation in low-resource settings, in an effort to reduce early newborn asphyxia and mortality. The purpose of this study was to conduct a first-ever activity-based cost analysis of at-scale HBB program implementation and initial follow-up in a large region of Tanzania, and to evaluate the costs of national scale-up, as one component of a multi-method external evaluation of the implementation of HBB at scale in Tanzania. We used activity-based costing to examine budget expense data during the two-month implementation and follow-up of HBB in one of the target regions. Activity-cost centers included administrative, initial training (including resuscitation equipment), and follow-up training expenses. Sensitivity analysis was used to project the costs of countrywide expansion of the program across all mainland regions of Tanzania and to model the costs of program maintenance over one and five years following initiation. Total costs for the Mbeya Region were $202,240, with the highest proportion due to initial training and equipment (45.2%), followed by central program administration (37.2%) and follow-up visits (17.6%). Within Mbeya, 49 training sessions were undertaken, involving the training of 1,341 health providers from 336 health facilities in eight districts. To similarly expand the HBB program across the 25 regions of mainland Tanzania, the total economic cost is projected to be around $4,000,000 (around $600 per facility). Following sensitivity analyses, the estimated total for initial rollout across all of Tanzania lies between $2,934,793 and $4,309,595. Maintaining the program nationally under the current model is estimated to cost $2,019,115 for a further one year and $5,640,794 for a further five years of ongoing program support. HBB implementation is a relatively low-cost intervention with potential for high impact on perinatal mortality in resource-poor settings. We show here that nationwide expansion of this program across the range of health provision levels and regions of Tanzania would be feasible. This study provides policymakers and investors with the relevant cost estimates for national rollout of this potentially life-saving neonatal intervention.
Estimating patient time costs associated with colorectal cancer care.
Yabroff, K Robin; Warren, Joan L; Knopf, Kevin; Davis, William W; Brown, Martin L
2005-07-01
Nonmedical costs of care, such as patient time associated with travel to, waiting for, and seeking medical care, are rarely measured systematically with population-based data. The purpose of this study was to estimate patient time costs associated with colorectal cancer care. We identified categories of key medical services for colorectal cancer care and then estimated patient time associated with each service category using data from national surveys. To estimate average service frequencies for each service category, we used a nested case control design and SEER-Medicare data. Estimates were calculated by phase of care for cases and controls, using data from 1995 to 1998. Average service frequencies were then combined with estimates of patient time for each category of service, and the value of patient time assigned. Net patient time costs were calculated for each service category, summarized by phase of care, and compared with previously reported net direct costs of colorectal cancer care. Net patient time costs for the 3 phases of colorectal cancer care averaged $4,592 (95% confidence interval [CI] $4,427-$4,757) over the 12 months of the initial phase, $2,788 (95% CI $2,614-$2,963) over the 12 months of the terminal phase, and $25 (95% CI: $23-$26) per month in the continuing phase of care. Hospitalizations accounted for more than two thirds of these estimates. Patient time costs were 19.3% of direct medical costs in the initial phase, 15.8% in the continuing phase, and 36.8% in the terminal phase of care. Patient time costs are an important component of the costs of colorectal cancer care. Application of this method to other tumor sites and inclusion of other components of the costs of medical care will be important in delineating the economic burden of cancer in the United States.
Loss Estimation Modeling Of Scenario Lahars From Mount Rainier, Washington State, Using HAZUS-MH
NASA Astrophysics Data System (ADS)
Walsh, T. J.; Cakir, R.
2011-12-01
We have adapted lahar hazard zones developed by Hoblitt and others (1998), converted to digital data by Schilling and others (2008), into the appropriate format for HAZUS-MH, FEMA's loss estimation model. We assume that structures engulfed by cohesive lahars will suffer complete loss, and that structures affected by post-lahar flooding will be appropriately modeled by the HAZUS-MH flood model. Another approach investigated is to estimate the momentum of lahars, calculate a lateral force, and apply the earthquake model, substituting the lahar lateral force for PGA. Our initial model used the HAZUS default data, which include estimates of building type and value from census data. This model estimated a loss of about $12 billion for a repeat lahar similar to the Electron Mudflow down the Puyallup River. Because HAZUS data are based on census tracts, this estimated damage includes everything in the census tract, even buildings outside of the lahar hazard zone. To correct this, we acquired assessors' data from all of the affected counties and converted them into HAZUS format. We then clipped them to the boundaries of the lahar hazard zone to more precisely delineate those properties actually at risk in each scenario. This refined our initial loss estimate to about $6 billion, excluding building content values. We are also investigating rebuilding the lahar hazard zones by applying Lahar-Z to a more accurate topographic grid derived from recent Lidar data acquired from the Puget Sound Lidar Consortium and Mount Rainier National Park. Final results of these models for the major drainages of Mount Rainier will be posted to the Washington Interactive Geologic Map (http://www.dnr.wa.gov/ResearchScience/Topics/GeosciencesData/Pages/geology_portal.aspx).
Linsell, L; Dawson, J; Zondervan, K; Rose, P; Randall, T; Fitzpatrick, R; Carr, A
2006-02-01
To estimate the national prevalence and incidence of adults consulting for a shoulder condition and to investigate patterns of diagnosis, treatment, consultation and referral 3 yr after initial presentation. Prevalence and incidence rates were estimated for 658,469 patients aged 18 and over in the year 2000 using a primary care database, the IMS Disease Analyzer-Mediplus UK. A cohort of 9,215 incident cases was followed-up prospectively for 3 yr beyond the initial consultation. The annual prevalence and incidence of people consulting for a shoulder condition was 2.36% [95% confidence interval (CI) 2.32-2.40%] and 1.47% (95% CI 1.44-1.50%), respectively. Prevalence increased linearly with age whilst incidence peaked at around 50 yr then remained static at around 2%. Around half of the incident cases consulted once only, while 13.6% were still consulting with a shoulder problem during the third year of follow-up. During the 3 yr following initial presentation, 22.4% of patients were referred to secondary care, 30.8% were prescribed non-steroidal anti-inflammatory drugs and 10.6% were given an injection by their general practitioner (GP). GPs tended to use a limited number of generalized codes when recording a diagnosis; just five of 426 possible Read codes relating to shoulder conditions accounted for 74.6% of the diagnoses of new cases recorded by GPs. The prevalence of people consulting for shoulder problems in primary care is substantially lower than community-based estimates of shoulder pain. Most referrals occur within 3 months of initial presentation, but only a minority of patients are referred to orthopaedic specialists or rheumatologists. GPs may lack confidence in applying precise diagnoses to shoulder conditions.
Filtering observations without the initial guess
NASA Astrophysics Data System (ADS)
Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.
2017-12-01
Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom fully known in practice, and best-guess mean values (e.g., "climatology" or "background" data if available) accompanied by some arbitrarily set covariance values are often used in their place. It is therefore desirable to be able to exploit efficient (time-sequential) Bayesian algorithms like the Kalman filter while not being forced to provide a prior distribution (i.e., initial mean and covariance). An example of this is the estimation of the terrestrial reference frame (TRF), where the requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We will present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (the marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for the task of filtering observational data in general cases where a prior assumption on the initial estimate is not available and/or desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and the square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. Such approximation approaches are also briefed in the presentation.
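A static-state Python sketch of the filter's defining property follows: the information matrix Y = P^-1 and information vector y = P^-1 x_hat start at exactly zero (no initial guess at all), accumulate H^T R^-1 H and H^T R^-1 z with each observation, and yield an estimate x_hat = Y^-1 y only once Y becomes full rank. Dimensions and noise values are invented for illustration; the TRF application adds dynamics and smoothing.

    import numpy as np

    rng = np.random.default_rng(0)
    x_true = np.array([1.0, -2.0, 0.5])       # static 3-parameter state
    Y = np.zeros((3, 3))                      # information matrix: NO prior
    y = np.zeros(3)                           # information vector

    for k in range(6):
        H = rng.normal(size=(1, 3))           # one scalar observation per step
        R = np.array([[0.01]])
        z = H @ x_true + rng.normal(0, 0.1, 1)
        # information-form measurement update (no initial mean/covariance needed)
        Y += H.T @ np.linalg.inv(R) @ H
        y += (H.T @ np.linalg.inv(R) @ z).ravel()
        if np.linalg.matrix_rank(Y) == 3:     # recover the estimate once observable
            print(k, np.linalg.solve(Y, y))

Until Y is invertible the filter simply carries partial information forward, which is exactly the behavior the abstract contrasts with the standard Kalman filter's mandatory initial distribution.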
Cerebrospinal fluid neopterin decay characteristics after initiation of antiretroviral therapy
2013-01-01
Background Neopterin, a biomarker of macrophage activation, is elevated in the cerebrospinal fluid (CSF) of most HIV-infected individuals and decreases after initiation of antiretroviral therapy (ART). We studied the decay characteristics of neopterin in CSF and blood after commencement of ART in HIV-infected subjects and estimated the set-point levels of CSF neopterin after ART-mediated viral suppression. Methods CSF and blood neopterin were longitudinally measured in 102 neurologically asymptomatic HIV-infected subjects who were treatment-naïve or had been off ART for ≥ 6 months. We used a non-linear model to estimate neopterin decay in response to ART and the stable neopterin set-point attained after prolonged ART. Seven subjects with HIV-associated dementia (HAD) who initiated ART were studied for comparison. Results Non-HAD patients were followed for a median of 84.7 months. Though CSF neopterin concentrations decreased rapidly after ART initiation, it was estimated that set-point levels would be below normal CSF neopterin levels (<5.8 nmol/L) in only 60/102 (59%) of these patients. Pre-ART CSF neopterin was the primary predictor of the set-point (P <0.001). HAD subjects had higher baseline median CSF neopterin levels than non-HAD subjects (P <0.0001). Based on the non-HAD model, only 14% of HAD patients were predicted to reach normal levels. Conclusions After virologically suppressive ART, abnormal CSF neopterin levels persisted in 41% of non-HAD and the majority of HAD patients. ART is not fully effective in ameliorating macrophage activation in the CNS as well as in blood, especially in subjects with higher pre-ART levels of immune activation. PMID:23664008
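As a sketch of this kind of decay modeling, the Python snippet below fits a single-exponential decay-to-set-point form, n(t) = setpoint + (n0 - setpoint)*exp(-k*t), to hypothetical CSF neopterin measurements. The functional form is a plausible stand-in and the data points are invented, not the study's.

    import numpy as np
    from scipy.optimize import curve_fit

    def neopterin(t, n0, setpoint, k):
        # decay from pre-ART level n0 toward a stable post-ART set-point
        return setpoint + (n0 - setpoint) * np.exp(-k * t)

    t = np.array([0, 2, 6, 12, 24, 48, 84])                  # months on ART
    csf = np.array([21.0, 15.2, 10.4, 8.1, 7.0, 6.6, 6.5])   # nmol/L, illustrative

    p, _ = curve_fit(neopterin, t, csf, p0=[20, 6, 0.2])
    n0, setpoint, k = p
    print(f"estimated set-point {setpoint:.1f} nmol/L "
          f"(normal < 5.8 nmol/L -> abnormal: {setpoint >= 5.8})")

Comparing each patient's fitted set-point to the 5.8 nmol/L normal threshold reproduces the abstract's classification of who is predicted to normalize.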
Equation of state for detonation product gases
NASA Astrophysics Data System (ADS)
Nagayama, Kunihito; Kubota, Shiro
2003-03-01
A thermodynamic analysis procedure for the detonation product equation of state (EOS), together with the experimental data set of the detonation velocity as a function of initial density, has been formulated. The Chapman-Jouguet (CJ) state [W. Fickett and W. C. Davis, Detonation: Theory and Experiment (University of California Press, Berkeley, 1979)] on the p-ν plane is found to be well approximated by the envelope function formed by the collection of Rayleigh lines with many different initial density states. The Jones-Stanyukovich-Manson relation [W. Fickett and W. C. Davis, Detonation: Theory and Experiment (University of California Press, Berkeley, 1979)] is used to estimate the error included in this approximation. Based on this analysis, a simplified integration method to calculate the Grüneisen parameter along the CJ state curve with different initial densities, utilizing the cylinder expansion data, has been presented. The procedure gives a simple way of obtaining the EOS function, compatible with the detonation velocity data. Theoretical analysis has been performed for the precision of the estimated EOS function. The EOS of the pentaerythritol tetranitrate explosive is calculated and compared with some of the experimental data, such as CJ pressure data and cylinder expansion data.
NASA Technical Reports Server (NTRS)
Mikic, I.; Krucinski, S.; Thomas, J. D.
1998-01-01
This paper presents a method for segmentation and tracking of cardiac structures in ultrasound image sequences. The developed algorithm is based on the active contour framework. This approach requires initial placement of the contour close to the desired position in the image, usually an object outline. The best contour shape and position are then calculated, assuming that at this configuration a global energy function associated with the contour attains its minimum. Active contours can be used for tracking by selecting the solution from a previous frame as the initial position in the present frame. Such an approach, however, fails for large displacements of the object of interest. This paper presents a technique that incorporates information on pixel velocities (optical flow) into the estimate of the initial contour to enable tracking of fast-moving objects. The algorithm was tested on several ultrasound image sequences, each covering one complete cardiac cycle. The contour successfully tracked boundaries of mitral valve leaflets, the aortic root, and endocardial borders of the left ventricle. The algorithm-generated outlines were compared against manual tracings by expert physicians. The automated method resulted in contours that were within the boundaries of intraobserver variability.
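The optical-flow initialization step might look like the Python/OpenCV sketch below: pyramidal Lucas-Kanade flow is evaluated at the previous frame's contour points, and the flow vectors shift the contour to its starting position in the new frame before energy minimization begins. The synthetic frames and the fallback for failed tracks are illustrative choices, not the paper's implementation.

    import cv2
    import numpy as np

    def advect_contour(prev_frame, next_frame, contour_pts):
        # shift last frame's contour by the local optical flow (Lucas-Kanade),
        # giving the active-contour solver a start near the fast-moving boundary
        pts = contour_pts.astype(np.float32).reshape(-1, 1, 2)
        moved, status, _err = cv2.calcOpticalFlowPyrLK(prev_frame, next_frame,
                                                       pts, None)
        ok = status.ravel() == 1
        out = pts.copy()
        out[ok] = moved[ok]            # keep the old point where tracking failed
        return out.reshape(-1, 2)

    # usage sketch: a bright disc moves 8 px to the right between two frames
    prev = np.zeros((128, 128), np.uint8); cv2.circle(prev, (60, 60), 20, 255, -1)
    nxt = np.zeros((128, 128), np.uint8); cv2.circle(nxt, (68, 60), 20, 255, -1)
    theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
    contour = np.c_[60 + 20 * np.cos(theta), 60 + 20 * np.sin(theta)]
    print(advect_contour(prev, nxt, contour).mean(axis=0))  # center moves toward x=68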
Tytell, Eric D; Ellington, Charles P
2003-01-01
The vortex wake structure of the hawkmoth, Manduca sexta, was investigated using a vortex ring generator. Based on existing kinematic and morphological data, a piston and tube apparatus was constructed to produce circular vortex rings with the same size and disc loading as a hovering hawkmoth. Results show that the artificial rings were initially laminar, but developed turbulence owing to azimuthal wave instability. The initial impulse and circulation were accurately estimated for laminar rings using particle image velocimetry; after the transition to turbulence, initial circulation was generally underestimated. The underestimate for turbulent rings can be corrected if the transition time and velocity profile are accurately known, but this correction will not be feasible for experiments on real animals. It is therefore crucial that the circulation and impulse be estimated while the wake vortices are still laminar. The scaling of the ring Reynolds number suggests that flying animals of about the size of hawkmoths may be the largest animals whose wakes stay laminar for long enough to perform such measurements during hovering. Thus, at low advance ratios, they may be the largest animals for which wake circulation and impulse can be accurately measured. PMID:14561347
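For the laminar-ring measurements, circulation can be estimated from a planar velocity field by integrating vorticity over the plane, Γ = ∬ (∂v/∂x − ∂u/∂y) dA. The Python sketch below does this on a synthetic Lamb-Oseen-like vortex standing in for PIV data; grid size, core radius, and circulation are illustrative.

    import numpy as np

    n, L = 128, 0.1                        # grid points, domain size (m)
    x = np.linspace(-L / 2, L / 2, n)
    X, Y = np.meshgrid(x, x)
    r = np.sqrt(X**2 + Y**2) + 1e-12
    gamma_true, rc = 1.0e-3, 0.01          # circulation (m^2/s), core radius (m)
    vtheta = gamma_true / (2 * np.pi * r) * (1 - np.exp(-r**2 / rc**2))
    u, v = -vtheta * Y / r, vtheta * X / r # synthetic "PIV" velocity components

    dx = x[1] - x[0]
    omega = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)  # dv/dx - du/dy
    gamma_est = omega.sum() * dx * dx      # area integral of vorticity
    print(f"estimated circulation {gamma_est:.2e} (true {gamma_true:.2e})")

Once the ring turns turbulent this planar estimate degrades, which is the practical reason the abstract insists on measuring while the wake is still laminar.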
A first look at lightning energy determined from GLM
NASA Astrophysics Data System (ADS)
Bitzer, P. M.; Burchfield, J. C.; Brunner, K. N.
2017-12-01
The Geostationary Lightning Mapper (GLM), launched in November 2016 onboard GOES-16, has been undergoing post-launch and product post-launch testing. While these tests have typically focused on lightning metrics such as detection efficiency, false alarm rate, and location accuracy, GLM data also provide other attributes of the lightning discharge. Namely, the optical energy radiated by lightning may provide information useful for lightning physics and for relating lightning energy to severe weather development. This work presents initial estimates of the lightning optical energy detected by GLM during this initial testing, with a focus on observations during a field campaign in Huntsville in spring 2017. This region is advantageous for the comparison due to the proliferation of ground-based lightning instrumentation, including a lightning mapping array, an interferometer, HAMMA (an array of electric field change meters), high-speed video cameras, and several long-range VLF networks. In addition, the field campaign included airborne observations of the optical emission and electric field changes. The initial estimates will be compared with previous observations using TRMM-LIS. A comparison between the operational and scientific GLM data sets will also be discussed.
Paul, Repon C.; Rahman, Mahmudur; Gurley, Emily S.; Hossain, M. Jahangir; Diorditsa, Serguei; Hasan, ASM Mainul; Banu, Sultana S.; Alamgir, ASM; Rahman, Muhammad Aziz; Sandhu, Hardeep; Fischer, Marc; Luby, Stephen P.
2011-01-01
Acute meningoencephalitis syndrome surveillance was initiated in three medical college hospitals in Bangladesh in October 2007 to identify Japanese encephalitis (JE) cases. We estimated the population-based incidence of JE in the three hospitals' catchment areas by adjusting the hospital-based crude incidence of JE by the proportion of catchment area meningoencephalitis cases who were admitted to surveillance hospitals. Instead of a traditional house-to-house survey, which is expensive for a disease with low frequency, we attempted a novel approach to identify meningoencephalitis cases in the hospital catchment area through social networks among the community residents. The estimated JE incidence was 2.7/100,000 population in Rajshahi (95% confidence interval [CI] = 1.8–4.9), 1.4 in Khulna (95% CI = 0.9–4.1), and 0.6 in Chittagong (95% CI = 0.4–0.9). Bangladesh should consider a pilot project to introduce JE vaccine in high-incidence areas. PMID:21813862
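The catchment adjustment itself is simple arithmetic, as in this Python sketch with hypothetical counts: the hospital-based crude incidence is divided by the proportion of catchment-area meningoencephalitis cases that reached a surveillance hospital.

    # hypothetical counts for one catchment area (not the study's data)
    je_cases_in_hospital = 30        # JE cases identified by hospital surveillance
    catchment_population = 2_500_000
    proportion_admitted = 0.40       # share of catchment meningoencephalitis cases
                                     # admitted to the surveillance hospital

    crude = je_cases_in_hospital / catchment_population * 100_000
    adjusted = crude / proportion_admitted
    print(f"crude {crude:.1f} vs adjusted {adjusted:.1f} JE cases per 100,000/yr")

The social-network case finding described above is what supplies the admitted proportion without a costly house-to-house survey.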
Matsuzaki, Ryosuke; Tachikawa, Takeshi; Ishizuka, Junya
2018-03-01
Accurate simulations of carbon fiber-reinforced plastic (CFRP) molding are vital for the development of high-quality products. However, such simulations are challenging and previous attempts to improve the accuracy of simulations by incorporating the data acquired from mold monitoring have not been completely successful. Therefore, in the present study, we developed a method to accurately predict various CFRP thermoset molding characteristics based on data assimilation, a process that combines theoretical and experimental values. The degree of cure as well as temperature and thermal conductivity distributions during the molding process were estimated using both temperature data and numerical simulations. An initial numerical experiment demonstrated that the internal mold state could be determined solely from the surface temperature values. A subsequent numerical experiment to validate this method showed that estimations based on surface temperatures were highly accurate in the case of degree of cure and internal temperature, although predictions of thermal conductivity were more difficult.
Hyper-X Mach 10 Trajectory Reconstruction
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Martin, John G.; Tartabini, Paul V.; Thornblom, Mark N.
2005-01-01
This paper discusses the formulation and development of a trajectory reconstruction tool for the NASA X-43A/Hyper-X high speed research vehicle, and its implementation for the reconstruction and analysis of flight test data. Extended Kalman filtering techniques are employed to reconstruct the trajectory of the vehicle, based upon numerical integration of inertial measurement data along with redundant measurements of the vehicle state. The equations of motion are formulated in order to include the effects of several systematic error sources, whose values may also be estimated by the filtering routines. Additionally, smoothing algorithms have been implemented in which the final value of the state (or an augmented state that includes other systematic error parameters to be estimated) and covariance are propagated back to the initial time to generate the best-estimated trajectory, based upon all available data. The methods are applied to the problem of reconstructing the trajectory of the Hyper-X vehicle from data obtained during the Mach 10 test flight, which occurred on November 16th 2004.
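A compact linear-Gaussian stand-in for this filter-then-smooth machinery is sketched below in Python: a forward Kalman filter tracks a constant-velocity model against noisy position measurements, and a Rauch-Tung-Striebel backward pass propagates the final state back to the initial time to produce the best-estimated trajectory. The real tool uses an extended filter driven by inertial data with augmented systematic-error states; all values here are illustrative.

    import numpy as np

    dt, N = 0.1, 100
    A = np.array([[1, dt], [0, 1]])           # constant-velocity model
    Q = np.diag([1e-5, 1e-3])
    H = np.array([[1.0, 0.0]]); R = np.array([[0.04]])

    rng = np.random.default_rng(2)
    truth = np.cumsum(np.full(N, 0.5 * dt))   # position at constant speed 0.5
    z = truth + rng.normal(0, 0.2, N)

    x, P = np.zeros(2), np.eye(2)
    xf, Pf, xp, Pp = [], [], [], []           # filtered and predicted histories
    for k in range(N):
        x, P = A @ x, A @ P @ A.T + Q         # predict
        xp.append(x); Pp.append(P)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([z[k]]) - H @ x)   # measurement update
        P = (np.eye(2) - K @ H) @ P
        xf.append(x); Pf.append(P)

    xs = xf[-1]                               # RTS backward smoothing pass
    smoothed = [xs]
    for k in range(N - 2, -1, -1):
        C = Pf[k] @ A.T @ np.linalg.inv(Pp[k + 1])
        xs = xf[k] + C @ (xs - xp[k + 1])
        smoothed.append(xs)
    smoothed.reverse()
    print("best estimate at initial time:", np.round(smoothed[0], 3))

The backward pass is what lets all the data, including measurements taken late in the flight, inform the reconstructed state at the start of the trajectory.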
Script-independent text line segmentation in freestyle handwritten documents.
Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan
2008-08-01
Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ([1], [2], for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
Shock initiation and detonation properties of bisfluorodinitroethyl formal (FEFO)
NASA Astrophysics Data System (ADS)
Gibson, L. L.; Sheffield, S. A.; Dattelbaum, Dana M.; Stahl, David B.
2012-03-01
FEFO is a liquid explosive with a density of 1.60 g/cm3 and an energy output similar to that of trinitrotoluene (TNT), making it one of the more energetic liquid explosives. Here we describe shock initiation experiments that were conducted using a two-stage gas gun, with magnetic gauges used to measure the wave profiles during a shock-to-detonation transition. Unreacted Hugoniot data, time-to-detonation (overtake) measurements, and reactive wave profiles were obtained from each experiment. FEFO was found to initiate by the homogeneous initiation model, similar to all other liquid explosives we have studied (nitromethane, isopropyl nitrate, hydrogen peroxide). The new unreacted Hugoniot points agree well with other published data. A universal liquid Hugoniot estimation slightly underpredicts the measured Hugoniot data. FEFO is very insensitive, with about the same shock sensitivity as the triamino-trinitro-benzene (TATB)-based explosive PBX9502 and cast TNT.
Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.
Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni
2016-01-01
In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in applications requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth which is based on a stochastic technique of triangulation. In the proposed method, the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. This arrangement simplifies the overall problem and focuses it on the position estimation of the aerial vehicle. The tracking of visual features is also made easier by the stabilized video. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
Parent-Child Communication and Marijuana Initiation: Evidence Using Discrete-Time Survival Analysis
Nonnemaker, James M.; Silber-Ashley, Olivia; Farrelly, Matthew C.; Dench, Daniel
2012-01-01
This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or—in the case of youth reports of communication—potentially harmful (leading to increased likelihood of marijuana initiation). PMID:22958867
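A minimal sketch of the discrete-time survival setup, assuming statsmodels: each youth contributes one person-period row per year until initiation or censoring, and the logit of the initiation hazard is regressed on a communication indicator. The cohort below is synthetic, with effect sizes chosen only for illustration.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    rows = []
    for i in range(2000):                    # synthetic cohort of youths
        talked = rng.integers(0, 2)          # parent-child drug talk (0/1)
        for age in range(12, 19):            # one person-period row per year
            logit = -3.0 + 0.25 * talked + 0.15 * (age - 12)
            event = rng.random() < 1 / (1 + np.exp(-logit))
            rows.append((talked, age, int(event)))
            if event:
                break                        # no rows after initiation

    data = np.array(rows, dtype=float)
    X = sm.add_constant(data[:, :2])         # intercept, talked, age
    fit = sm.Logit(data[:, 2], X).fit(disp=0)
    print(fit.params)                        # discrete-time hazard coefficients
    print("communication odds ratio:", np.exp(fit.params[1]))

An odds ratio above 1 for the communication indicator corresponds to the "potentially harmful" direction reported above; dropping rows after initiation is what makes the logit a hazard model rather than an ordinary prevalence regression.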
Assessment of Confounders in Comparative Effectiveness Studies From Secondary Databases.
Franklin, Jessica M; Schneeweiss, Sebastian; Solomon, Daniel H
2017-03-15
Secondary clinical databases are an important and growing source of data for comparative effectiveness research (CER) studies. However, measurement of confounders, such as biomarker values or patient-reported health status, in secondary clinical databases may not align with the initiation of a new treatment. In many published CER analyses of registry data, investigators assessed confounders based on the first questionnaire in which the new exposure was recorded. However, it is known that adjustment for confounders measured after the start of exposure can lead to biased treatment effect estimates. In the present study, we conducted simulations to compare assessment strategies for a dynamic clinical confounder in a registry-based comparative effectiveness study of 2 therapies. As expected, we found that adjustment for the confounder value at the time of the first questionnaire after the start of exposure creates a biased estimate of the total effect of exposure choice on outcome when the confounder mediates part of the effect. However, adjustment for the prior value can also be badly biased when measured long before exposure initiation. Thus, investigators should carefully consider the timing of confounder measurements relative to exposure initiation, and the rate of change in the confounder, in order to choose the most relevant measure for each patient.
Cost-benefit analysis of biopsy methods for suspicious mammographic lesions.
Fahy, B N; Bold, R J; Schneider, P D; Khatri, V; Goodnight, J E
2001-09-01
We hypothesized that stereotactic core biopsy (SCB) is more cost-effective than needle-localized biopsy (NLB) for evaluation and treatment of mammographic lesions. A computer-generated mathematical model was developed, based on clinical outcome modeling, to estimate costs accrued during evaluation and treatment of suspicious mammographic lesions. Total costs were determined for evaluation and subsequent treatment of cancer when either SCB or NLB was used as the initial biopsy method. Cost was estimated by the cumulative work relative value units accrued. The risk of malignancy based on the Breast Imaging Reporting and Data System (BIRADS) score and the mammographic suspicion of ductal carcinoma in situ were varied to simulate common clinical scenarios. The main outcome measure was the total cost accumulated during evaluation and subsequent surgical therapy (if required). Evaluation of BIRADS 5 lesions (highly suggestive, risk of malignancy = 90%) resulted in equivalent relative value units for both techniques (SCB, 15.54; NLB, 15.47). Evaluation of lesions highly suspicious for ductal carcinoma in situ yielded similar total treatment relative value units (SCB, 11.49; NLB, 10.17). Only for evaluation of BIRADS 4 lesions (suspicious abnormality, risk of malignancy = 34%) was SCB more cost-effective than NLB (SCB, 7.65 vs. NLB, 15.66). No difference in cost-benefit was found when lesions highly suggestive of malignancy (BIRADS 5) or those suspicious for ductal carcinoma in situ were evaluated initially with SCB vs. NLB, thereby disproving the hypothesis. Only for intermediate-risk lesions (BIRADS 4) did initial evaluation with SCB yield a greater cost savings than NLB.
Observed secondary organic aerosol (SOA) and organic nitrate yields from NO3 oxidation of isoprene
NASA Astrophysics Data System (ADS)
Rollins, A. W.; Fry, J. L.; Kiendler-Scharr, A.; Wooldridge, P. J.; Brown, S. S.; Fuchs, H.; Dube, W.; Mensah, A.; Tillmann, R.; Dorn, H.; Brauers, T.; Cohen, R. C.
2008-12-01
Formation of organic nitrates and secondary organic aerosol (SOA) from the NO3 oxidation of isoprene has been studied at atmospheric concentrations of VOC (10 ppb) and oxidant (<100 ppt NO3) in the presence of ammonium sulfate seed aerosol in the atmosphere simulation chamber SAPHIR at Forschungszentrum Jülich. Cavity Ringdown Spectroscopy (CaRDS) and thermal dissociation CaRDS measurements of NO3 and N2O5, Thermal Dissociation - Laser Induced Fluorescence (TD-LIF) detection of alkyl nitrates (RONO2), and Aerodyne Aerosol Mass Spectrometer (AMS) measurements of aerosol composition were all used, in comparison to a Master Chemical Mechanism (MCM) based chemical kinetics box model, to quantify the product yields from two stages in isoprene oxidation. We find significant yields of organic nitrate formation both from the initial isoprene + NO3 reaction (71%) and from the reaction of NO3 with the initial oxidation products (30%-60%). Under these low-concentration conditions (~1 μg/m3), measured SOA production was greater than instrument noise only for the second oxidation step. Based on the modeled chemistry, we estimate an SOA mass yield of 10% (relative to isoprene mass reacted) for the reaction of the initial oxidation products with NO3. This yield is found to be consistent with the estimated saturation concentration (C*) of the presumed gas-phase products of the doubly oxidized isoprene, where both oxidations lead to the addition of nitrate, carbonyl, and hydroxyl groups.
Lo, Justin C; Allard, Gayatri N; Otton, S Victoria; Campbell, David A; Gobas, Frank A P C
2015-12-01
In vitro bioassays to estimate biotransformation rate constants of contaminants in fish are currently being investigated to improve bioaccumulation assessments of hydrophobic contaminants. The present study investigates the relationship between chemical substrate concentration and the in vitro biotransformation rate of 4 environmental contaminants (9-methylanthracene, pyrene, chrysene, and benzo[a]pyrene) in rainbow trout (Oncorhynchus mykiss) liver S9 fractions, and methods to determine maximum first-order biotransformation rate constants. Substrate depletion experiments using a series of initial substrate concentrations showed that in vitro biotransformation rates exhibit strong concentration dependence, consistent with a Michaelis-Menten kinetic model. The results indicate that depletion rate constants measured at initial substrate concentrations of 1 μM (a current convention) could underestimate the in vitro biotransformation potential and may cause bioconcentration factors to be overestimated if in vitro biotransformation rates are used to assess bioconcentration factors in fish. Depletion rate constants measured using thin-film sorbent dosing experiments were not statistically different from the maximum depletion rate constants derived using a series of solvent delivery-based depletion experiments for 3 of the 4 test chemicals. Multiple solvent delivery-based depletion experiments at a range of initial concentrations are recommended for determining the concentration dependence of in vitro biotransformation rates in fish liver fractions, whereas a single sorbent-phase dosing experiment may be able to provide reasonable approximations of maximum depletion rates of very hydrophobic substances.
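Under Michaelis-Menten kinetics, the apparent first-order depletion rate constant at initial concentration S0 is k_dep(S0) = Vmax/(Km + S0), so rate constants measured across a series of initial concentrations can be fitted to recover the maximum (low-concentration) value k_max = Vmax/Km. The Python sketch below does this with illustrative numbers, not the study's data.

    import numpy as np
    from scipy.optimize import curve_fit

    def k_dep(s0, vmax, km):
        # apparent first-order depletion rate constant at initial concentration s0
        return vmax / (km + s0)

    s0 = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0])         # initial substrate, uM
    k_obs = np.array([0.92, 0.85, 0.66, 0.50, 0.33, 0.20])  # observed k, 1/h

    (vmax, km), _ = curve_fit(k_dep, s0, k_obs, p0=[0.5, 0.5])
    print(f"Vmax={vmax:.2f} uM/h, Km={km:.2f} uM, k_max={vmax / km:.2f} 1/h")
    print(f"k at the 1 uM convention: {k_dep(1.0, vmax, km):.2f} 1/h "
          f"(underestimates k_max)")

The last line illustrates the abstract's point: measuring only at the conventional 1 μM dose reports a rate constant well below the low-concentration maximum.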
A tiered approach for integrating exposure and dosimetry with ...
High-throughput (HT) risk screening approaches apply in vitro dose-response data to estimate potential health risks that arise from exposure to chemicals. However, much uncertainty is inherent in relating bioactivities observed in an in vitro system to the perturbations of biological mechanisms that lead to apical adverse health outcomes in living organisms. The chemical-agnostic Adverse Outcome Pathway (AOP) framework addresses this uncertainty by acting as a scaffold onto which pathway-based data can be arranged to aid in the understanding of in vitro toxicity testing results. In addition, risk estimation also requires reconciling chemical concentrations sufficient to produce bioactivity in vitro with concentrations that trigger a molecular initiating event (MIE) at the relevant biological target in vivo. Such target site exposures (TSEs) can be estimated using computational models to integrate exposure information with a chemical’s absorption, distribution, metabolism, and elimination (ADME) processes. In this presentation, the utility of a tiered approach for integrating exposure, ADME, and hazard into risk-based decision making will be demonstrated using several case studies, along with the investigation of how uncertainties in exposure and ADME might impact risk estimates. These case studies involve 1) identifying and prioritizing chemicals capable of altering biological pathways based on their potential to reach an in vivo target; 2) evaluating the infl
NASA Astrophysics Data System (ADS)
Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang
2018-01-01
Spatially correlated errors are typically ignored in data assimilation, reducing the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information, making assimilation results more accurate. A method, denoted TC_Cov, is proposed for soil moisture data assimilation to estimate spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov. AMSR-E soil moisture was assimilated both with a diagonal R matrix computed using TC and with a nondiagonal R matrix estimated by the proposed TC_Cov. The ensemble Kalman filter was used as the assimilation method. Our assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and unbiased root mean square difference (ubRMSD) metrics. These experiments confirmed that diagonal R assimilation results deteriorate when the model simulation is more accurate than the observation data. Furthermore, nondiagonal R achieved higher correlation coefficients and lower ubRMSD values than diagonal R in the experiments, demonstrating the effectiveness of TC_Cov in estimating a richly structured R for data assimilation. In sum, compared with diagonal R, nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform observation data.
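The triple-collocation step underlying TC_Cov can be sketched as follows in Python, assuming three collocated estimates of the same signal with mutually independent errors: covariance differencing, Var(eps_x) = Cov(x - y, x - z), yields each system's error variance. (TC_Cov additionally estimates cross-covariances between locations to fill the off-diagonal of R.) The data here are synthetic.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 20000
    truth = rng.normal(0.25, 0.05, n)          # soil moisture "signal"
    sat = truth + rng.normal(0, 0.03, n)       # three collocated systems with
    model = truth + rng.normal(0, 0.02, n)     # mutually independent errors
    insitu = truth + rng.normal(0, 0.01, n)

    def tc_error_var(x, y, z):
        # classical TC identity: Var(eps_x) = Cov(x - y, x - z)
        return np.cov(x - y, x - z)[0, 1]

    for name, (x, y, z) in {"sat": (sat, model, insitu),
                            "model": (model, sat, insitu),
                            "insitu": (insitu, sat, model)}.items():
        print(name, "error std ~", round(float(np.sqrt(tc_error_var(x, y, z))), 4))

The recovered standard deviations (~0.03, 0.02, 0.01) match the injected error levels, which is the property that lets TC build an R matrix without knowing the truth.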
VizieR Online Data Catalog: SDSS bulge, disk and total stellar mass estimates (Mendel+, 2014)
NASA Astrophysics Data System (ADS)
Mendel, J. T.; Simard, L.; Palmer, M.; Ellison, S. L.; Patton, D. R.
2014-01-01
We present a catalog of bulge, disk, and total stellar mass estimates for ~660,000 galaxies in the Legacy area of the Sloan Digital Sky Survey (SDSS) Data Release 7. These masses are based on a homogeneous catalog of g- and r-band photometry described by Simard et al. (2011, Cat. J/ApJS/196/11), which we extend here with bulge+disk and Sersic profile photometric decompositions in the SDSS u, i, and z bands. We discuss the methodology used to derive stellar masses from these data via fitting to broadband spectral energy distributions (SEDs), and show that the typical statistical uncertainty on total, bulge, and disk stellar mass is ~0.15 dex. Despite relatively small formal uncertainties, we argue that SED modeling assumptions, including the choice of synthesis model, extinction law, initial mass function, and details of stellar evolution, likely contribute an additional 60% systematic uncertainty in any mass estimate based on broadband SED fitting. We discuss several approaches for identifying genuine bulge+disk systems based on both their statistical likelihood and an analysis of their one-dimensional surface-brightness profiles, and include these metrics in the catalogs. Estimates of the total, bulge, and disk stellar masses for both normal and dust-free models and their uncertainties are made publicly available here. (4 data files).
Curvature estimation for multilayer hinged structures with initial strains
NASA Astrophysics Data System (ADS)
Nikishkov, G. P.
2003-10-01
A closed-form estimate of curvature for hinged multilayer structures with initial strains is developed. The finite element method is used for modeling of self-positioning microstructures. The geometrically nonlinear problem with large rotations and large displacements is solved using a step procedure with node coordinate updates. Finite element results for the curvature of a hinged micromirror with variable width are compared to the closed-form estimates.
Price, Stephen F.; Payne, Antony J.; Howat, Ian M.; Smith, Benjamin E.
2011-01-01
We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland’s three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing. PMID:21576500
Melting in Superheated Silicon Films Under Pulsed-Laser Irradiation
NASA Astrophysics Data System (ADS)
Wang, Jin Jimmy
This thesis examines melting in superheated silicon films in contact with SiO2 under pulsed laser irradiation. An excimer-laser pulse was employed to induce heating of the film by irradiating the film through the transparent fused-quartz substrate such that most of the beam energy was deposited near the bottom Si-SiO2 interface. Melting dynamics were probed via in situ transient reflectance measurements. The temperature profile was estimated computationally by incorporating temperature- and phase-dependent physical parameters and the time-dependent intensity profile of the incident excimer-laser beam obtained from the experiments. The results indicate that a significant degree of superheating occurred in the subsurface region of the film. Surface-initiated melting was observed in spite of the internal heating scheme, which resulted in the film being substantially hotter at and near the bottom Si-SiO2 interface. By considering that the surface melts at the equilibrium melting point, the solid-phase-only heat-flow analysis estimates that the bottom Si-SiO2 interface can be superheated by at least 220 K during excimer-laser irradiation. It was found that at higher laser fluences (i.e., at higher temperatures), melting can be triggered internally. At heating rates of 10^10 K/s, melting was observed to initiate at or near the (100)-oriented Si-SiO2 interface at temperatures estimated to be over 300 K above the equilibrium melting point. Based on theoretical considerations, it was deduced that melting in the superheated solid initiated via a nucleation and growth process. Nucleation rates were estimated from the experimental data using Johnson-Mehl-Avrami-Kolmogorov (JMAK) analysis. Interpretation of the results using classical nucleation theory suggests that nucleation of the liquid phase occurred via the heterogeneous mechanism along the Si-SiO2 interface.
Improved rapid magnitude estimation for a community-based, low-cost MEMS accelerometer network
Chung, Angela I.; Cochran, Elizabeth S.; Kaiser, Anna E.; Christensen, Carl M.; Yildirim, Battalgazi; Lawrence, Jesse F.
2015-01-01
Immediately following the Mw 7.2 Darfield, New Zealand, earthquake, over 180 Quake‐Catcher Network (QCN) low‐cost micro‐electro‐mechanical systems accelerometers were deployed in the Canterbury region. Using data recorded by this dense network from 2010 to 2013, we significantly improved the QCN rapid magnitude estimation relationship. The previous scaling relationship (Lawrence et al., 2014) did not accurately estimate the magnitudes of nearby (<35 km) events. The new scaling relationship estimates earthquake magnitudes within 1 magnitude unit of the GNS Science GeoNet earthquake catalog magnitudes for 99% of the events tested, within 0.5 magnitude units for 90% of the events, and within 0.25 magnitude units for 57% of the events. These magnitudes are reliably estimated within 3 s of the initial trigger recorded on at least seven stations. In this report, we present the methods used to calculate a new scaling relationship and demonstrate the accuracy of the revised magnitude estimates using a program that is able to retrospectively estimate event magnitudes using archived data.
Method for hyperspectral imagery exploitation and pixel spectral unmixing
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2003-01-01
An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve the abundance vector for the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a one-iteration procedure, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point speeds up the evolution of the genetic algorithm. After obtaining the accurate abundance estimate, the procedure moves to the next pixel, uses the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel with the robust filter, and again uses the genetic algorithm to derive an accurate abundance estimate efficiently based on the robust filter solution. This iteration continues until the pixels in the hyperspectral image cube end.
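The warm-start pattern described above can be sketched in Python as below, with a nonnegativity- and sum-to-one-projected gradient solver standing in for the robust filter: the previous pixel's abundance vector seeds each new pixel's solve, so only a few iterations are needed, and a slower global search (the genetic algorithm's role in the patent) would then refine each result. Endmembers and data are synthetic.

    import numpy as np

    rng = np.random.default_rng(5)
    bands, n_end, n_pix = 50, 3, 100
    E = np.abs(rng.normal(1, 0.3, (bands, n_end)))        # endmember spectra
    abund_true = np.abs(rng.normal(0.3, 0.1, (n_pix, n_end)))
    abund_true /= abund_true.sum(axis=1, keepdims=True)   # sum-to-one abundances
    cube = abund_true @ E.T + rng.normal(0, 0.01, (n_pix, bands))

    def unmix(pixel, a0, iters=100, lr=5e-3):
        # projected-gradient abundance solve, warm-started at a0 (previous pixel)
        a = a0.copy()
        for _ in range(iters):
            a -= lr * E.T @ (E @ a - pixel)   # gradient of ||E a - pixel||^2
            a = np.clip(a, 0, None)           # nonnegativity projection
            a /= a.sum() + 1e-12              # sum-to-one projection
        return a

    a = np.full(n_end, 1.0 / n_end)   # first pixel: a global solve would go here
    est = []
    for p in range(n_pix):
        a = unmix(cube[p], a)         # previous solution seeds the next pixel
        est.append(a)
    print("mean abs abundance error:", np.abs(np.array(est) - abund_true).mean())

Because neighboring pixels tend to have similar abundances, each warm-started solve begins near its answer, which is the source of the speedup the patent claims.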
Bauermeister, José A; Zimmerman, Marc A; Johns, Michelle M; Glowacki, Pietreck; Stoddard, Sarah; Volz, Erik
2012-09-01
We used a web version of Respondent-Driven Sampling (webRDS) to recruit a sample of young adults (ages 18-24) and examined whether this strategy would result in alcohol and other drug (AOD) prevalence estimates comparable to national estimates (National Survey on Drug Use and Health [NSDUH]). We recruited 22 initial participants (seeds) via Facebook to complete a web survey examining AOD risk correlates. Sequential, incentivized recruitment continued until our desired sample size was achieved. After correcting for webRDS clustering effects, we contrasted our AOD prevalence estimates (past 30 days) to NSDUH estimates by comparing the 95% confidence intervals of prevalence estimates. We found comparable AOD prevalence estimates between our sample and NSDUH for the past 30 days for alcohol, marijuana, cocaine, Ecstasy (3,4-methylenedioxymethamphetamine, or MDMA), and hallucinogens. Cigarette use was lower than NSDUH estimates. WebRDS may be a suitable strategy to recruit young adults online. We discuss the unique strengths and challenges that may be encountered by public health researchers using webRDS methods.
Lodi, Sara; Phillips, Andrew; Logan, Roger; Olson, Ashley; Costagliola, Dominique; Abgrall, Sophie; van Sighem, Ard; Reiss, Peter; Miró, José M; Ferrer, Elena; Justice, Amy; Gandhi, Neel; Bucher, Heiner C; Furrer, Hansjakob; Moreno, Santiago; Monge, Susana; Touloumi, Giota; Pantazis, Nikos; Sterne, Jonathan; Young, Jessica G; Meyer, Laurence; Seng, Rémonie; Dabis, Francois; Vandehende, Marie-Anne; Pérez-Hoyos, Santiago; Jarrín, Inma; Jose, Sophie; Sabin, Caroline; Hernán, Miguel A
2015-08-01
Recommendations have differed nationally and internationally with respect to the best time to start antiretroviral therapy (ART). We compared effectiveness of three strategies for initiation of ART in high-income countries for HIV-positive individuals who do not have AIDS: immediate initiation, initiation at a CD4 count less than 500 cells per μL, and initiation at a CD4 count less than 350 cells per μL. We used data from the HIV-CAUSAL Collaboration of cohort studies in Europe and the USA. We included 55,826 individuals aged 18 years or older who were diagnosed with HIV-1 infection between January, 2000, and September, 2013, had not started ART, did not have AIDS, and had CD4 count and HIV-RNA viral load measurements within 6 months of HIV diagnosis. We estimated relative risks of death and of death or AIDS-defining illness, mean survival time, the proportion of individuals in need of ART, and the proportion of individuals with HIV-RNA viral load less than 50 copies per mL, as would have been recorded under each ART initiation strategy after 7 years of HIV diagnosis. We used the parametric g-formula to adjust for baseline and time-varying confounders. Median CD4 count at diagnosis of HIV infection was 376 cells per μL (IQR 222-551). Compared with immediate initiation, the estimated relative risk of death was 1·02 (95% CI 1·01-1·02) when ART was started at a CD4 count less than 500 cells per μL, and 1·06 (1·04-1·08) with initiation at a CD4 count less than 350 cells per μL. Corresponding estimates for death or AIDS-defining illness were 1·06 (1·06-1·07) and 1·20 (1·17-1·23), respectively. Compared with immediate initiation, the mean survival time at 7 years with a strategy of initiation at a CD4 count less than 500 cells per μL was 2 days shorter (95% CI 1-2) and at a CD4 count less than 350 cells per μL was 5 days shorter (4-6). 7 years after diagnosis of HIV, 100%, 98·7% (95% CI 98·6-98·7), and 92·6% (92·2-92·9) of individuals would have been in need of ART with immediate initiation, initiation at a CD4 count less than 500 cells per μL, and initiation at a CD4 count less than 350 cells per μL, respectively. Corresponding proportions of individuals with HIV-RNA viral load less than 50 copies per mL at 7 years were 87·3% (87·3-88·6), 87·4% (87·4-88·6), and 83·8% (83·6-84·9). The benefits of immediate initiation of ART, such as prolonged survival and AIDS-free survival and increased virological suppression, were small in this high-income setting with relatively low CD4 count at HIV diagnosis. The estimated beneficial effect on AIDS is less than in recently reported randomised trials. Increasing rates of HIV testing might be as important as a policy of early initiation of ART. National Institutes of Health.
Parametric cost estimation for space science missions
NASA Astrophysics Data System (ADS)
Lillie, Charles F.; Thompson, Bruce E.
2008-07-01
Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs for future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
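Mass-based CERs of this kind commonly take a power-law form, cost = a * mass^b, fitted in log-log space to historical missions. The Python sketch below illustrates the pattern with entirely hypothetical mission data, not any actual CER.

    import numpy as np

    # hypothetical historical missions: dry mass (kg) vs development cost ($M)
    mass = np.array([150, 320, 540, 900, 1500])
    cost = np.array([90, 160, 240, 370, 540])

    # fit log(cost) = log(a) + b * log(mass)
    b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
    a = np.exp(log_a)
    print(f"CER: cost = {a:.1f} * mass^{b:.2f}")

    new_mass = 700.0
    print(f"estimate for a {new_mass:.0f} kg design: ${a * new_mass**b:.0f}M")

In practice the residual scatter around such a fit is what produces the cost range and most-probable-cost figures mentioned above.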
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable.
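For reference, the CTMI growth-rate model being identified can be written compactly; the Python sketch below follows the standard Rosso et al. (1993) form with its four parameters (Tmin, Topt, Tmax, mu_opt). The parameter values used here are illustrative, not the paper's estimates.

    import numpy as np

    def ctmi(T, t_min, t_opt, t_max, mu_opt):
        # Cardinal Temperature Model with Inflection (Rosso et al., 1993):
        # growth rate is mu_opt at t_opt and zero outside (t_min, t_max)
        T = np.asarray(T, dtype=float)
        num = (T - t_max) * (T - t_min) ** 2
        den = (t_opt - t_min) * ((t_opt - t_min) * (T - t_opt)
                                 - (t_opt - t_max) * (t_opt + t_min - 2 * T))
        return np.where((T > t_min) & (T < t_max), mu_opt * num / den, 0.0)

    # illustrative parameters for a mesophilic organism
    print(ctmi([15, 25, 37, 45], t_min=5.0, t_opt=37.0, t_max=47.0, mu_opt=2.0))

Designing experiments for pairs of these four parameters, as in the two-parameter strategies above, amounts to choosing temperature profiles that make the pair's effect on this curve maximally informative.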
NASA Astrophysics Data System (ADS)
Caporali, E.; Chiarello, V.; Galeati, G.
2014-12-01
Peak discharge estimates for a given return period are of primary importance in engineering practice for risk assessment and hydraulic structure design. Different statistical methods are chosen here for the assessment of the flood frequency curve: one indirect technique based on extreme rainfall event analysis, and the Peak Over Threshold (POT) model and the Annual Maxima approach as direct techniques using river discharge data. In the framework of the indirect method, a Monte Carlo simulation approach is adopted to determine a derived frequency distribution of peak runoff, using a probabilistic formulation of the SCS-CN method as a stochastic rainfall-runoff model. A Monte Carlo simulation is used to generate a sample of runoff events from stochastic combinations of rainfall depth, storm duration, and initial loss inputs. The distribution of the rainfall storm events is assumed to follow the Generalized Pareto (GP) law, whose parameters are estimated from the Generalized Extreme Value (GEV) parameters of the annual maximum data. The evaluation of the initial abstraction ratio is investigated, since it is one of the most questionable assumptions in the SCS-CN model and plays a key role in river basins characterized by high-permeability soils, mainly governed by the infiltration excess mechanism. In order to take into account the uncertainty of the model parameters, a modified approach that is able to revise and re-evaluate the original value of the initial abstraction ratio is implemented. In the POT model the choice of the threshold has been an essential issue, based mainly on a compromise between bias and variance. The GEV distribution fitted to the annual maximum discharges is therefore compared with the Pareto-distributed peaks to check the suitability of the frequency-of-occurrence representation. The methodology is applied to a large dam in the Serchio river basin, located in the Tuscany Region. The application has shown that the Monte Carlo simulation technique can be a useful tool to provide more robust estimates alongside those obtained by direct statistical methods.
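A minimal sketch of the derived-distribution step: Monte Carlo sampling of storm depth and the initial-abstraction ratio pushed through the SCS-CN transformation. The CN value, the lambda range, and the heavy-tailed storm law below are illustrative (numpy's `pareto` is a Lomax-type stand-in for the GP law), not the study's calibrated inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# SCS-CN runoff (all depths in mm); CN and lambda range are illustrative.
CN = 75.0
S = 25400.0 / CN - 254.0            # potential maximum retention
lam = rng.uniform(0.05, 0.2, n)     # uncertain initial-abstraction ratio
# Storm depth from a heavy-tailed law (Lomax-type stand-in for GP)
P = 30.0 + rng.pareto(3.0, n) * 25.0

Ia = lam * S                        # initial abstraction
Q = np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

# Empirical quantile as a crude "design event" estimate
print("99th-percentile runoff depth: %.1f mm" % np.percentile(Q, 99))
```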
Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.
Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang
2018-02-24
This paper proposes a nonlinear correlation-based wavelet scale selection technique to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during a force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that scale combination VI is suitable for estimating force from the extensors and combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experimental results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force derived by the proposed method are better than those obtained by the former methods.
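A hedged sketch of the general idea, correlating per-scale CWT energy envelopes of the SEMG with measured force using PyWavelets; the paper's nonlinear-correlation measure, sensitivity analysis, and SCA rule are replaced here by a plain linear correlation on synthetic data.

```python
import numpy as np
import pywt  # PyWavelets

def scale_force_correlation(semg, force, scales, wavelet="morl"):
    """Correlate per-scale CWT energy envelopes of SEMG with measured force."""
    coefs, _ = pywt.cwt(semg, scales, wavelet)
    corr = []
    for row in np.abs(coefs):                     # envelope at each scale
        smooth = np.convolve(row, np.ones(50) / 50, mode="same")
        corr.append(np.corrcoef(smooth, force)[0, 1])
    return np.array(corr)

# Synthetic demo: force-modulated noise stands in for SEMG
t = np.linspace(0, 10, 2000)
force = 0.5 + 0.5 * np.sin(0.4 * np.pi * t) ** 2
semg = force * np.random.randn(t.size)
corr = scale_force_correlation(semg, force, scales=np.arange(1, 11))
print("best scale:", np.argmax(corr) + 1)
```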
A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds
Poreba, Martyna; Goulette, François
2015-01-01
With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and ensures valuable results when pre-aligned point clouds with the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589
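RLMR itself matches linear features; as a generic, hedged illustration of the least-squares rigid alignment underlying a fine-registration step, the following numpy sketch estimates R and t from already-matched point pairs via SVD (the paper's line-to-line similarity metric is not reproduced here).

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ~ R @ P + t, for paired Nx3 point sets."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # reflection-safe rotation
    return R, cQ - R @ cP

# Demo: recover a known rotation and translation
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true, atol=1e-8), t)
```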
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
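A schematic sketch of the core idea of narrowing the sampling space by updating "parent" bounds from the fittest samples; this is a generic illustration of bound narrowing, not the LOBARE algorithm itself.

```python
import numpy as np

def narrow_bounds(loss, lo, hi, n_samples=500, keep=0.2, n_iter=10, seed=1):
    """Iteratively shrink parameter bounds around the fittest samples
    (schematic of LOBARE-style sampling-space narrowing)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    for _ in range(n_iter):
        X = rng.uniform(lo, hi, size=(n_samples, lo.size))
        fit = np.apply_along_axis(loss, 1, X)
        best = X[np.argsort(fit)[: int(keep * n_samples)]]
        lo, hi = best.min(axis=0), best.max(axis=0)   # updated "parent" bounds
    return lo, hi

# Demo on a quadratic bowl with optimum at (0.3, -1.2, 2.0)
target = np.array([0.3, -1.2, 2.0])
lo, hi = narrow_bounds(lambda x: np.sum((x - target) ** 2),
                       lo=[-5, -5, -5], hi=[5, 5, 5])
print((lo + hi) / 2)  # approaches the optimum
```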
Graham, Matthew; Suk, Jonathan E.; Takahashi, Saki; Metcalf, C. Jessica; Jimenez, A. Paez; Prikazsky, Vladimir; Ferrari, Matthew J.; Lessler, Justin
2018-01-01
Abstract. We report on and evaluate the process and findings of a real-time modeling exercise in response to an outbreak of measles in Lola prefecture, Guinea, in early 2015 in the wake of the Ebola crisis. Multiple statistical methods for the estimation of the size of the susceptible (i.e., unvaccinated) population were applied to weekly reported measles case data on seven subprefectures throughout Lola. Stochastic compartmental models were used to project future measles incidence in each subprefecture in both an initial and a follow-up iteration of forecasting. Measles susceptibility among 1- to 5-year-olds was estimated to be between 24% and 43% at the beginning of the outbreak. Based on this high baseline susceptibility, initial projections forecasted a large outbreak occurring over approximately 10 weeks and infecting 40 children per 1,000. Subsequent forecasts based on updated data mitigated this initial projection, but still predicted a significant outbreak. A catch-up vaccination campaign took place at the same time as this second forecast and measles cases quickly receded. Of note, case reports used to fit models changed significantly between forecast rounds. Model-based projections of both current population risk and future incidence can help in setting priorities and planning during an outbreak response. A swiftly changing situation on the ground, coupled with data uncertainties and the need to adjust standard analytical approaches to deal with sparse data, presents significant challenges. Appropriate presentation of results as planning scenarios, as well as presentations of uncertainty and two-way communication, is essential to the effective use of modeling studies in outbreak response. PMID:29532773
Magma ocean formation due to giant impacts
NASA Technical Reports Server (NTRS)
Tonks, W. B.; Melosh, H. J.
1993-01-01
The thermal effects of giant impacts are studied by estimating the melt volume generated by the initial shock wave and corresponding magma ocean depths. Additionally, the effects of the planet's initial temperature on the generated melt volume are examined. The shock pressure required to completely melt the material is determined using the Hugoniot curve plotted in pressure-entropy space. Once the melting pressure is known, an impact melting model is used to estimate the radial distance melting occurred from the impact site. The melt region's geometry then determines the associated melt volume. The model is also used to estimate the partial melt volume. Magma ocean depths resulting from both excavated and retained melt are calculated, and the melt fraction not excavated during the formation of the crater is estimated. The fraction of a planet melted by the initial shock wave is also estimated using the model.
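The chain from shock pressure to melt volume can be sketched under a standard power-law pressure-decay assumption; the exponent, core radius, and pressures below are illustrative assumptions, not the paper's calibrated values.

```python
import math

# Hedged sketch: melt volume from a power-law shock-pressure decay
# P(r) = P0 * (r/a)^(-n) outside an isobaric core of radius a.
def melt_radius(P0_gpa, P_melt_gpa, a_km, n=2.5):
    """Radial distance (km) out to which shock pressure exceeds P_melt."""
    return a_km * (P0_gpa / P_melt_gpa) ** (1.0 / n)

def melt_volume_km3(r_melt_km):
    """Hemispherical melt region below the impact site."""
    return (2.0 / 3.0) * math.pi * r_melt_km ** 3

r = melt_radius(P0_gpa=500.0, P_melt_gpa=150.0, a_km=100.0)
print(round(r, 1), "km ->", f"{melt_volume_km3(r):.2e} km^3")
```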
Critical elements on fitting the Bayesian multivariate Poisson Lognormal model
NASA Astrophysics Data System (ADS)
Zamzuri, Zamira Hasanah binti
2015-10-01
Motivated by a problem on fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters, and tuning parameters. These issues have not been highlighted in the literature. Based on simulation studies conducted, we have shown that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrated the sensitivity of a specific hyperparameter, which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.
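The tuning-parameter issue can be illustrated generically: a random-walk Metropolis sampler whose step size is adapted toward a target acceptance rate during burn-in. This is a hedged, minimal sketch on a toy posterior, not the authors' MPL sampler.

```python
import numpy as np

def adaptive_mh(logpost, x0, n=20_000, target=0.3, seed=0):
    """Random-walk Metropolis with step size tuned toward a target
    acceptance rate during burn-in (schematic 'tuning parameter')."""
    rng = np.random.default_rng(seed)
    x, step, chain, acc = np.asarray(x0, float), 0.5, [], 0
    for i in range(1, n + 1):
        prop = x + step * rng.normal(size=x.size)
        if np.log(rng.uniform()) < logpost(prop) - logpost(x):
            x, acc = prop, acc + 1
        chain.append(x.copy())
        if i % 500 == 0 and i < n // 2:       # adapt only during burn-in
            step *= np.exp(acc / i - target)  # nudge toward target rate
    return np.array(chain)

# Demo: bivariate normal posterior; post-burn-in mean is near zero
chain = adaptive_mh(lambda x: -0.5 * np.sum(x ** 2), x0=[4.0, -4.0])
print(chain[len(chain) // 2:].mean(axis=0))
```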
GROWTH AND INEQUALITY: MODEL EVALUATION BASED ON AN ESTIMATION-CALIBRATION STRATEGY
Jeong, Hyeok; Townsend, Robert
2010-01-01
This paper evaluates two well-known models of growth with inequality that have explicit micro underpinnings related to household choice. With incomplete markets or transactions costs, wealth can constrain investment in business and the choice of occupation and also constrain the timing of entry into the formal financial sector. Using the Thai Socio-Economic Survey (SES), we estimate the distribution of wealth and the key parameters that best fit cross-sectional data on household choices and wealth. We then simulate the model economies for two decades at the estimated initial wealth distribution and analyze whether the model economies at those micro-fit parameter estimates can explain the observed macro and sectoral aspects of income growth and inequality change. Both models capture important features of Thai reality. Anomalies and comparisons across the two distinct models yield specific suggestions for improved research on the micro foundations of growth and inequality. PMID:20448833
Highway traffic estimation of improved precision using the derivative-free nonlinear Kalman Filter
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Zervos, Nikolaos; Melkikh, Alexey
2015-12-01
The paper proves that the PDE dynamic model of highway traffic is differentially flat, and by applying spatial discretization it shows that the model can be transformed into an equivalent linear canonical state-space form. For the latter representation of the traffic's dynamics, state estimation is performed with the Derivative-free nonlinear Kalman Filter. The proposed filter consists of the Kalman Filter recursion applied to the transformed state-space model of the highway traffic. Moreover, it makes use of an inverse transformation, based again on differential flatness theory, which enables estimates of the state variables of the initial nonlinear PDE model to be obtained. By avoiding approximate linearizations and the truncation of nonlinear terms from the PDE model of the traffic's dynamics, the proposed filtering method outperforms, in terms of accuracy, other nonlinear estimators such as the Extended Kalman Filter. The article's theoretical findings are confirmed through simulation experiments.
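Once the model is in linear canonical form, the filter reduces to the standard Kalman recursion. Below is a minimal sketch on a toy two-state canonical system; the flatness-based transformations themselves are not shown.

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of the linear Kalman filter recursion."""
    # Predict
    x = A @ x
    P = A @ P @ A.T + Q
    # Update
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    x = x + K @ (z - C @ x)
    P = (np.eye(len(x)) - K @ C) @ P
    return x, P

# Toy two-state canonical (integrator-chain) system with noisy output
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[0.01]])
x, P = np.zeros(2), np.eye(2)
for z in (0.11, 0.22, 0.28, 0.41):
    x, P = kalman_step(x, P, np.array([z]), A, C, Q, R)
print(x)
```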
Dynamic State Estimation and Parameter Calibration of DFIG based on Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Rui; Huang, Zhenyu; Wang, Shaobu
2015-07-30
With the growing interest in the application of wind energy, the doubly fed induction generator (DFIG) plays an essential role in the industry nowadays. To deal with the increasing stochastic variations introduced by intermittent wind resources and responsive loads, dynamic state estimation (DSE) is introduced in power systems associated with DFIGs. However, this dynamic analysis can fail when the parameters of the DFIGs are not known accurately enough. To solve the problem, an ensemble Kalman filter (EnKF) method is proposed for the state estimation and parameter calibration tasks. In this paper, a DFIG is modeled and implemented with the EnKF method. Sensitivity analysis is demonstrated regarding the measurement noise, initial state errors, and parameter errors. The results indicate that this EnKF method has robust performance on the state estimation and parameter calibration of DFIGs.
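A minimal sketch of joint state-parameter estimation with a stochastic EnKF on a toy scalar decay model (not a DFIG model): the ensemble is augmented with the uncertain parameter, so the analysis step calibrates it alongside the state.

```python
import numpy as np

def enkf_update(X, z, hx, r, rng):
    """Stochastic EnKF analysis step for a scalar observation.
    X: ensemble (n_members, n_vars); hx maps a member to the observation."""
    Y = np.array([hx(x) for x in X])
    Xm, Ym = X.mean(axis=0), Y.mean()
    Pxy = (X - Xm).T @ (Y - Ym) / (len(X) - 1)    # cross-covariance
    Pyy = np.var(Y, ddof=1) + r                   # innovation variance
    K = Pxy / Pyy                                 # Kalman gain (n_vars,)
    z_pert = z + rng.normal(0.0, np.sqrt(r), len(X))
    return X + np.outer(z_pert - Y, K)

# Demo: calibrate an uncertain decay parameter a in x' = -a*x
rng = np.random.default_rng(3)
a_true, dt, x0 = 0.8, 0.1, 1.0
X = np.column_stack([np.full(100, x0), rng.normal(0.5, 0.3, 100)])  # [x, a]
x_truth = x0
for _ in range(50):
    x_truth *= np.exp(-a_true * dt)
    X[:, 0] *= np.exp(-X[:, 1] * dt)              # forecast each member
    z = x_truth + rng.normal(0, 0.01)             # noisy measurement
    X = enkf_update(X, z, lambda x: x[0], r=1e-4, rng=rng)
print("estimated a:", X[:, 1].mean())
```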
NASA Technical Reports Server (NTRS)
1978-01-01
The author has identified the following significant results. The initial CAS estimates, which were made for each month from April through August, were considerably higher than the USDA/SRS estimates. This was attributed to: (1) the practice of considering bare ground as potential wheat and counting it as wheat; (2) overestimation of the wheat proportions in segments having only a small amount of wheat; and (3) the classification of confusion crops as wheat. At the end of the season most of the segments were reworked using improved methods based on experience gained during the season. In particular, new procedures were developed to solve the three problems listed above. These and other improvements used in the rework experiment resulted in at-harvest estimates that were much closer to the USDA/SRS estimates than those obtained during the regular season.
Crack initiation modeling of a directionally-solidified nickel-base superalloy
NASA Astrophysics Data System (ADS)
Gordon, Ali Page
Combustion gas turbine components designed for application in electric power generation equipment are subject to periodic replacement as a result of cracking, damage, and mechanical property degeneration that render them unsafe for continued operation. In view of the significant costs associated with inspecting, servicing, and replacing damaged components, there has been much interest in developing models that not only predict service life, but also estimate the evolved microstructural state of the material. This thesis explains manifestations of microstructural damage mechanisms that facilitate fatigue crack nucleation in newly-developed directionally-solidified (DS) Ni-base superalloy components exposed to elevated temperatures and high stresses. In this study, models were developed and validated for damage and life prediction using DS GTD-111 as the subject material. This material, proprietary to General Electric Energy, has a chemical composition and grain structure designed to withstand creep damage occurring in the first and second stage blades of gas-powered turbines. The service temperatures in these components, which generally exceed 600°C, facilitate the onset of one or more damage mechanisms related to fatigue, creep, or environment. The study was divided into an empirical phase, which consisted of experimentally simulating service conditions in fatigue specimens, and a modeling phase, which entailed numerically simulating the stress-strain response of the material. Experiments have been carried out to simulate a variety of thermal, mechanical, and environmental operating conditions endured by longitudinally (L) and transversely (T) oriented DS GTD-111. Both in-phase and out-of-phase thermo-mechanical fatigue tests were conducted. In some cases, tests in extreme environments/temperatures were needed to isolate one or at most two of the mechanisms causing damage. Microstructural examinations were carried out via SEM and optical microscopy. A continuum crystal plasticity model was used to simulate the material behavior in the L and T orientations. The constitutive model was implemented in ABAQUS and a parameter estimation scheme was developed to obtain the material constants. A physically-based model was developed for correlating crack initiation life based on the experimental life data, and predictions are made using the crack initiation model. Assuming a unique relationship between the damage fraction and cycle fraction with respect to cycles to crack initiation for each damage mode, the total crack initiation life has been represented in terms of the individual damage components (fatigue, creep-fatigue, creep, and oxidation-fatigue) observed at the end state of crack initiation.
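The partition of total initiation life into the four damage modes is stated only verbally; a common way to express such a partition, offered here as a hedged sketch rather than the thesis's actual formulation, is a Miner-type linear summation:

```latex
% Hedged sketch: a Miner-type partition of crack-initiation life into the
% damage modes named in the text (the exact form in the thesis may differ).
\[
\frac{1}{N_{\mathrm{total}}}
  = \frac{1}{N_{\mathrm{fat}}}
  + \frac{1}{N_{\mathrm{creep\text{-}fat}}}
  + \frac{1}{N_{\mathrm{creep}}}
  + \frac{1}{N_{\mathrm{ox\text{-}fat}}},
\qquad
D = \sum_i \frac{n_i}{N_i} \;\; (\text{initiation at } D = 1).
\]
```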
Conceptual Design of a Communication-Based Deep Space Navigation Network
NASA Technical Reports Server (NTRS)
Anzalone, Evan J.; Chuang, C. H.
2012-01-01
As the need grows for increased autonomy and position knowledge accuracy to support missions beyond Earth orbit, engineers must develop more advanced navigation sensors and systems that operate independently of Earth-based analysis and processing. Several spacecraft are approaching this problem using inter-spacecraft radiometric tracking and onboard autonomous optical navigation methods. This paper proposes an alternative implementation to aid in spacecraft position fixing. The proposed Network-Based Navigation technique takes advantage of the communication data being sent between spacecraft and between spacecraft and ground control to embed navigation information. The navigation system uses these packets to provide navigation estimates to an onboard navigation filter to augment traditional ground-based radiometric tracking techniques. As opposed to using digital signal measurements to capture inherent information of the transmitted signal itself, this method relies on the embedded navigation packet headers to calculate a navigation estimate. This method is heavily dependent on clock accuracy, and initial results show promising performance for a notional system.
Tran, Linh; Yiannoutsos, Constantin T.; Musick, Beverly S.; Wools-Kaloustian, Kara K.; Siika, Abraham; Kimaiyo, Sylvester; van der Laan, Mark J.; Petersen, Maya
2017-01-01
In conducting studies on an exposure of interest, a systematic roadmap should be applied for translating causal questions into statistical analyses and interpreting the results. In this paper we describe an application of one such roadmap applied to estimating the joint effect of both time to availability of a nurse-based triage system (low risk express care, LREC) and individual enrollment in the program among HIV patients in East Africa. Our study population comprises 16,513 subjects found eligible for this task-shifting program within 15 clinics in Kenya between 2006 and 2009, with each clinic starting the LREC program between 2007 and 2008. After discretizing follow-up into 90-day time intervals, we targeted the population mean counterfactual outcome (i.e., the counterfactual probability of either dying or being lost to follow-up) at up to 450 days after initial LREC eligibility under three fixed treatment interventions. These were (i) under no program availability during the entire follow-up, (ii) under immediate program availability at initial eligibility, but non-enrollment during the entire follow-up, and (iii) under immediate program availability and enrollment at initial eligibility. We further estimated the controlled direct effect of immediate program availability compared to no program availability, under a hypothetical intervention to prevent individual enrollment in the program. Targeted minimum loss-based estimation was used to estimate the mean outcome, while Super Learning was implemented to estimate the required nuisance parameters. Analyses were conducted with the ltmle R package; analysis code is available at an online repository as an R package. Results showed that at 450 days, the probability of in-care survival for subjects with immediate availability and enrollment was 0.93 (95% CI: 0.91, 0.95) and 0.87 (95% CI: 0.86, 0.87) for subjects with immediate availability never enrolling. For subjects without LREC availability, it was 0.91 (95% CI: 0.90, 0.92). Immediate program availability without individual enrollment, compared to no program availability, was estimated to slightly albeit significantly decrease survival by 4% (95% CI: 0.03, 0.06, p < 0.01). Immediate availability and enrollment resulted in a 7% higher in-care survival compared to immediate availability with non-enrollment after 450 days (95% CI: −0.08, −0.05, p < 0.01). The results are consistent with a fairly small impact of both availability and enrollment in the LREC program on in-care survival. PMID:28736692
Kargupta, Roli; Puttaswamy, Sachidevi; Lee, Aiden J; Butler, Timothy E; Li, Zhongyu; Chakraborty, Sounak; Sengupta, Shramik
2017-06-10
Multiple techniques exist for detecting Mycobacteria, each having its own advantages and drawbacks. Among them, automated culture-based systems like the BACTEC-MGIT™ are popular because they are inexpensive, reliable and highly accurate. However, they have a relatively long "time-to-detection" (TTD). Hence, a method that retains the reliability and low cost of the MGIT system, while reducing TTD, would be highly desirable. Living bacterial cells possess a membrane potential, on account of which they store charge when subjected to an AC field. This charge storage (bulk capacitance) can be estimated using impedance measurements at multiple frequencies. An increase in the number of living cells during culture is reflected in an increase in bulk capacitance, and this forms the basis of our detection. M. bovis BCG and M. smegmatis suspensions with differing initial loads are cultured in MGIT media supplemented with OADC and Middlebrook 7H9 media, respectively; electrical "scans" are taken at regular intervals, and the bulk capacitance is estimated from the scans. Bulk capacitance estimates at later time points are statistically compared to the suspension's baseline value. A statistically significant increase is assumed to indicate the presence of proliferating mycobacteria. Our TTDs were 60 and 36 h for M. bovis BCG and 20 and 9 h for M. smegmatis with initial loads of 1000 CFU/ml and 100,000 CFU/ml, respectively. The corresponding TTDs for the commercial BACTEC MGIT 960 system were 131 and 84.6 h for M. bovis BCG and 41.7 and 12 h for M. smegmatis, respectively. Our culture-based detection method using multi-frequency impedance measurements is capable of detecting mycobacteria faster than current commercial systems.
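A hedged sketch of extracting a bulk capacitance from a multi-frequency impedance scan by fitting a parallel-RC equivalent circuit; the paper's actual circuit model and statistical test may differ, and all values below are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_bulk_capacitance(freqs_hz, z_complex, r0=1e3, c0=1e-9):
    """Fit a parallel-RC model Z = R/(1 + jwRC) to an impedance 'scan'
    and return (R, C_bulk). Log-parameters keep the scales comparable."""
    w = 2 * np.pi * freqs_hz
    def resid(logp):
        R, C = np.exp(logp)
        z_model = R / (1 + 1j * w * R * C)
        return np.abs(z_model - z_complex)
    sol = least_squares(resid, x0=np.log([r0, c0]))
    return np.exp(sol.x)

# Synthetic scan with known R = 2 kOhm, C = 5 nF plus noise
rng = np.random.default_rng(7)
f = np.logspace(3, 6, 25)
w = 2 * np.pi * f
z = 2e3 / (1 + 1j * w * 2e3 * 5e-9)
z += rng.normal(0, 1.0, f.size) + 1j * rng.normal(0, 1.0, f.size)
R_hat, C_hat = fit_bulk_capacitance(f, z)
print(f"R = {R_hat:.0f} Ohm, C_bulk = {C_hat * 1e9:.2f} nF")
```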
One-way quantum computing in superconducting circuits
NASA Astrophysics Data System (ADS)
Albarrán-Arriagada, F.; Alvarado Barrios, G.; Sanz, M.; Romero, G.; Lamata, L.; Retamal, J. C.; Solano, E.
2018-03-01
We propose a method for the implementation of one-way quantum computing in superconducting circuits. Measurement-based quantum computing is a universal quantum computation paradigm in which an initial cluster state provides the quantum resource, while the iteration of sequential measurements and local rotations encodes the quantum algorithm. Up to now, technical constraints have limited a scalable approach to this quantum computing alternative. The initial cluster state can be generated with available controlled-phase gates, while the quantum algorithm makes use of high-fidelity readout and coherent feedforward. With current technology, we estimate that quantum algorithms with above 20 qubits may be implemented in the path toward quantum supremacy. Moreover, we propose an alternative initial state with properties of maximal persistence and maximal connectedness, reducing the required resources of one-way quantum computing protocols.
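As a hedged illustration of the resource state the protocol starts from (not of the superconducting-circuit hardware), the following numpy sketch prepares a three-qubit linear cluster state from |+>⊗|+>⊗|+> using controlled-phase gates.

```python
import numpy as np

# Build a 3-qubit linear cluster state: |+>^3 followed by CZ(0,1), CZ(1,2).
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def cz(n, a, b):
    """Controlled-phase gate between qubits a and b of an n-qubit register."""
    dim = 2 ** n
    U = np.eye(dim)
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[a] == 1 and bits[b] == 1:
            U[i, i] = -1.0
    return U

state = np.kron(np.kron(plus, plus), plus)
state = cz(3, 1, 2) @ cz(3, 0, 1) @ state

# All amplitudes have equal magnitude; the sign pattern encodes the
# CZ-generated entanglement of the cluster state.
print(np.round(state * np.sqrt(8), 3))
```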
Le, Thao N; Stockdale, Gary
2011-10-01
The purpose of this study was to examine the effects of school demographic factors and youths' perceptions of discrimination on delinquency in adolescence and into young adulthood for African American, Asian, Hispanic, and white racial/ethnic groups. Using data from the National Longitudinal Study of Adolescent Health (Add Health), models testing the effect of school-related variables on delinquency trajectories were evaluated for the four racial/ethnic groups using Mplus 5.21 statistical software. Results revealed that greater student ethnic diversity and perceived discrimination, but not teacher ethnic diversity, resulted in higher initial delinquency estimates at 13 years of age for all groups. However, except for African Americans, having a greater proportion of female teachers in the school decreased initial delinquency estimates. For African Americans and whites, a larger school size also increased the initial estimates. Additionally, lower socioeconomic status increased the initial estimates for whites, and being born in the United States increased the initial estimates for Asians and Hispanics. Finally, regardless of the initial delinquency estimate at age 13 and the effect of the school variables, all groups eventually converged to extremely low delinquency in young adulthood, at the age of 21 years. Educators and public policy makers seeking to prevent and reduce delinquency can modify individual risks by modifying characteristics of the school environment. Policies that promote respect for diversity and intolerance toward discrimination, as well as training to help teachers recognize the precursors and signs of aggression and/or violence, may also facilitate a positive school environment, resulting in lower delinquency. Copyright © 2011 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Abadie, J.; Abbott, B. P.; Abbott, R.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Ajith, P.; Allen, B.; Allen, G.; Amador Ceron, E.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Antonucci, F.; Aoudia, S.; Arain, M. A.; Araya, M.; Aronsson, M.; Arun, K. G.; Aso, Y.; Aston, S.; Astone, P.; Atkinson, D. E.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Barker, D.; Barnum, S.; Barone, F.; Barr, B.; Barriga, P.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Bauchrowitz, J.; Bauer, Th S.; Behnke, B.; Beker, M. G.; Belczynski, K.; Benacquista, M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bigotta, S.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birindelli, S.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Blomberg, A.; Boccara, C.; Bock, O.; Bodiya, T. P.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bork, R.; Born, M.; Bose, S.; Bosi, L.; Boyle, M.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Breyer, J.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Budzyński, R.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet-Castell, J.; Burmeister, O.; Buskulic, D.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campagna, E.; Campsie, P.; Cannizzo, J.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C.; Carbognani, F.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Chalkley, E.; Charlton, P.; Chassande Mottin, E.; Chelkowski, S.; Chen, Y.; Chincarini, A.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Clark, D.; Clark, J.; Clayton, J. H.; Cleva, F.; Coccia, E.; Colacino, C. N.; Colas, J.; Colla, A.; Colombini, M.; Conte, R.; Cook, D.; Corbitt, T. R.; Corda, C.; Cornish, N.; Corsi, A.; Costa, C. A.; Coulon, J. P.; Coward, D.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Culter, R. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Danilishin, S. L.; Dannenberg, R.; D'Antonio, S.; Danzmann, K.; Dari, A.; Das, K.; Dattilo, V.; Daudert, B.; Davier, M.; Davies, G.; Davis, A.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; DeBra, D.; Degallaix, J.; del Prete, M.; Dergachev, V.; DeRosa, R.; DeSalvo, R.; Devanka, P.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Emilio, M. Di Paolo; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doomes, E. E.; Dorsher, S.; Douglas, E. S. D.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Dueck, J.; Dumas, J. C.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Engel, R.; Etzel, T.; Evans, M.; Evans, T.; Fafone, V.; Fairhurst, S.; Fan, Y.; Farr, B. F.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Flaminio, R.; Flanigan, M.; Flasch, K.; Foley, S.; Forrest, C.; Forsi, E.; Fotopoulos, N.; Fournier, J. D.; Franc, J.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gammaitoni, L.; Garofoli, J. A.; Garufi, F.; Gemme, G.; Genin, E.; Gennai, A.; Gholami, I.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gill, C.; Goetz, E.; Goggin, L. M.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. 
M.; Greverie, C.; Grosso, R.; Grote, H.; Grunewald, S.; Guidi, G. M.; Gustafson, E. K.; Gustafson, R.; Hage, B.; Hall, P.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Haughian, K.; Hayama, K.; Heefner, J.; Heitmann, H.; Hello, P.; Heng, I. S.; Heptonstall, A.; Hewitson, M.; Hild, S.; Hirose, E.; Hoak, D.; Hodge, K. A.; Holt, K.; Hosken, D. J.; Hough, J.; Howell, E.; Hoyland, D.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Jaranowski, P.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kanner, J.; Katsavounidis, E.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, C.; Kim, H.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kondrashov, V.; Kopparapu, R.; Koranda, S.; Kowalska, I.; Kozak, D.; Krause, T.; Kringel, V.; Krishnamurthy, S.; Krishnan, B.; Królak, A.; Kuehn, G.; Kullman, J.; Kumar, R.; Kwee, P.; Landry, M.; Lang, M.; Lantz, B.; Lastzka, N.; Lazzarini, A.; Leaci, P.; Leong, J.; Leonor, I.; Leroy, N.; Letendre, N.; Li, J.; Li, T. G. F.; Lin, H.; Lindquist, P. E.; Lockerbie, N. A.; Lodhia, D.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lu, P.; Luan, J.; Lubiński, M.; Lucianetti, A.; Lück, H.; Lundgren, A.; Machenschalk, B.; MacInnis, M.; Mackowski, J. M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Mak, C.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIvor, G.; McKechan, D. J. A.; Meadors, G.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Merill, L.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mino, Y.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohanty, S. D.; Mohapatra, S. R. P.; Moraru, D.; Moreau, J.; Moreno, G.; Morgado, N.; Morgia, A.; Morioka, T.; Mors, K.; Mosca, S.; Moscatelli, V.; Mossavi, K.; Mours, B.; MowLowry, C.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murray, P. G.; Nash, T.; Nawrodt, R.; Nelson, J.; Neri, I.; Newton, G.; Nishizawa, A.; Nocera, F.; Nolting, D.; Ochsner, E.; O'Dell, J.; Ogin, G. H.; Oldenburg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Pagliaroli, G.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Pardi, S.; Pareja, M.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patel, P.; Pedraza, M.; Pekowsky, L.; Penn, S.; Peralta, C.; Perreca, A.; Persichetti, G.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Postiglione, F.; Prato, M.; Predoi, V.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Raab, F. J.; Rabaste, O.; Rabeling, D. S.; Radke, T.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. 
H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, P.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Röver, C.; Rogstad, S.; Rolland, L.; Rollins, J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sakata, S.; Sakosky, M.; Salemi, F.; Sammut, L.; Sancho de la Jordana, L.; Sandberg, V.; Sannibale, V.; Santamaría, L.; Santostasi, G.; Saraf, S.; Sassolas, B.; Sathyaprakash, B. S.; Sato, S.; Satterthwaite, M.; Saulson, P. R.; Savage, R.; Schilling, R.; Schnabel, R.; Schofield, R.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Searle, A. C.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Singer, A.; Sintes, A. M.; Skelton, G.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Stein, A. J.; Stein, L. C.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Toncelli, A.; Tonelli, M.; Torres, C.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Trias, M.; Trummer, J.; Tseng, K.; Ugolini, D.; Urbanek, K.; Vahlbruch, H.; Vaishnav, B.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Veggel, A. A.; Vass, S.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Veltkamp, C.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A.; Vinet, J.-Y.; Vocca, H.; Vorvick, C.; Vyachanin, S. P.; Waldman, S. J.; Wallace, L.; Wanner, A.; Ward, R. L.; Was, M.; Wei, P.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wen, S.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, L.; Willke, B.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Woan, G.; Wooley, R.; Worden, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yeaton-Massey, D.; Yoshida, S.; Yu, P. P.; Yvert, M.; Zanolin, M.; Zhang, L.; Zhang, Z.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2010-09-01
We present an up-to-date, comprehensive summary of the rates for all types of compact binary coalescence sources detectable by the initial and advanced versions of the ground-based gravitational-wave detectors LIGO and Virgo. Astrophysical estimates for compact-binary coalescence rates depend on a number of assumptions and unknown model parameters and are still uncertain. The most confident among these estimates are the rate predictions for coalescing binary neutron stars which are based on extrapolations from observed binary pulsars in our galaxy. These yield a likely coalescence rate of 100 Myr-1 per Milky Way Equivalent Galaxy (MWEG), although the rate could plausibly range from 1 Myr-1 MWEG-1 to 1000 Myr-1 MWEG-1 (Kalogera et al 2004 Astrophys. J. 601 L179; Kalogera et al 2004 Astrophys. J. 614 L137 (erratum)). We convert coalescence rates into detection rates based on data from the LIGO S5 and Virgo VSR2 science runs and projected sensitivities for our advanced detectors. Using the detector sensitivities derived from these data, we find a likely detection rate of 0.02 per year for Initial LIGO-Virgo interferometers, with a plausible range between 2 × 10-4 and 0.2 per year. The likely binary neutron-star detection rate for the Advanced LIGO-Virgo network increases to 40 events per year, with a range between 0.4 and 400 per year.
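The conversion from per-galaxy rates to detection rates can be sketched as a volume scaling. This is a back-of-envelope check, not the paper's calculation: the galaxy density below (0.0116 MWEG Mpc⁻³) is a commonly used value, and the orientation-averaged ranges are illustrative round numbers.

```python
import math

def detections_per_year(rate_per_mweg_per_myr, range_mpc,
                        mweg_density_per_mpc3=0.0116):
    """Detections/yr from a per-MWEG rate and an orientation-averaged range."""
    n_gal = (4.0 / 3.0) * math.pi * range_mpc ** 3 * mweg_density_per_mpc3
    return rate_per_mweg_per_myr * 1e-6 * n_gal   # Myr^-1 -> yr^-1

# Likely BNS rate of 100 Myr^-1 MWEG^-1 at illustrative ranges:
print(detections_per_year(100, 15))    # ~0.016/yr, cf. the 0.02/yr quoted
print(detections_per_year(100, 200))   # ~39/yr, cf. the 40/yr quoted
```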
Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.
2017-01-01
A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used: a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied to experimental data, only an increase in measurement noise power was required for the filter to converge and estimate all parameters. A sensitivity analysis with respect to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and the low dimension of the state-space, make the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
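As a hedged sketch of one half of the scheme, the parameter filter with a random-walk parameter model, the following example tracks a slowly varying scalar gain. The full Dual EKF additionally runs a concurrent state filter and estimates neuromuscular parameters and delay, none of which are modeled here; all noise settings are illustrative.

```python
import numpy as np

# Track a time-varying scalar gain k(t) in y = k(t)*u + noise with a
# Kalman filter whose process model is a random walk on the parameter.
rng = np.random.default_rng(5)
n = 2000
u = rng.normal(size=n)                                       # input signal
k_true = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / n)    # slow variation
y = k_true * u + 0.1 * rng.normal(size=n)                    # measured output

k_hat, P, q, r = 0.0, 1.0, 1e-5, 0.01   # estimate, variance, RW/meas. noise
est = np.empty(n)
for t in range(n):
    P += q                              # predict: random-walk growth
    S = u[t] * P * u[t] + r             # innovation variance
    K = P * u[t] / S                    # Kalman gain
    k_hat += K * (y[t] - u[t] * k_hat)  # update parameter estimate
    P *= (1 - K * u[t])
    est[t] = k_hat
print(np.abs(est[500:] - k_true[500:]).mean())  # small tracking error
```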
Dual permeability FEM models for distributed fiber optic sensors development
NASA Astrophysics Data System (ADS)
Aguilar-López, Juan Pablo; Bogaard, Thom
2017-04-01
Fiber optic cables are commonly known for being robust and reliable media for transferring information at the speed of light in glass. Billions of kilometers of cable have been installed around the world for internet connection and real-time information sharing. Yet fiber optic cable is not only a means of information transfer but also a way to sense and measure physical properties of the medium in which it is installed. For dike monitoring, it has been used in the past to detect inner core and foundation temperature changes, which allows estimation of water infiltration during high water events. The DOMINO research project aims to develop a fiber optic based dike monitoring system that directly senses and measures any pore pressure change inside the dike structure. For this purpose, questions such as sensor location, number of sensors, measuring frequency, and required accuracy must be answered during sensor development. All these questions may be initially answered with a finite element model that estimates the effects of pore pressure changes at different locations along the cross section while providing a time-dependent estimate of a stability factor. The sensor aims to monitor two main failure mechanisms at the same time: the piping erosion failure mechanism and the macro-stability failure mechanism. Both mechanisms are modeled and assessed in detail with a finite-element-based dual permeability Darcy-Richards numerical solution. In that manner, it is possible to assess different sensing configurations under different loading scenarios (e.g., high water levels, rainfall events, and initial soil moisture and permeability conditions). The results obtained for the different configurations are then evaluated with an entropy-based performance measure. The added value of this kind of modelling approach for the sensor development is that it allows the piping erosion and macro-stability failure mechanisms to be modeled simultaneously in a time-dependent manner. In that way, the estimated pore pressures may be related to the monitored ones and to both failure mechanisms. Furthermore, the approach is intended to be used in a later stage for real-time monitoring of failure.
The maximum economic depth of groundwater abstraction for irrigation
NASA Astrophysics Data System (ADS)
Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.
2017-12-01
Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of global food production, and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge, and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to still be economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs, or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where the costs of well drilling and the energy costs of pumping, which are functions of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries using GDP per capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000, and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas of the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. The most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of maximum economic depth will be combined with estimates of groundwater depth and storage coefficients to estimate economically attainable groundwater volumes worldwide.
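The pumping-cost side of such a model can be made concrete. The sketch below computes the depth at which lifting costs alone exhaust per-m³ crop revenue; the prices and efficiency are illustrative, and the study's full model also includes drilling costs and investment limits.

```python
# Hedged sketch of the pumping-cost half of the economic model.
RHO_G = 1000.0 * 9.81      # water density (kg/m^3) * gravity (m/s^2)
J_PER_KWH = 3.6e6          # joules per kilowatt-hour

def max_economic_depth(revenue_per_m3, price_per_kwh, pump_efficiency=0.7):
    """Depth (m) at which lifting cost per m^3 equals revenue per m^3."""
    cost_per_m3_per_m = RHO_G / (pump_efficiency * J_PER_KWH) * price_per_kwh
    return revenue_per_m3 / cost_per_m3_per_m

# Example: $0.05 of crop revenue per m^3 of water, $0.10 per kWh of energy
print(f"{max_economic_depth(0.05, 0.10):.0f} m")   # ~128 m, within 50-500 m
```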
Directional Canopy Emissivity Estimation Based on Spectral Invariants
NASA Astrophysics Data System (ADS)
Guo, M.; Cao, B.; Ren, H.; Yongming, D.; Peng, J.; Fan, W.
2017-12-01
Land surface emissivity is a crucial parameter for estimating land surface temperature from remote sensing data and also plays an important role in the physical process of surface energy and water balance from local to global scales. The emissivity varies with surface type and cover. For vegetation, canopy emissivity depends on vegetation type, viewing zenith angle, and canopy structure, which changes across growing stages. Many previous studies have focused on emissivity models, but few are analytic and suited to different canopy structures. In this paper, a new physical analytic model is proposed to estimate the directional emissivity of a homogeneous vegetation canopy based on spectral invariants. The initial model counts the directional absorption in six parts: the direct absorption of the canopy and the soil, the absorption of the canopy and soil after a single scattering, and the absorption after multiple scattering within the canopy-soil system. In order to estimate the emissivity analytically, the pathways of photons absorbed in the canopy-soil system are traced using the re-collision probability (Fig. 1). After a sensitivity analysis of the above six absorptions, the initially complicated model was further simplified into a fixed mathematical expression for the directional emissivity of a vegetation canopy. The model was compared with the 4SAIL, FRA97, FRA02 and DART models (Fig. 2); the results showed that, relative to the new model, the FRA02 model significantly underestimates emissivity while the FRA97 model slightly underestimates it. In contrast, the emissivity difference between the new model and the 4SAIL and DART models was found to be less than 0.002. In general, since the new model combines a closed mathematical expression with accurate results and clear physical meaning, it is promising for extension to the directional emissivity of discrete canopies in further study.
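A hedged sketch of the spectral-invariant bookkeeping such models build on: with canopy interceptance, leaf single-scattering albedo, and re-collision probability, summing all scattering orders is a geometric series, and Kirchhoff's law links the absorbed fraction to emission. The soil terms of the six-part budget are omitted here, so this is not the paper's full expression.

```latex
% Spectral-invariant absorption of a canopy with interceptance i_0, leaf
% single-scattering albedo \omega, and re-collision probability p; soil
% single- and multiple-scattering terms are omitted in this sketch.
\[
A_{\mathrm{canopy}}
  = i_0\,(1-\omega)\sum_{k=0}^{\infty}(p\,\omega)^{k}
  = \frac{i_0\,(1-\omega)}{1-p\,\omega},
\qquad
\varepsilon_{\mathrm{canopy}} \approx A_{\mathrm{canopy}}.
\]
```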
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Chaopeng; Fang, Kuai; Ludwig, Noel
The DOE and BLM identified 285,000 acres of desert land in the Chuckwalla valley in the western U.S. for solar energy development. In addition to several approved solar projects, a pumped storage project was recently proposed to pump nearly 8,000 acre-ft/yr of groundwater to store and stabilize solar energy output. This study aims at providing estimates of the amount of naturally-occurring recharge and of the impact of the pumping on the water table. To better constrain the locations and intensity of natural recharge, this study employs an integrated, physically-based hydrologic model, PAWS+CLM, to calculate recharge. Then, the simulated recharge is used in a parameter estimation package to calibrate the spatially-distributed K field. This design incorporates all available observational data, including soil moisture monitoring stations, groundwater head, and estimates of groundwater conductivity, to constrain the modeling. To address the uncertainty of the soil parameters, an ensemble of simulations is conducted, and the resulting recharges are either rejected or accepted based on the calibrated groundwater head and the local variation of the K field. The results indicate that the natural total inflow to the study domain is between 7,107 and 12,772 afy. During the initial-fill phase of the pumped storage project, the total outflow exceeds the upper bound estimate of the inflow. If the initial fill is annualized over 20 years, the average pumping is more than the lower bound of inflows. The results indicate that after adding the pumped storage project, the system will be nearing, if not exceeding, its maximum renewable pumping capacity. The accepted recharges lead to a drawdown range of 24 to 45 ft for an assumed specific yield of 0.05. However, the drawdown is sensitive to this parameter, and there are insufficient data to adequately constrain it.
Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor
2014-01-01
Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have focused only on the neighbor discovery problem, while only a few provide integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not been fully evaluated yet. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277
Numerical study of viscous dissipation during single drop impact on wetted surfaces
NASA Astrophysics Data System (ADS)
An, Yi; Yang, Shihao; Liu, Qingquan
2017-11-01
The splashing crown formed by the impact of a drop on a liquid film has been studied extensively since Yarin and Weiss (JFM 1995). The motion of the crown base is believed to be kinematic, which results in the equation R = (2/3H)^{1/4}(T − T_0)^{1/2}. This equation is believed to overestimate the crown size by about 15%, while Trujillo and Lee (PoF 2001) find the influence of Re not notable. Considering the dissipation in the initial stage of the impact, Gao and Li (PRE, 2015) obtained a well-validated equation. However, how to estimate the dissipation is still worth some detailed discussion. We carried out a series of VOF simulations with a special focus on the influence of viscosity. The simulation is based on the Basilisk code to utilize adaptive mesh refinement. We found that the role of dissipation could be divided into three stages. When T
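Reading the kinematic law with the grouping 2/(3H), which is one plausible interpretation of the formula as quoted, it can be evaluated directly; H and T_0 below are illustrative values.

```python
import numpy as np

# Dimensionless kinematic crown-base radius in the Yarin & Weiss form as
# quoted above, read here as (2/(3H))^(1/4) * (T - T0)^(1/2).
# H (dimensionless film thickness) and T0 are illustrative assumptions.
def crown_radius(T, H=0.2, T0=0.0):
    return (2.0 / (3.0 * H)) ** 0.25 * np.sqrt(T - T0)

for T in (0.5, 1.0, 2.0):
    print(T, "->", round(float(crown_radius(T)), 3))
```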
Using the knowledge-to-action framework to guide the timing of dialysis initiation.
Sood, Manish M; Manns, Braden; Nesrallah, Gihad
2014-05-01
The optimal time at which to initiate chronic dialysis remains unknown. Using a contemporary knowledge translation approach (the knowledge-to-action framework), a pan-Canadian collaboration (CANN-NET) set out to study the scope of the problem, then develop and disseminate evidence-based guidelines addressing the timing of dialysis initiation. The purpose of this review is to summarize the key findings and describe the planned Canadian knowledge translation strategy for improving knowledge and practices pertaining to the timing of dialysis initiation. New research has provided considerable insights regarding the initiation of dialysis. A Canadian cohort study identified significant variation in the estimated glomerular filtration rate level at dialysis initiation, and a survey of providers identified related knowledge gaps that might be amenable to knowledge translation interventions. A recent knowledge synthesis/guideline concluded that early dialysis initiation is costly and provides no measurable clinical benefits. A systematic knowledge translation intervention including a multifaceted approach may aid in reducing variation in practice and improving the quality of care. Utilizing the knowledge-to-action framework, we identified practice variation and key barriers to the optimal timing of dialysis initiation that may be amenable to knowledge translation strategies.
Sayed, Mohammed E; Porwal, Amit; Al-Faraj, Nida A; Bajonaid, Amal M; Sumayli, Hassan A
2017-07-01
Several techniques and methods have been proposed to estimate the anterior teeth dimensions in edentulous patients. However, this procedure remains challenging, especially when preextraction records are not available. Therefore, the purpose of this study is to evaluate some of the existing extraoral and intraoral methods for estimation of anterior tooth dimensions and to propose a novel method for estimation of central incisor width (CIW) and length (CIL) for the Saudi population. Extraoral and intraoral measurements were recorded for a total of 236 subjects. Descriptive statistical analysis and Pearson's correlation tests were performed. Association was evaluated between combined anterior teeth width (CATW) and interalar width (IAW), intercommisural width (ICoW), and interhamular notch distance (IHND) plus 10 mm. Evaluation of the linear relationship between central incisor length (CIL) and facial height (FH), and between CIW and bizygomatic width (BZW), was also performed. Significant correlation was found between the CATW and ICoW and IAW (p-values < 0.0001); however, no correlation was found relative to IHND plus 10 mm (p-value = 0.456). Further, no correlation was found between the FH and right CIL or between the BZW and right CIW (p-values = 0.255 and 0.822). The means of CIL, CIW, incisive papilla-fovea palatina (IP-FP) distance, and IHND were used to estimate the central incisor dimensions: CIL = FP-IP distance/4.45, CIW = IHND/4.49. It was concluded that the ICoW and IAW measurements are the only predictable methods to estimate the initial reference value for CATW. A novel intraoral approach was hypothesized for estimation of CIW and CIL for the given population. Based on the results of the study, ICoW and IAW measurements can be useful in estimating the initial reference value for CATW, while the proposed novel approach using specific palatal dimensions can be used for estimating the width and length of central incisors. These methods are crucial to obtaining esthetic treatment results within the parameters of the given population.
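A worked example of the proposed estimators; the input measurements below are hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical palatal measurements (mm)
ip_fp_mm = 40.0   # incisive papilla to fovea palatina distance
ihnd_mm = 39.5    # interhamular notch distance

cil = ip_fp_mm / 4.45   # central incisor length estimate
ciw = ihnd_mm / 4.49    # central incisor width estimate
print(f"CIL = {cil:.1f} mm, CIW = {ciw:.1f} mm")  # ~9.0 mm and ~8.8 mm
```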
Feghali, Rosario; Mitiche, Amar
2004-11-01
The purpose of this study is to investigate a method of tracking moving objects with a moving camera. The method simultaneously estimates the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.
Automatic portion estimation and visual refinement in mobile dietary assessment
Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.
2011-01-01
As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These “portion volumes” utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach. PMID:22242198
Automatic portion estimation and visual refinement in mobile dietary assessment
NASA Astrophysics Data System (ADS)
Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.
2010-01-01
As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These "portion volumes" utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach.
ERIC Educational Resources Information Center
McGillivray, Jane A.; Kershaw, Mavis M.
2013-01-01
It has been estimated that people with ID experience the same and possibly higher levels of depression than the general population. Referral to a General Medical Practitioner (GP) for primary care is recommended practice for people with depression and cognitive behavioural (CB) therapy is now an accepted evidence based intervention. A growing body…
NASA Astrophysics Data System (ADS)
Wasserthal, Christian; Engel, Karin; Rink, Karsten; Brechmann, André
We propose an automatic procedure for the correct segmentation of grey and white matter in MR data sets of the human brain. Our method exploits general anatomical knowledge for the initial segmentation and for the subsequent refinement of the estimation of the cortical grey matter. Our results are comparable to manual segmentations.
Single bubble of an electronegative gas in transformer oil in the presence of an electric field
NASA Astrophysics Data System (ADS)
Gadzhiev, M. Kh.; Tyuftyaev, A. S.; Il'ichev, M. V.
2017-10-01
The influence of the electric field on a single air bubble in transformer oil has been studied. It has been shown that, depending on its size, the bubble may initiate breakdown. The sizes of air and sulfur hexafluoride bubbles at which breakdown will not be observed have been estimated based on the condition for the avalanche-to-streamer transition.
High Temperature Chemistry in the Columbia Accident Investigation
NASA Technical Reports Server (NTRS)
Jacobson, Nathan; Opila, Elizabeth; Tallant, David; Simpson, Regina
2004-01-01
Initial estimates on the temperature and conditions of the breach in Columbia's wing focused on analyses of the slag deposits. These deposits are complex mixtures of the reinforced carbon/carbon (RCC) constituents, insulation material, and wing structural materials. However, it was possible to clearly discern melted/solidified Cerachrome(R) insulation, indicating the temperatures had exceeded 1760 C. Current research focuses on the carbon/carbon in the path from the breach. Carbon morphology indicates heavy oxidation and erosion. Raman spectroscopy yielded further temperature estimates. A technique developed at Sandia National Laboratories is based on crystallite size in carbon chars. Lower temperatures yield nanocrystalline graphite, whereas higher temperatures yield larger graphite crystals. By comparison with standards, the temperatures on the recovered RCC fragments were estimated to have been greater than 2700 C.
Autonomous optical navigation using nanosatellite-class instruments: a Mars approach case study
NASA Astrophysics Data System (ADS)
Enright, John; Jovanovic, Ilija; Kazemi, Laila; Zhang, Harry; Dzamba, Tom
2018-02-01
This paper examines the effectiveness of small star trackers for orbital estimation. Autonomous optical navigation has been used for some time to provide local estimates of orbital parameters during close approach to celestial bodies. These techniques have been used extensively on spacecraft dating back to the Voyager missions, but often rely on long exposures and large instrument apertures. Using a hyperbolic Mars approach as a reference mission, we present an EKF-based navigation filter suitable for nanosatellite missions. Observations of Mars and its moons allow the estimator to correct initial errors in both position and velocity. Our results show that nanosatellite-class star trackers can produce good quality navigation solutions with low position (<300 m) and velocity (<0.15 m/s) errors as the spacecraft approaches periapse.
Subirats, Xavier; Bosch, Elisabeth; Rosés, Martí
2007-01-05
The use of methanol-aqueous buffer mobile phases in HPLC is a common choice when performing chromatographic separations of ionisable analytes. The addition of methanol to the aqueous buffer to prepare such a mobile phase changes the buffer capacity and the pH of the solution. In the present work, the variation of these buffer properties is studied for acetic acid-acetate, phosphoric acid-dihydrogenphosphate-hydrogenphosphate, citric acid-dihydrogencitrate-hydrogencitrate-citrate, and ammonium-ammonia buffers. It is well established that the pH change of the buffers depends on the initial concentration and aqueous pH of the buffer, on the percentage of methanol added, and on the particular buffer used. The proposed equations allow the pH estimation of methanol-water buffered mobile phases up to 80% in volume of organic modifier from the initial aqueous buffer pH and buffer concentration (before adding methanol) between 0.001 and 0.01 mol L^-1. From both the estimated pH values of the mobile phase and the estimated pKa of the ionisable analytes, it is possible to predict the degree of ionisation of the analytes and, therefore, to interpret acid-base analyte behaviour in a particular methanol-water buffered mobile phase.
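As a rough illustration of the final prediction step, the degree of ionisation follows from the Henderson-Hasselbalch relation once the mobile-phase pH and analyte pKa are estimated; this sketch assumes both values are expressed on the same pH scale for the methanol-water medium:

```python
def fraction_ionized_acid(ph: float, pka: float) -> float:
    """Ionized fraction [A-]/([HA]+[A-]) of a monoprotic acid."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

def fraction_protonated_base(ph: float, pka: float) -> float:
    """Protonated (ionic) fraction [BH+]/([B]+[BH+]) of a base, e.g. ammonium."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# Acetic acid (aqueous pKa ~4.76) in a mobile phase whose estimated pH is 6.0:
print(fraction_ionized_acid(6.0, 4.76))  # ~0.95, i.e. largely ionized
```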
Mangen, Marie-Josée J.; Plass, Dietrich; Havelaar, Arie H.; Gibbons, Cheryl L.; Cassini, Alessandro; Mühlberger, Nikolai; van Lier, Alies; Haagsma, Juanita A.; Brooke, R. John; Lai, Taavi; de Waure, Chiara; Kramarz, Piotr; Kretzschmar, Mirjam E. E.
2013-01-01
In 2009, the European Centre for Disease Prevention and Control initiated the ‘Burden of Communicable Diseases in Europe (BCoDE)’ project to generate evidence-based and comparable burden-of-disease estimates of infectious diseases in Europe. The burden-of-disease metric used was the Disability-Adjusted Life Year (DALY), composed of years of life lost due to premature death (YLL) and due to disability (YLD). To better represent infectious diseases, a pathogen-based approach was used linking incident cases to sequelae through outcome trees. Health outcomes were included if an evidence-based causal relationship between infection and outcome was established. Life expectancy and disability weights were taken from the Global Burden of Disease Study and alternative studies. Disease progression parameters were based on literature. Country-specific incidence was based on surveillance data corrected for underestimation. Non-typhoidal Salmonella spp. and Campylobacter spp. were used for illustration. Using the incidence- and pathogen-based DALY approach the total burden for Salmonella spp. and Campylobacter spp. was estimated at 730 DALYs and at 1,780 DALYs per year in the Netherlands (average of 2005–2007). Sequelae accounted for 56% and 82% of the total burden of Salmonella spp. and Campylobacter spp., respectively. The incidence- and pathogen-based DALY methodology allows in the case of infectious diseases a more comprehensive calculation of the disease burden as subsequent sequelae are fully taken into account. Not considering subsequent sequelae would strongly underestimate the burden of infectious diseases. Estimates can be used to support prioritisation and comparison of infectious diseases and other health conditions, both within a country and between countries. PMID:24278167
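A stripped-down sketch of the incidence- and pathogen-based DALY arithmetic (no discounting or age weighting, and all parameter values hypothetical rather than taken from the study) might look like this:

```python
def dalys(cases: float, case_fatality: float, life_expectancy: float,
          outcomes: list[tuple[float, float, float]]) -> float:
    """DALY = YLL + YLD for one pathogen-year. 'outcomes' is a flat outcome
    tree: (probability per case, duration in years, disability weight)."""
    yll = cases * case_fatality * life_expectancy
    yld = cases * sum(p * dur * dw for p, dur, dw in outcomes)
    return yll + yld

# Hypothetical pathogen: 10,000 incident cases, 0.1% case fatality,
# 30 residual life-years per death, acute illness plus one chronic sequela.
print(dalys(10_000, 0.001, 30.0,
            [(1.0, 7 / 365, 0.10),   # acute episode in every case
             (0.02, 5.0, 0.21)]))    # sequela in 2% of cases
```

Counting the sequela term is what "fully taking subsequent sequelae into account" amounts to; dropping it visibly shrinks the total.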
NASA Technical Reports Server (NTRS)
Woods, Andrew W.; Self, Stephen
1992-01-01
Satellite images of large volcanic explosions reveal that the tops of volcanic eruptions columns are much cooler than the surrounding atmosphere. It is proposed that this effect occurs whenever a mixture of hot volcanic ash and entrained air ascends sufficiently high into a stably stratified atmosphere. Although the mixture is initially very hot, it expands and cools as the ambient pressure decreases. It is shown that cloud-top undercoolings in excess of 20 C may develop in clouds that penetrate the stratosphere, and it is predicted that, for a given cloud-top temperature, variations in the initial temperature of 100-200 C may correspond to variations in the column height of 5-10 km. It is deduced that the present practice of converting satellite-based measurements of the temperature at the top of volcanic eruptions columns to estimates of the column height will produce rather inaccurate results and should therefore be discontinued.
NASA Astrophysics Data System (ADS)
SchläPfer, Felix; Witzig, Pieter-Jan
2006-12-01
In 1997, about 140,000 citizens in 388 voting districts in the Swiss canton of Bern passed a ballot initiative to allocate about 3 million Swiss Francs annually to a canton-wide river restoration program. Using the municipal voting returns and a detailed georeferenced data set on the ecomorphological status of the rivers, we estimate models of voter support in relation to local river ecomorphology, population density, mean income, cultural background, and recent flood damage. Support of the initiative increased with increasing population density and tended to increase with increasing mean income, in spite of progressive taxation. Furthermore, we found evidence that public support increased with decreasing "naturalness" of local rivers. The model estimates may be cautiously used to predict the public acceptance of similar restoration programs in comparable regions. Moreover, the voting-based insights into the distribution of river restoration benefits provide a useful starting point for debates about appropriate financing schemes.
Park, Sun-Kyeong; Park, Seung-Hoo; Lee, Min-Young; Park, Ji-Hyun; Jeong, Jae-Hong; Lee, Eui-Kyung
2016-11-01
In South Korea, the price of biologics has been decreasing owing to patent expiration and the availability of biosimilars. This study evaluated the cost-effectiveness of a treatment strategy initiated with etanercept (ETN) compared with leflunomide (LFN) after a 30% reduction in the medication cost of ETN in patients with active rheumatoid arthritis (RA) with an inadequate response to methotrexate (MTX-IR). A cohort-based Markov model was designed to evaluate the lifetime cost-effectiveness of a treatment sequence initiated with ETN (A) compared with 2 sequences initiated with LFN: LFN-ETN sequence (B) and LFN sequence (C). Patients transited through the treatment sequences, which consisted of sequential biologics and palliative therapy, based on American College of Rheumatology (ACR) responses and the probability of discontinuation. A systematic literature review and a network meta-analysis were conducted to estimate ACR responses to ETN and LFN. Utility was estimated by mapping an equation for converting the Health Assessment Questionnaire-Disability Index score to utility weight. The costs comprised medications, outpatient visits, administration, dispensing, monitoring, palliative therapy, and treatment for adverse events. A subanalysis was conducted to identify the influence of the ETN price reduction compared with the unreduced price, and sensitivity analyses explored the uncertainty of model parameters and assumptions. The ETN sequence (A) was associated with higher costs and a gain in quality-adjusted life years (QALYs) compared with both sequences initiated with LFN (B, C) throughout the lifetime of patients with RA and MTX-IR. The incremental cost-effectiveness ratio (ICER) for strategy A versus B was ₩13,965,825 (US$11,726) per QALY and that for strategy A versus C was ₩9,587,983 (US$8050) per QALY. The results indicated that strategy A was cost-effective based on the commonly cited ICER threshold of ₩20,000,000 (US$16,793) per QALY in South Korea. The robustness of the base-case analysis was confirmed using sensitivity analyses. When the unreduced medication cost of ETN was applied in a subanalysis, the ICER for strategy A versus B was ₩20,909,572 (US$17,556) per QALY and that for strategy A versus C was ₩22,334,713 (US$18,753) per QALY. This study indicated that a treatment strategy initiated with ETN was more cost-effective in patients with active RA and MTX-IR than 2 sequences initiated with LFN. The results also indicate that the reduced price of ETN affected the cost-effectiveness associated with its earlier use. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.
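The decision rule reported here reduces to a simple incremental cost-effectiveness ratio comparison; a minimal sketch with made-up lifetime costs and QALYs (not the study's figures):

```python
def icer(cost_new: float, qaly_new: float, cost_ref: float, qaly_ref: float) -> float:
    """Incremental cost-effectiveness ratio, in currency units per QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical lifetime totals for an ETN-first vs. an LFN-first sequence
ratio = icer(85_000_000, 9.2, 71_000_000, 8.2)   # KRW, QALYs
threshold = 20_000_000  # commonly cited South Korean willingness-to-pay per QALY
print(f"{ratio:,.0f} KRW/QALY, cost-effective: {ratio <= threshold}")
```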
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Jarman, Kenneth D.; Xu, Zhijie
This report describes our initial research to quantify uncertainties in the identification and characterization of possible attack states in a network. As a result, we should be able to estimate the current state in which the network is operating, based on a wide variety of network data, and attach a defensible measure of confidence to these state estimates. The output of this research will be new uncertainty quantification (UQ) methods to help develop a process for model development and apply UQ to characterize attacks/adversaries, create an understanding of the degree to which methods scale to "big" data, and offer methods for addressing model approaches with regard to validation and accuracy.
The Shannon entropy as a measure of diffusion in multidimensional dynamical systems
NASA Astrophysics Data System (ADS)
Giordano, C. M.; Cincotta, P. M.
2018-05-01
In the present work, we introduce two new estimators of chaotic diffusion based on the Shannon entropy. Using theoretical, heuristic and numerical arguments, we show that the entropy, S, provides a measure of the diffusion extent of a given small initial ensemble of orbits, while an indicator related to the time derivative of the entropy, S', estimates the diffusion rate. We show that in the limiting case of near ergodicity, after an appropriate normalization, S' coincides with the standard homogeneous diffusion coefficient. The very first application of this formulation to a 4D symplectic map and to the Arnold Hamiltonian reveals very successful and encouraging results.
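A toy version of the entropy-based estimator, using a Brownian ensemble as a stand-in for the authors' chaotic orbits and a fixed partition of the action plane (the normalization that recovers the homogeneous diffusion coefficient is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Brownian ensemble standing in for a small initial ensemble of orbits
n, steps, sigma = 2000, 200, 1e-3
pos = np.zeros((n, 2))
edges = np.linspace(-0.5, 0.5, 101)  # fixed partition of the action plane

entropy = []
for _ in range(steps):
    pos += rng.normal(0.0, sigma, size=pos.shape)
    counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges])
    p = counts[counts > 0] / n
    entropy.append(-np.sum(p * np.log(p)))  # Shannon entropy S(t)

S = np.array(entropy)
# S tracks the extent of the spread; the slope S' over the late interval
# tracks the diffusion rate.
rate = np.polyfit(np.arange(steps // 2, steps), S[steps // 2:], 1)[0]
print(f"final S = {S[-1]:.3f}, dS/dt ~ {rate:.2e}")
```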
Analysis of Coherent Phonon Signals by Sparsity-promoting Dynamic Mode Decomposition
NASA Astrophysics Data System (ADS)
Murata, Shin; Aihara, Shingo; Tokuda, Satoru; Iwamitsu, Kazunori; Mizoguchi, Kohji; Akai, Ichiro; Okada, Masato
2018-05-01
We propose a method to decompose normal modes in a coherent phonon (CP) signal by sparsity-promoting dynamic mode decomposition. While CP signals can be modeled as the sum of a finite number of damped oscillators, conventional methods such as the Fourier transform adopt continuous bases in the frequency domain. Thus, frequency uncertainty appears, and the initial phase is difficult to estimate. Moreover, measurement artifacts are imposed on the CP signal and deform the Fourier spectrum. In contrast, the proposed method can separate the signal from the artifact precisely and can successfully estimate physical properties of the normal modes.
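For orientation, the core of plain (exact) DMD — without the sparsity-promoting amplitude-selection step the paper adds — can be sketched as follows; the damped-cosine test signal and the delay-embedding depth are illustrative choices, not the paper's setup:

```python
import numpy as np

def exact_dmd(X: np.ndarray, dt: float, r: int):
    """Exact DMD: returns continuous-time eigenvalues (damping + i*2*pi*f)
    and the corresponding modes of the snapshot matrix X (channels x time)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    lam, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return np.log(lam) / dt, modes

# Synthetic CP-like trace: two damped cosines, recovered via delay embedding
dt = 0.01
t = np.arange(0.0, 10.0, dt)
y = (np.exp(-0.3 * t) * np.cos(2 * np.pi * 1.5 * t)
     + 0.5 * np.exp(-0.1 * t) * np.cos(2 * np.pi * 3.2 * t))
hankel = np.array([y[i:i + y.size - 16] for i in range(16)])
omega, _ = exact_dmd(hankel, dt, r=4)
print(np.sort(np.abs(omega.imag) / (2 * np.pi)))  # ~ [1.5, 1.5, 3.2, 3.2] Hz
```

The eigenvalues' real parts return the damping rates of each oscillator, which is what makes DMD a natural fit for finite sums of damped modes.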
System and method for quench and over-current protection of superconductor
Huang, Xianrui; Laskaris, Evangelos Trifon; Sivasubramaniam, Kiruba Haran; Bray, James William; Ryan, David Thomas; Fogarty, James Michael; Steinbach, Albert Eugene
2005-05-31
A system and method for protecting a superconductor. The system may comprise a current sensor operable to detect a current flowing through the superconductor. The system may comprise a coolant temperature sensor operable to detect the temperature of a cryogenic coolant used to cool the superconductor to a superconductive state. A control circuit is operable to estimate the superconductor temperature based on the current flow and the coolant temperature. The system may also be operable to compare the estimated superconductor temperature to at least one threshold temperature and to initiate a corrective action when the superconductor temperature exceeds the at least one threshold temperature.
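A schematic of the protection logic described above, with a placeholder (uncalibrated) heating model; a real controller would use a model fitted to the conductor and cooling system:

```python
def quench_check(current_a: float, coolant_temp_k: float,
                 threshold_k: float = 77.0) -> bool:
    """Estimate conductor temperature from measured current and coolant
    temperature, then decide whether to initiate corrective action."""
    k_heating = 1e-4  # hypothetical K per A^2; a real system needs calibration
    estimated_temp_k = coolant_temp_k + k_heating * current_a ** 2
    return estimated_temp_k > threshold_k  # True -> trip / ramp down

for amps in (100, 500, 900):  # ramping current at a fixed 65 K coolant reading
    print(amps, quench_check(amps, 65.0))
```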
Ridderstråle, Martin
2017-01-01
Background: Depending on available resources, competencies, and pedagogic preference, initiation of insulin pump therapy can be performed on either an individual or a group basis. Here we compared the two models with respect to resources used. Methods: Time-driven activity-based costing (TDABC) was used to compare initiating insulin pump treatment in groups (GT) to individual treatment (IT). Activities and cost drivers were identified, timed, or estimated at location. Medical quality and patient satisfaction were assumed to be noninferior and were not measured. Results: GT was about 30% less time-consuming and 17% less cost driving per patient and activity compared to IT. As a batch driver (16 patients in one group), GT produced an upward jigsaw-shaped cumulative cost curve compared to the incremental increase incurred by IT. Taking the alternate cost for those not attending into account, and realizing the cost of opportunity gained, suggested that GT was cost neutral already when 5 of 16 patients attended, and that a second group could be initiated at no additional cost as the attendance rate reached 15:1. Conclusions: We found TDABC to be effective in comparing treatment alternatives, improving cost control and decision making. Everything else being equal, if the setup is available, our data suggest that initiating insulin pump treatment in groups is far more cost effective than on an individual basis and that TDABC may be used to find the balance point. PMID:28366085
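The batch-driver arithmetic behind the break-even claim can be sketched as follows; the per-minute rate and activity timings are invented stand-ins chosen so that break-even lands at 5 attendees, not the study's measured values:

```python
def tdabc_cost(minutes_per_activity: dict[str, float], cost_per_minute: float) -> float:
    """Time-driven ABC: cost = capacity cost rate x total time consumed."""
    return cost_per_minute * sum(minutes_per_activity.values())

rate = 8.0  # hypothetical blended staff cost per minute
individual = tdabc_cost({"preparation": 30, "education": 120, "follow-up": 30}, rate)
group_total = tdabc_cost({"preparation": 120, "education": 480, "follow-up": 300}, rate)

# Group initiation is a batch driver: one fixed block of staff time is shared
# by whoever attends, so the per-patient cost falls as attendance rises.
for attending in (4, 5, 16):
    per_patient = group_total / attending
    print(attending, per_patient, per_patient <= individual)
```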
Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.
Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David
2008-04-01
A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
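For reference, the bias-corrected Lincoln-Petersen (Chapman) estimator that underlies the two-session design can be sketched as follows, with hypothetical genotype counts in place of the study's data:

```python
def chapman_estimate(n1: int, n2: int, m: int) -> tuple[float, float]:
    """Bias-corrected Lincoln-Petersen (Chapman) estimator and its variance.
    n1: animals detected in session 1 (hair snags); n2: in session 2
    (rub trees); m: detected in both sessions."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    return n_hat, var

n_hat, var = chapman_estimate(120, 90, 30)  # hypothetical bear genotype counts
print(f"N ~ {n_hat:.0f} +/- {1.96 * var ** 0.5:.0f}")
```

Heterogeneous capture probabilities bias exactly this kind of estimate, which is why the paper moves to Huggins-Pledger closed-population models that can absorb it.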
Lambert, Shea M; Reeder, Tod W; Wiens, John J
2015-01-01
Simulation studies suggest that coalescent-based species-tree methods are generally more accurate than concatenated analyses. However, these species-tree methods remain impractical for many large datasets. Thus, a critical but unresolved issue is when and why concatenated and coalescent species-tree estimates will differ. We predict such differences for branches in concatenated trees that are short, weakly supported, and have conflicting gene trees. We test these predictions in Scincidae, the largest lizard family, with data from 10 nuclear genes for 17 ingroup taxa and 44 genes for 12 taxa. We support our initial predictions, and suggest that simply considering uncertainty in concatenated trees may sometimes encompass the differences between these methods. We also found that relaxed-clock concatenated trees can be surprisingly similar to the species-tree estimate. Remarkably, the coalescent species-tree estimates had slightly lower support values when based on many more genes (44 vs. 10) and a small (∼30%) reduction in taxon sampling. Thus, taxon sampling may be more important than gene sampling when applying species-tree methods to deep phylogenetic questions. Finally, our coalescent species-tree estimates tentatively support division of Scincidae into three monophyletic subfamilies, a result otherwise found only in concatenated analyses with extensive species sampling. Copyright © 2014 Elsevier Inc. All rights reserved.
The Cost of Blindness in the Republic of Ireland 2010-2020.
Green, D; Ducorroy, G; McElnea, E; Naughton, A; Skelly, A; O'Neill, C; Kenny, D; Keegan, D
2016-01-01
Aims. To estimate the prevalence of blindness in the Republic of Ireland and the associated financial and total economic cost between 2010 and 2020. Methods. Estimates for the prevalence of blindness in the Republic of Ireland were based on blindness registration data from the National Council for the Blind of Ireland. Estimates for the financial and total economic cost of blindness were based on the sum of direct and indirect healthcare and nonhealthcare costs. Results. We estimate that there were 12,995 blind individuals in Ireland in 2010 and in 2020 there will be 17,997. We estimate that the financial and total economic costs of blindness in the Republic of Ireland in 2010 were €276.6 million and €809 million, respectively, and will increase in 2020 to €367 million and €1.1 billion, respectively. Conclusions. Here, ninety-eight percent of the cost of blindness is borne by the Departments of Social Protection and Finance and not by the Department of Health as might initially be expected. Cost of illness studies should play a role in public policy making as they help to quantify the indirect or "hidden" costs of disability and so help to reveal the true cost of illness.
The Cost of Blindness in the Republic of Ireland 2010–2020
Green, D.; Ducorroy, G.; McElnea, E.; Naughton, A.; Skelly, A.; O'Neill, C.; Kenny, D.; Keegan, D.
2016-01-01
Aims. To estimate the prevalence of blindness in the Republic of Ireland and the associated financial and total economic cost between 2010 and 2020. Methods. Estimates for the prevalence of blindness in the Republic of Ireland were based on blindness registration data from the National Council for the Blind of Ireland. Estimates for the financial and total economic cost of blindness were based on the sum of direct and indirect healthcare and nonhealthcare costs. Results. We estimate that there were 12,995 blind individuals in Ireland in 2010 and in 2020 there will be 17,997. We estimate that the financial and total economic costs of blindness in the Republic of Ireland in 2010 were €276.6 million and €809 million, respectively, and will increase in 2020 to €367 million and €1.1 billion, respectively. Conclusions. Here, ninety-eight percent of the cost of blindness is borne by the Departments of Social Protection and Finance and not by the Department of Health as might initially be expected. Cost of illness studies should play a role in public policy making as they help to quantify the indirect or “hidden” costs of disability and so help to reveal the true cost of illness. PMID:26981276
NASA Technical Reports Server (NTRS)
Achuthavarier, Deepthi; Koster, Randal; Marshak, Jelena; Schubert, Siegfried; Molod, Andrea
2018-01-01
In this study, we examine the prediction skill and predictability of the Madden Julian Oscillation (MJO) in a recent version of the NASA GEOS-5 atmosphere-ocean coupled model run at 1/2 degree horizontal resolution. The results are based on a suite of hindcasts produced as part of the NOAA SubX project, consisting of seven ensemble members initialized every 5 days for the period 1999-2015. The atmospheric initial conditions were taken from the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), and the ocean and the sea ice were taken from a GMAO ocean analysis. The land states were initialized from the MERRA-2 land output, which is based on observation-corrected precipitation fields. We investigated the MJO prediction skill in terms of the bivariate correlation coefficient for the real-time multivariate MJO (RMM) indices. The correlation coefficient stays at or above 0.5 out to forecast lead times of 26-36 days, with a pronounced increase in skill for forecasts initialized from phase 3, when the MJO convective anomaly is located in the central tropical Indian Ocean. A corresponding estimate of the upper limit of the predictability is calculated by considering a single ensemble member as the truth and verifying the ensemble mean of the remaining members against that. The predictability estimates fall between 35 and 37 days (taken as the forecast lead when the correlation reaches 0.5) and are rather insensitive to the initial MJO phase. The model shows slightly higher skill when the initial conditions contain strong MJO events compared to weak events, although the difference in skill is evident only from lead 1 to 20. Similar to other models, the RMM-index-based skill arises mostly from the circulation components of the index. The skill of the convective component of the index drops to 0.5 by day 20 as opposed to day 30 for circulation fields. The propagation of the MJO anomalies over the Maritime Continent does not appear problematic in the GEOS-5 hindcasts, implying that the Maritime Continent predictability barrier may not be a major concern in this model. Finally, the MJO prediction skill in this version of GEOS-5 is superior to that of the current seasonal prediction system at the GMAO; this could be partly attributed to a slightly better representation of the MJO in the free running version of this model and partly to the improved atmospheric initialization from MERRA-2.
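The bivariate RMM correlation used as the skill metric here is commonly defined as the normalized inner product of observed and forecast (RMM1, RMM2) pairs at a fixed lead; a sketch with synthetic data:

```python
import numpy as np

def bivariate_correlation(obs: np.ndarray, fcst: np.ndarray) -> float:
    """Bivariate correlation between observed and forecast (RMM1, RMM2)
    vectors at one lead time; obs and fcst have shape (n_days, 2)."""
    num = np.sum(obs[:, 0] * fcst[:, 0] + obs[:, 1] * fcst[:, 1])
    den = np.sqrt(np.sum(obs ** 2) * np.sum(fcst ** 2))
    return float(num / den)

rng = np.random.default_rng(7)
obs = rng.normal(size=(500, 2))
fcst = 0.8 * obs + 0.6 * rng.normal(size=(500, 2))  # hypothetical skillful forecast
print(bivariate_correlation(obs, fcst))  # ~0.8; 0.5 is the usual skill cutoff
```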
Parent-child communication and marijuana initiation: evidence using discrete-time survival analysis.
Nonnemaker, James M; Silber-Ashley, Olivia; Farrelly, Matthew C; Dench, Daniel
2012-12-01
This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impacts youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or - in the case of youth reports of communication - potentially harmful (leading to increased likelihood of marijuana initiation). Copyright © 2012 Elsevier Ltd. All rights reserved.
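Discrete-time survival analysis of this kind is typically fit as a logistic regression on a person-period dataset; the sketch below simulates such data (with an assumed positive coefficient mirroring the direction of the reported finding) rather than using the study's data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Person-period data: ages 12-18, one row per subject-year until initiation.
n, ages = 1000, np.arange(12, 19)
talked = rng.integers(0, 2, n)  # hypothetical communication indicator
rows = []
for i in range(n):
    for a in ages:
        logit = -3.0 + 0.15 * (a - 12) + 0.4 * talked[i]  # assumed true model
        event = rng.random() < 1.0 / (1.0 + np.exp(-logit))
        rows.append((a - 12.0, float(talked[i]), float(event)))
        if event:
            break  # subject leaves the risk set after initiating

data = np.array(rows)
X = sm.add_constant(data[:, :2])           # [1, period, communication]
fit = sm.Logit(data[:, 2], X).fit(disp=0)  # discrete-time hazard (logit) model
print(np.exp(fit.params[2]))               # odds ratio for communication (~1.5)
```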
A variational approach to parameter estimation in ordinary differential equations.
Kaschek, Daniel; Timmer, Jens
2012-08-14
Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
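The conventional parameter-estimation step that the augmented system is ultimately handed to can be illustrated with a toy reaction A -> B, fitting a rate constant and an initial condition by least squares (this is a generic sketch, not the variational augmentation itself):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k):          # toy network: A --k--> B
    return [-k * y[0], k * y[0]]

t_obs = np.linspace(0.0, 5.0, 20)
truth = solve_ivp(rhs, (0, 5), [1.0, 0.0], t_eval=t_obs, args=(0.8,)).y
data = truth + np.random.default_rng(2).normal(0.0, 0.02, truth.shape)

def residuals(theta):
    k, a0 = theta           # rate constant and initial condition of A
    sol = solve_ivp(rhs, (0, 5), [a0, 0.0], t_eval=t_obs, args=(k,))
    return (sol.y - data).ravel()

fit = least_squares(residuals, x0=[0.3, 0.5])
print(fit.x)                # ~ [0.8, 1.0]
```

In the paper's setting, the unconstrained input courses become additional state variables of an augmented system like `rhs`, so the same least-squares machinery estimates courses and parameters together.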
A pose estimation method for unmanned ground vehicles in GPS denied environments
NASA Astrophysics Data System (ADS)
Tamjidi, Amirhossein; Ye, Cang
2012-06-01
This paper presents a pose estimation method based on the 1-Point RANSAC EKF (Extended Kalman Filter) framework. The method fuses the depth data from a LIDAR and the visual data from a monocular camera to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS denied environment. Its estimation framework continuously updates the vehicle's 6D pose state and temporary estimates of the extracted visual features' 3D positions. In contrast to the conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks, the proposed method discards feature estimates from the extended state vector once they are no longer observed for several steps. As a result, the extended state vector always maintains a reasonable size that is suitable for online calculation. The fusion of laser and visual data is performed both in the feature initialization part of the EKF-SLAM process and in the motion prediction stage. A RANSAC pose calculation procedure is devised to produce pose estimates for the motion model. The proposed method has been successfully tested on the Ford campus's LIDAR-Vision dataset. The results are compared with the ground truth data of the dataset, and the estimation error is ~1.9% of the path length.
New formulations for tsunami runup estimation
NASA Astrophysics Data System (ADS)
Kanoglu, U.; Aydin, B.; Ceylan, N.
2017-12-01
We evaluate shoreline motion and maximum runup in two ways: First, we use linear shallow water-wave equations over a sloping beach and solve them as an initial-boundary value problem, similar to the nonlinear solution of Aydın and Kanoglu (2017, Pure Appl. Geophys., https://doi.org/10.1007/s00024-017-1508-z). The methodology we present here is simple; it involves eigenfunction expansion and, hence, avoids integral transform techniques. We then use several different types of initial wave profiles with and without initial velocity, estimate shoreline properties, and confirm classical runup invariance between linear and nonlinear theories. Second, we use the nonlinear shallow water-wave solution of Kanoglu (2004, J. Fluid Mech. 513, 363-372) to estimate maximum runup. Kanoglu (2004) presented a simple integral solution for the nonlinear shallow water-wave equations using the classical Carrier and Greenspan transformation, and further extended shoreline position and velocity to a simpler integral formulation. In addition, Tinti and Tonini (2005, J. Fluid Mech. 535, 33-64) defined an initial condition in a very convenient form for near-shore events. We use a Tinti and Tonini (2005) type initial condition in Kanoglu's (2004) shoreline integral solution, which leads to further simplified estimates for shoreline position and velocity, i.e., an algebraic relation. We then use this algebraic runup estimate to investigate the effect of earthquake source parameters on maximum runup and present results similar to Sepulveda and Liu (2016, Coast. Eng. 112, 57-68).
NASA Astrophysics Data System (ADS)
Richards, D. A.; Nita, D. C.; Moseley, G. E.; Hoffmann, D. L.; Standish, C. D.; Smart, P. L.; Edwards, R.
2013-12-01
In addition to the many U-Th dated speleothem records (δ18O, δ13C, trace elements) of past environmental change based on continuous phases of calcite growth, discontinuous records also provide important constraints for a wide range of past states of the Earth system, including sea levels, permafrost extent, regional aridity and local cave flooding. Chronological information about human activity or faunal evolution can also be obtained where calcite can be seen to overlie cave art or mammalian bones, for example. Among the important considerations when determining the U-Th age of calcite that nucleates on an exposed surface are (1) initial 230Th/232Th, which can be elevated and variable in some settings, and (2) growth rate and sub-sample density, where extrapolation is required. By way of example, we present sea level data based on U-Th ages of vadose speleothems (i.e. formed above the water table and distinct from 'phreatic' examples) from caves of the circum-Caribbean, where calcite growth was interrupted by rising sea levels and then reinitiated after regression. These estimates demand large corrections, and the derived sea level constraints are compared with alternative data from coral reef terraces, phreatic overgrowths on speleothems or indirect, proxy evidence from oxygen isotopes to constrain rates of ice volume growth. Flowstones from the Bahamas provide useful sea level constraints because they present the longest and most continuous records in such settings (a function of preservation potential in addition to hydrological routing) and also the earliest growth post-emergence after sea level fall. We revisit estimates for sea level regression at the end of MIS 5 at ~80 ka (Richards et al, 1994; Lundberg and Ford, 1994) and make corrections for non-Bulk Earth initial Th contamination (230Th/232Th activity ratio > 10), based on isochron analysis of alternative stalagmites from the same settings and recent high resolution analysis. We also present new U-Th ages for contiguous layers sub-sampled from the first 2-3 mm of flowstone growth after the MIS 5 hiatus, using a sub-sample milling strategy that matches spatial resolution with maximum achievable precision (ThermoFinnigan Neptune MC-ICPMS methodology; 20-30 mg calcite, U ≈ 300 ng/g, 2σ age uncertainty of ±600 a at ~80 ka). Isochron methods are used to estimate the range of initial 230Th/232Th ratio and are compared with elevated values obtained from stalagmites from the same cave (Beck et al, 2001; Hoffmann et al, 2010). A similar strategy is presented for a stalagmite with a much faster axial growth rate, and the data are combined with additional sea level information from the same region to estimate the rate and uncertainty of sea level regression at the MIS stage 5/4 boundary. Elevated initial 230Th/232Th values have also been observed in a stalagmite from 6 m below present sea level in a cenote from the Yucatan, Mexico, where 5 phases of calcite between 10 and 5.5 ka are separated by serpulid worm tubes formed during periods of submergence. The transition between each phase provides constraints on the age and elevation of relative sea level, but the former is hampered by the uncertainty of the high initial 230Th/232Th correction. We consider the possible sources of elevated Th ratios: hydrogenous, colloidal and carbonate or other detrital components.
Schutz, Yves; Byrne, Nuala M.; Dulloo, Abdul; Hills, Andrew P.
2014-01-01
The concept of energy gap(s) is useful for understanding the consequence of a small daily, weekly, or monthly positive energy balance and the inconspicuous shift in weight gain ultimately leading to overweight and obesity. Energy gap is a dynamic concept: an initial positive energy gap incurred via an increase in energy intake (or a decrease in physical activity) is not constant, may fade out with time if the initial conditions are maintained, and depends on the ‘efficiency’ with which the readjustment of the energy imbalance gap occurs with time. The metabolic response to an energy imbalance gap and the magnitude of the energy gap(s) can be estimated by at least two methods, i.e. i) assessment by longitudinal overfeeding studies, imposing (by design) an initial positive energy imbalance gap; ii) retrospective assessment based on epidemiological surveys, whereby the accumulated endogenous energy storage per unit of time is calculated from the change in body weight and body composition. In order to illustrate the difficulty of accurately assessing an energy gap we have used, as an illustrative example, a recent epidemiological study which tracked changes in total energy intake (estimated by gross food availability) and body weight over 3 decades in the US, combined with total energy expenditure prediction from body weight using doubly labelled water data. At the population level, the study attempted to assess the cause of the energy gap purported to be entirely due to increased food intake. Based on an estimate of change in energy intake judged to be more reliable (i.e. in the same study population) and together with calculations of simple energetic indices, our analysis suggests that conclusions about the fundamental causes of obesity development in a population (excess intake vs. low physical activity or both) is clouded by a high level of uncertainty. PMID:24457473
Monte Carlo based NMR simulations of open fractures in porous media
NASA Astrophysics Data System (ADS)
Lukács, Tamás; Balázs, László
2014-05-01
According to the basic principles of nuclear magnetic resonance (NMR), a measurement's free induction decay curve has an exponential characteristic whose parameter is the transversal relaxation time, T2, given by the Bloch equations in the rotating frame. In our simulations we consider the particular case in which the bulk volume is negligible relative to the whole system and vertical movement is essentially zero, so the diffusion term of the T2 relation can be dropped. Such small-aperture situations are common in sedimentary layers, and the smallness of the observed volume allows us to work with just the bulk relaxation and the surface relaxation. The simulation uses the Monte Carlo method: it is based on a random-walk generator which produces the Brownian motion of the particles from uniformly distributed, pseudorandom numbers. An attached differential equation accounts for the bulk relaxation, and the initial and iterated conditions guarantee the simulation's replicability and enable consistent estimates. We generate an initial geometry of a plane segment of known height with a given number of particles; the spatial distribution is set equal in each simulation, and the surface-to-volume ratio remains constant. It follows that, for a given thickness of the open fracture, the surface relaxivity can be determined from the fitted curve's parameter. The calculated T2 distribution curves also indicate the inconstancy in the observed fracture situations. Varying the height of the lamina at a constant diffusion coefficient also produces a characteristic anomaly, and for comparison we have run the simulation with the same initial volume, number of particles and conditions in spherical bulks, whose profiles are clear and easy to understand. The surface relaxation enables us to estimate the interaction between the boundary materials for these two geometrically well-defined bulks; the distribution therefore serves as a basis for estimating porosity and can be used to identify small-grained porous media.
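A compact stand-in for the described simulation — walkers between two plates, bulk decay everywhere, and a fixed per-collision killing probability in place of a calibrated surface relaxivity — might read:

```python
import numpy as np

rng = np.random.default_rng(3)

# Walkers confined between two plates a distance h apart (the open fracture);
# bulk relaxation acts everywhere, and each wall collision kills a walker
# with a fixed probability (a simplified stand-in for surface relaxivity).
h, n, steps, dt = 50e-6, 20000, 2000, 1e-4           # m, walkers, steps, s
D, t2_bulk, p_kill = 2.3e-9, 2.5, 0.01               # m^2/s, s, per collision
step = np.sqrt(2.0 * D * dt)

z = rng.uniform(0.0, h, n)
alive = np.ones(n, dtype=bool)
mag = []
for k in range(steps):
    z[alive] += rng.normal(0.0, step, alive.sum())
    hit = alive & ((z < 0.0) | (z > h))
    z = np.clip(z, 0.0, h)                           # reflect at the walls
    alive &= ~(hit & (rng.random(n) < p_kill))       # surface relaxation sink
    mag.append(alive.mean() * np.exp(-(k + 1) * dt / t2_bulk))

t = np.arange(1, steps + 1) * dt
t2_apparent = -1.0 / np.polyfit(t, np.log(mag), 1)[0]
print(f"apparent T2 ~ {t2_apparent:.2f} s (< bulk T2 of {t2_bulk} s)")
```

The gap between the apparent and bulk T2 is the surface contribution, which for a slab scales with the surface-to-volume ratio 2/h; that is the handle for inverting aperture or relaxivity from the fitted decay.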
Schutz, Yves; Byrne, Nuala M; Dulloo, Abdul; Hills, Andrew P
2014-01-01
The concept of energy gap(s) is useful for understanding the consequence of a small daily, weekly, or monthly positive energy balance and the inconspicuous shift in weight gain ultimately leading to overweight and obesity. Energy gap is a dynamic concept: an initial positive energy gap incurred via an increase in energy intake (or a decrease in physical activity) is not constant, may fade out with time if the initial conditions are maintained, and depends on the 'efficiency' with which the readjustment of the energy imbalance gap occurs with time. The metabolic response to an energy imbalance gap and the magnitude of the energy gap(s) can be estimated by at least two methods, i.e. i) assessment by longitudinal overfeeding studies, imposing (by design) an initial positive energy imbalance gap; ii) retrospective assessment based on epidemiological surveys, whereby the accumulated endogenous energy storage per unit of time is calculated from the change in body weight and body composition. In order to illustrate the difficulty of accurately assessing an energy gap we have used, as an illustrative example, a recent epidemiological study which tracked changes in total energy intake (estimated by gross food availability) and body weight over 3 decades in the US, combined with total energy expenditure prediction from body weight using doubly labelled water data. At the population level, the study attempted to assess the cause of the energy gap purported to be entirely due to increased food intake. Based on an estimate of change in energy intake judged to be more reliable (i.e. in the same study population) and together with calculations of simple energetic indices, our analysis suggests that conclusions about the fundamental causes of obesity development in a population (excess intake vs. low physical activity or both) is clouded by a high level of uncertainty. © 2014 S. Karger GmbH, Freiburg.
NASA Technical Reports Server (NTRS)
Mcnider, Richard T.; Song, Aaron; Casey, Dan; Crosson, William; Wetzel, Peter
1993-01-01
The current NWS ground based network is not sufficient to capture the dynamic or thermodynamic structure leading to the initiation and organization of air mass moist convective events. Under this investigation we intend to use boundary layer mesoscale models (McNider and Pielke, 1981) to examine the dynamic triggering of convection due to topography and surface thermal contrasts. VAS and MAN's estimates of moisture will be coupled with the dynamic solution to provide an estimate of the total convective potential. Visible GOES images will be used to specify incoming insolation, which may lead to surface thermal contrasts, and IR skin temperatures will be used to estimate surface moisture (via the surface thermal inertia) (Wetzel and Chang, 1988), which can also induce surface thermal contrasts. We will use the SPACE-COHMEX data base to evaluate the ability of the joint mesoscale model satellite products to show skill in predicting the development of air mass convection. We will develop images of model vertical velocity and satellite thermodynamic measures to derive images of predicted convective potential. We will then, after suitable geographic registration, carry out a pixel by pixel correlation between the model/satellite convective potential and the 'truth', i.e., the visible images. During the first half of the first year of this investigation we have concentrated on two aspects of the project. The first has been generating vertical velocity fields from the model for COHMEX case days. We have taken June 19 as the first case and have run the mesoscale model at several different grid resolutions. We are currently developing the composite model/satellite convective image. The second aspect has been the attempted calibration of the surface energy budget to provide the proper horizontal thermal contrasts for convective initiation. We have made extensive progress on this aspect using the FIFE data as a test data set. The calibration technique looks very promising.
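The planned pixel-by-pixel skill measure amounts to a correlation between two co-registered images; a minimal sketch with synthetic fields standing in for the convective-potential and visible imagery:

```python
import numpy as np

def convective_skill(potential: np.ndarray, visible: np.ndarray) -> float:
    """Pixel-by-pixel correlation between a model/satellite convective
    potential image and a co-registered visible 'truth' image."""
    p, v = potential.ravel(), visible.ravel()
    ok = np.isfinite(p) & np.isfinite(v)  # ignore missing pixels
    return float(np.corrcoef(p[ok], v[ok])[0, 1])

rng = np.random.default_rng(6)
truth = rng.random((128, 128))
model = 0.7 * truth + 0.3 * rng.random((128, 128))  # hypothetical skillful field
print(convective_skill(model, truth))  # ~0.9
```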
Gaussian Decomposition of Laser Altimeter Waveforms
NASA Technical Reports Server (NTRS)
Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan
1999-01-01
We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
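A simplified version of this decomposition pipeline — inflection points of a smoothed copy for initial guesses, then Levenberg-Marquardt refinement via curve_fit, with a plain amplitude floor standing in for the importance-ranking and NNLS steps:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def gaussians(t, *p):
    """Sum of Gaussians; p = (A1, mu1, sig1, A2, mu2, sig2, ...)."""
    y = np.zeros_like(t)
    for a, mu, sig in zip(p[0::3], p[1::3], p[2::3]):
        y = y + a * np.exp(-0.5 * ((t - mu) / sig) ** 2)
    return y

def decompose(t, w, smooth=3.0, floor=0.05):
    ws = gaussian_filter1d(w, smooth)
    d2 = np.diff(ws, 2)
    infl = np.where(np.diff(np.sign(d2)) != 0)[0] + 1   # inflection points
    infl = infl[ws[infl] > floor]       # crude stand-in for importance ranking
    p0 = []
    for a, b in zip(infl[0::2], infl[1::2]):            # consecutive pairs
        mid = (a + b) // 2
        p0 += [ws[mid], t[mid], max((t[b] - t[a]) / 2.0, 1e-3)]
    popt, _ = curve_fit(gaussians, t, w, p0=p0, maxfev=20000)
    return popt.reshape(-1, 3)          # rows of (amplitude, position, sigma)

# Synthetic two-surface waveform: canopy-top and ground returns plus noise
t = np.linspace(0.0, 100.0, 500)
w = gaussians(t, 1.0, 35.0, 4.0, 0.6, 70.0, 3.0)
w += np.random.default_rng(4).normal(0.0, 0.01, t.size)
print(decompose(t, w))
```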
Chen, Shuo-Tsung; Wang, Tzung-Dau; Lee, Wen-Jeng; Huang, Tsai-Wei; Hung, Pei-Kai; Wei, Cheng-Yu; Chen, Chung-Ming; Kung, Woon-Man
2015-01-01
Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. The proposed segmentation method included 2 parts. First, 3D region growing was applied to give the initial segmentation of coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground-truth values obtained from the commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. Results indicate that the proposed method is better in terms of the efficiency analyzed. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
Grant, Julia D; Agrawal, Arpana; Werner, Kimberly B; McCutcheon, Vivia V; Nelson, Elliot C; Madden, Pamela A F; Bucholz, Kathleen K; Heath, Andrew C; Sartor, Carolyn E
2017-10-01
Childhood maltreatment is a known risk factor for cannabis initiation and problem use, but the extent to which this association is attributable to shared familial influences is unknown. We estimate the magnitude of associations between childhood maltreatment, timing of cannabis initiation, and cannabis-related problems, in European-American (EA) and African-American (AA) women, and parse the relative influence of additive genetic (A), shared environmental (C), and individual-specific environmental (E) factors on these constructs and their covariation. Data were from diagnostic telephone interviews conducted with 3786 participants (14.6% AA) in a population-based study of female twins. Logistic regression analyses and twin modeling were used to test for associations, and estimate the relative contributions of genetic and environmental influences to childhood maltreatment and cannabis outcomes and their covariation. Maltreatment was significantly associated with increased likelihood of cannabis initiation before age 15 among EAs (OR=6.33) and AAs (OR=3.93), but with increased likelihood of later initiation among EAs only (OR=1.68). Maltreatment was associated with cannabis problems among both groups (EA OR=2.32; AA OR=2.03). Among EA women, the covariation between maltreatment and cannabis outcomes was primarily attributable to familial environment (rC=0.67-0.70); among AAs, only individual-specific environment contributed (rE=0.37-0.40). Childhood maltreatment is a major contributor to early initiation of cannabis as well as progression to cannabis problems in both AA and EA women. Distinctions by race/ethnicity are not in the relative contribution of genetic factors, but rather in the type of environmental influences that contribute to stages of cannabis involvement. Copyright © 2017 Elsevier B.V. All rights reserved.
Hamilton, Alex; Garcia-Calleja, Jesus M; Vitoria, Marco; Gilks, Charles; Souteyrand, Yves; De Cock, Kevin; Crowley, Siobhan
2010-10-01
The World Health Organization (WHO) published a revision of the antiretroviral therapy (ART) guidelines and now recommends ART for all those with a CD4 cell count ≤350/mm^3, for people with HIV and active tuberculosis (TB) or chronic active hepatitis B irrespective of CD4 cell count, and for all HIV-positive pregnant women. A study was undertaken to estimate the impact of the new guidelines using four countries as examples. The current WHO/UNAIDS country projections were accessed based on the 2007 estimates for Zambia, Kenya, Cameroon and Vietnam. New projections were created using Spectrum. CD4 progression rates to need for ART were modified and compared with the baseline projections. The pattern of increased need for treatment is similar across the four projections. Initiating treatment at a CD4 count <250/mm^3 will increase the need for treatment by a median of 22% immediately, initiating ART at a CD4 count <350/mm^3 increases the need for treatment by a median of 60%, and the need for treatment doubles if ART is commenced at a CD4 count <500/mm^3. Initiating ART at a CD4 cell count <250/mm^3 would increase the need for treatment by a median of around 15% in 2012; initiating treatment at a CD4 count <350/mm^3 increases the need for treatment by a median of 42% across the same projections, and by about 84% if CD4 <500/mm^3 was used. The projections indicate that initiating ART earlier in the course of the disease by increasing the threshold for the initiation of ART would increase the numbers of adults in need of treatment immediately and in the future.
Object tracking algorithm based on the color histogram probability distribution
NASA Astrophysics Data System (ADS)
Li, Ning; Lu, Tongwei; Zhang, Yanduo
2018-04-01
In order to resolve tracking failures caused by target occlusion and by interference from background objects similar to the target, and to reduce the influence of light intensity, this paper corrects the update center of the target in the HSV and YCbCr color channels and continuously updates the image threshold for self-adaptive target detection. Clustering gives a rough range for the initial obstacles, shortening the threshold range so as to detect the target as reliably as possible. In order to improve the accuracy of the detector, this paper adds a Kalman filter to estimate the target state area. A direction predictor based on a Markov model is added to realize target state estimation under background color interference and to enhance the detector's ability to identify similar objects. The experimental results show that the improved algorithm is more accurate and processes faster.
Ice water path estimation and characterization using passive microwave radiometry
NASA Technical Reports Server (NTRS)
Vivekanandan, J.; Turk, J.; Bringi, V. N.
1991-01-01
Model computations of top-of-atmospheric microwave brightness temperatures T(B) from layers of precipitation-sized ice of variable bulk density and ice water content (IWC) are presented. It is shown that the 85-GHz T(B) depends essentially on the ice optical thickness. The results demonstrate the potential usefulness of scattering-based channels for characterizing the ice phase and suggest a top-down methodology for retrieval of cloud vertical structure and precipitation estimation from multifrequency passive microwave measurements. Attention is also given to radiative transfer model results based on the multiparameter radar data initialization from the Cooperative Huntsville Meteorological Experiment (COHMEX) in northern Alabama. It is shown that brightness temperature warming effects due to the inclusion of a cloud liquid water profile are especially significant at 85 GHz during later stages of cloud evolution.
3D Indoor Positioning of UAVs with Spread Spectrum Ultrasound and Time-of-Flight Cameras
Aguilera, Teodoro
2017-01-01
This work proposes the use of a hybrid acoustic and optical indoor positioning system for the accurate 3D positioning of Unmanned Aerial Vehicles (UAVs). The acoustic module of this system is based on a Time-Code Division Multiple Access (T-CDMA) scheme, where the sequential emission of five spread spectrum ultrasonic codes is performed to compute the horizontal vehicle position following a 2D multilateration procedure. The optical module is based on a Time-Of-Flight (TOF) camera that provides an initial estimate of the vehicle height. A recursive algorithm programmed on an external computer is then proposed to refine the estimated position. Experimental results show that the proposed system can increase the accuracy of a solely acoustic system by 70–80% in terms of positioning mean square error. PMID:29301211
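The 2D multilateration step can be sketched by linearizing the range equations against one reference emitter; here the emitter layout is hypothetical and the ranges are assumed already projected into the horizontal plane:

```python
import numpy as np

def multilaterate_2d(beacons: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Closed-form 2D multilateration: subtract the first range equation
    from the others to get a linear system, then solve it in least squares."""
    p0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(p0**2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

# Five hypothetical ultrasonic emitters and noisy in-plane ranges to the UAV
beacons = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 2]], float)
truth = np.array([1.3, 2.1])
ranges = np.linalg.norm(beacons - truth, axis=1)
ranges += np.random.default_rng(5).normal(0.0, 0.01, 5)
print(multilaterate_2d(beacons, ranges))  # ~ [1.3, 2.1]
```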
NASA Astrophysics Data System (ADS)
Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen
2016-11-01
To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method in this paper. Firstly, wavefront reconstruction using Zernike polynomials is used to obtain an initial estimate of the point spread function (PSF). Then, we develop our proposed iterative solutions for AO image restoration, addressing the joint deconvolution issue. Image restoration experiments are performed to verify the restoration effect of our proposed algorithm. The experimental results show that, compared with the RL-IBD algorithm and the Wiener-IBD algorithm, GMG measures (for a real AO image) from our algorithm increase by 36.92% and 27.44%, respectively, computation times decrease by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.
Urethritis in men: benefits, risks, and costs of alternative strategies of management.
Braun, P; Sherman, H; Komaroff, A L
1982-01-01
Four alternative strategies for the management of men with acute urethritis were analyzed: treating patients with tetracycline, with or without a urethral culture, without basing the initial treatment decision on the results of a gram-stained smear; treating patients with penicillin, without basing initial treatment on the results of a gram-stained smear; basing initial treatment with tetracycline or penicillin on the results of a gram-stained smear; and basing treatment on the results of both a gram-stained smear and a culture. The tetracycline strategy resulted in fewer days of morbidity, a lower probability of premature death, lower dollar costs, and a much lower rate of uncured nongonococcal urethritis, but in slightly higher rates of uncured gonorrhea and syphilis than more traditional strategies. Use of culture with the tetracycline strategy (1A) permitted tracing of gonorrhea contacts, achieved the same low morbidity, and added little cost. The conclusions were true regardless of the probability of gonorrhea and for reasonable estimates of probable compliance with oral medication regimens. Test-of-cure cultures for patients who were asymptomatic after treatment for gonorrhea required the expenditure of from $4,900 to $109,800 for each case of asymptomatic persistent gonorrhea discovered and cured, depending on the strategy used.
Measurement of the PPN parameter γ by testing the geometry of near-Earth space
NASA Astrophysics Data System (ADS)
Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang
2016-06-01
The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10^{-9} in measuring the Eddington parameter γ, which is perhaps the most fundamental Parameterized Post-Newtonian parameter. However, this ideal accuracy was estimated only as the ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least-squares method. The influences of the measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on a white noise model. Although the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of the spacecraft, the relatively low accuracy of the initial inter-spacecraft distances poses a great challenge, degrading the estimation accuracy by about two orders of magnitude. Thus, as demonstrated in this work, the noise requirements may need to be made more stringent in the design in order to achieve the target accuracy. Accordingly, we give limits on the power spectral density of both noise sources for an accuracy of 10^{-9}.
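The core difficulty the abstract identifies, measurement noise entering a least-squares estimate through a very small geometric signal, can be seen in a toy one-parameter model. In the sketch below all magnitudes are invented for illustration, not the BEACON error budget; the point is only that the formal parameter error scales linearly with the measurement noise, so noise two orders of magnitude larger costs two orders of magnitude in estimation accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
g = 1e-2 * rng.standard_normal(n)      # tiny geometric sensitivity per measurement
gamma_true = 1.0

for sigma in (1e-6, 1e-4):             # two noise levels, two orders apart
    d = gamma_true * g + sigma * rng.standard_normal(n)   # simulated residuals
    gamma_hat = (g @ d) / (g @ g)      # one-parameter least-squares estimate
    sigma_hat = sigma / np.sqrt(g @ g) # formal uncertainty of the estimate
    print(f"noise {sigma:.0e}: estimate {gamma_hat:+.6f}, "
          f"formal error {sigma_hat:.1e}")
```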
Using VS30 to Estimate Station ML Adjustments (dML)
NASA Astrophysics Data System (ADS)
Yong, A.; Herrick, J.; Cochran, E. S.; Andrews, J. R.; Yu, E.
2017-12-01
Currently, new seismic stations added to a regional seismic network cannot be used to calculate local or Richter magnitude (ML) until a revised region-wide amplitude decay function is developed. The new station must record a minimum number of local and regional events that meet specific amplitude requirements before the amplitude decay function can be re-calibrated. There can therefore be a significant delay between when a new station starts contributing real-time waveform packets and when its data can be included in magnitude estimation. The station component adjustments (dML; Uhrhammer et al., 2011) are calculated after first inverting for a new regional amplitude decay function, constrained by the sum of dML for long-running stations. Here, we propose a method to calculate an initial dML using known or proxy values of seismic site conditions. For site conditions, we use the time-averaged shear-wave velocity (VS) of the upper 30 m (VS30). We solve for dML as described in Equation (1) of Uhrhammer et al. (2011): ML = log(A) - log A0(r) + dML, where A is the maximum Wood and Anderson (1925) trace amplitude (mm), r is the distance (km), and dML is the station adjustment. The measured VS30 and estimated dML data comprise records from 887 horizontal components (east-west and north-south orientations) at 93 seismic monitoring stations in the California Integrated Seismic Network. VS30 values range from 202 m/s to 1464 m/s, and dML values range from -1.10 to 0.39. VS30 and dML exhibit a positive correlation coefficient (R = 0.72), indicating that as VS30 increases, dML increases. This implies that greater site amplification (i.e., lower VS30) results in smaller ML. When we restrict VS30 < 760 m/s to focus on dML at soft-soil to soft-rock sites, R increases to 0.80. In locations where measured VS30 data are unavailable, we evaluate the use of proxy-based VS30 estimates based on geology, topographic slope, and terrain classification, as well as other hybridized methods. Measured VS30 data or proxy-based VS30 estimates can be used for initial dML estimates that allow new stations to contribute to regional network ML estimates immediately, without waiting for a minimum set of earthquake data to be recorded.
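One way to operationalize the proposed proxy, sketched below under assumed numbers: regress dML on log10(VS30) for already-calibrated stations, then evaluate the fit at a new station's measured or proxy VS30 to obtain its initial dML. The station values here are invented placeholders; the study's actual regression uses the 887-component CISN dataset described above.

```python
import numpy as np

# Illustrative calibration set: (VS30, dML) pairs for existing stations.
vs30 = np.array([250., 320., 410., 560., 760., 1020.])    # m/s (assumed)
dml  = np.array([-0.62, -0.45, -0.30, -0.12, 0.05, 0.21])  # (assumed)

# Linear fit of dML against log10(VS30), consistent with the reported
# positive correlation between the two quantities.
slope, intercept = np.polyfit(np.log10(vs30), dml, 1)

def initial_dml(vs30_new):
    """Initial station adjustment for use in ML = log(A) - log A0(r) + dML."""
    return slope * np.log10(vs30_new) + intercept

print("predicted initial dML for VS30 = 450 m/s:", round(initial_dml(450.0), 2))
```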
Sim, Kok Swee; NorHisham, Syafiq
2016-11-01
A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. To test the accuracy of this technique, a number of SEM images are first corrupted with white noise. The autocorrelation functions (ACF) of the original and corrupted SEM images are formed to serve as the reference for estimating the SNR of the corrupted image. The LSR technique is then compared with three existing techniques: nearest neighborhood, first-order interpolation, and the combination of nearest neighborhood and first-order interpolation. The actual and estimated SNR values of all these techniques are calculated for comparison. The LSR technique attains the highest accuracy of the four, as the absolute difference between its actual and estimated SNR values is the smallest. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
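The idea underlying these ACF-based estimators is that white noise contributes only to the zero-lag term of the autocorrelation, so a regression over nearby lags can extrapolate the noise-free peak. The sketch below uses a plain linear least-squares fit in that role; the published LSR model's details may differ, and the test image here is synthetic.

```python
import numpy as np

def estimate_snr(image, fit_lags=range(1, 5)):
    """Extrapolate the row-averaged ACF back to lag 0 to split signal/noise."""
    rows = image - image.mean()
    n = rows.shape[1]
    # ACF at lags 0..max(fit_lags), averaged over image rows
    acf = np.array([np.mean(np.sum(rows[:, :n - k] * rows[:, k:], axis=1))
                    for k in range(max(fit_lags) + 1)])
    lags = np.array(list(fit_lags), dtype=float)
    slope, intercept = np.polyfit(lags, acf[list(fit_lags)], 1)
    peak_noise_free = intercept             # extrapolated ACF at lag 0
    noise_power = acf[0] - peak_noise_free  # the white-noise spike
    return peak_noise_free / max(noise_power, 1e-12)

rng = np.random.default_rng(1)
clean = np.cumsum(rng.standard_normal((64, 256)), axis=1)  # correlated "image"
noisy = clean + 5.0 * rng.standard_normal(clean.shape)
print("estimated SNR:", estimate_snr(noisy))
```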
Ecology and thermal inactivation of microbes in and on interplanetary space vehicle components
NASA Technical Reports Server (NTRS)
Reyes, A. L.; Campbell, J. E.
1976-01-01
The heat resistance of Bacillus subtilis var. niger was measured from 85 to 125 C at moisture levels from % RH ≤ 0.001 to 100. Curves are presented which characterize thermal destruction using thermal death times, defined as F values at given combinations of the three moisture and temperature conditions. The times required at 100 C for 99.99% reductions of the initial population were estimated for the three moisture conditions. The linear model (from which estimates of D are obtained) was satisfactory for estimating thermal death times (% RH ≤ 0.07) in the plate count range. Estimates based on observed thermal death times and on D values for % RH = 100 diverged, so that D values generally gave a more conservative estimate over the temperature range 90 to 125 C. Estimates of Z_F and Z_L ranged from 32.1 to 58.3 C for % RH of ≤ 0.07 and 100. A Z_D of 30.0 was obtained for data observed at % RH ≤ 0.07.
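The D- and z-value arithmetic behind such estimates is compact enough to show directly: a 99.99% (4-log) reduction requires F = 4D at a given temperature, and D rescales across temperatures through the z-value. The numbers below are assumed placeholders for illustration, not the measured B. subtilis values.

```python
def d_value_at(temp_c, d_ref, temp_ref_c, z):
    """Decimal reduction time at temp_c, given D at a reference temperature."""
    return d_ref * 10 ** ((temp_ref_c - temp_c) / z)

d100 = 12.0        # assumed D-value at 100 C, in minutes
z = 30.0           # z-value, within the range reported above

f_4log = 4 * d100  # time at 100 C for a 99.99% (4-log) reduction
print(f"F(100 C, 4-log) = {f_4log:.0f} min")
print(f"D at 110 C = {d_value_at(110, d100, 100, z):.1f} min")
```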