An estimation of distribution method for infrared target detection based on Copulas
NASA Astrophysics Data System (ADS)
Wang, Shuo; Zhang, Yiqun
2015-10-01
Track-before-detect (TBD) based target detection involves a hypothesis test of merit functions which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of merit functions, which determines the threshold for the test. Generally, merit functions are regarded as Gaussian, and the distribution is estimated on this basis, which holds for most methods such as multiple hypothesis tracking (MHT). However, merit functions for some other methods, such as the dynamic programming algorithm (DPA), are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure this correlation, the exact distribution can hardly be estimated. If merit functions are assumed Gaussian and independent, the error between the actual distribution and its approximation may occasionally exceed 30 percent, and it diverges as it propagates. Hence, in this paper, we propose a novel estimation of distribution method based on Copulas, by which the distribution can be estimated precisely, with an error of less than 1 percent and without propagation. Moreover, the estimation depends only on the form of the merit functions and the structure of the tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.
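A minimal sketch of the copula idea (assuming Python with NumPy/SciPy, and synthetic gamma-distributed, correlated samples standing in for the DPA merit functions; the marginals, correlation and thresholds below are illustrative, not taken from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic, correlated, non-Gaussian "merit function" samples (stand-ins
# for the DPA merit values; the real ones would come from the tracker).
n = 5000
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
m1 = stats.gamma(a=2.0).ppf(stats.norm.cdf(z[:, 0]))   # skewed marginal
m2 = stats.gamma(a=3.0).ppf(stats.norm.cdf(z[:, 1]))   # skewed marginal
samples = np.column_stack([m1, m2])

# 1) Fit marginals empirically and map each sample to a uniform via its ECDF.
def ecdf(x, grid):
    return np.searchsorted(np.sort(x), grid, side="right") / (len(x) + 1.0)

u = np.column_stack([ecdf(samples[:, j], samples[:, j]) for j in range(2)])

# 2) Map uniforms to standard-normal scores and estimate the copula correlation.
g = stats.norm.ppf(u)
rho = np.corrcoef(g.T)

# 3) Joint CDF at a threshold pair via the Gaussian copula:
#    C(u1, u2) = Phi_rho(Phi^-1(u1), Phi^-1(u2)).
def joint_cdf(t1, t2):
    u1 = ecdf(samples[:, 0], np.array([t1]))[0]
    u2 = ecdf(samples[:, 1], np.array([t2]))[0]
    mvn = stats.multivariate_normal(mean=[0, 0], cov=rho)
    return mvn.cdf(stats.norm.ppf([u1, u2]))

print("P(M1 <= 4, M2 <= 5) ~", joint_cdf(4.0, 5.0))
```

The empirical CDFs supply the marginals, while the Gaussian copula captures the cross-correlation that an independence assumption would ignore.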
The Role of Experience in Location Estimation: Target Distributions Shift Location Memory Biases
ERIC Educational Resources Information Center
Lipinski, John; Simmering, Vanessa R.; Johnson, Jeffrey S.; Spencer, John P.
2010-01-01
Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. Cognition, 93, 75-97]. This conflicts with earlier results showing…
A game theory approach to target tracking in sensor networks.
Gu, Dongbing
2011-02-01
In this paper, we investigate a moving-target tracking problem with sensor networks. Each sensor node has a sensor to observe the target and a processor to estimate the target position. It also has wireless communication capability but with limited range and can only communicate with neighbors. The moving target is assumed to be an intelligent agent, which is "smart" enough to escape from the detection by maximizing the estimation error. This adversary behavior makes the target tracking problem more difficult. We formulate this target estimation problem as a zero-sum game in this paper and use a minimax filter to estimate the target position. The minimax filter is a robust filter that minimizes the estimation error by considering the worst case noise. Furthermore, we develop a distributed version of the minimax filter for multiple sensor nodes. The distributed computation is implemented via modeling the information received from neighbors as measurements in the minimax filter. The simulation results show that the target tracking algorithm proposed in this paper provides a satisfactory result.
The role of experience in location estimation: Target distributions shift location memory biases.
Lipinski, John; Simmering, Vanessa R; Johnson, Jeffrey S; Spencer, John P
2010-04-01
Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. Cognition, 93, 75-97]. This conflicts with earlier results showing that location estimation is biased relative to the spatial distribution of targets [Spencer, J. P., & Hund, A. M. (2002). Prototypes and particulars: Geometric and experience-dependent spatial categories. Journal of Experimental Psychology: General, 131, 16-37]. Here, we resolve this controversy by using a task based on Huttenlocher et al. (Experiment 4) with minor modifications to enhance our ability to detect experience-dependent effects. Results after the first block of trials replicate the pattern reported in Huttenlocher et al. After additional experience, however, participants showed biases that significantly shifted according to the target distributions. These results are consistent with the Dynamic Field Theory, an alternative theory of spatial cognition that integrates long-term memory traces across trials relative to the perceived structure of the task space. Copyright 2009 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Barbarossa, S.; Farina, A.
A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their kinematic motion parameters. The estimation plays a key role in focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) the detection is efficiently performed on the samples in the time-frequency domain provided by the WVD, without resorting to a bank of filters, each one matched to possible values of the unknown target motion parameters; (ii) the estimation of the target motion parameters can be done in the same time-frequency domain by locating the line where the maximum energy of the WVD is concentrated. A validation of the approach is given by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.
Analytical performance evaluation of SAR ATR with inaccurate or estimated models
NASA Astrophysics Data System (ADS)
DeVore, Michael D.
2004-09-01
Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special case, we also demonstrate the required targeting of the propensity score for the inverse probability of treatment weighted estimator using super-learning to fit the propensity score.
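A minimal sketch of the propensity-score-weighted estimation of a treatment-specific mean discussed above (assuming Python with NumPy and scikit-learn, synthetic data, and a single logistic regression in place of the super-learner ensembles; it omits the targeting step that is the paper's contribution):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic observational data: baseline covariates W, binary treatment A, outcome Y.
n = 2000
W = rng.normal(size=(n, 3))
p_true = 1 / (1 + np.exp(-(0.4 * W[:, 0] - 0.5 * W[:, 1])))   # true propensity
A = rng.binomial(1, p_true)
Y = 1.0 + 0.8 * A + 0.5 * W[:, 0] + rng.normal(scale=1.0, size=n)

# Estimate the propensity score g(W) = P(A = 1 | W); the paper would use
# super-learning here, a single logistic regression is only a placeholder.
g_hat = LogisticRegression().fit(W, A).predict_proba(W)[:, 1]

# Inverse-probability-of-treatment-weighted estimator of E[Y(1)].
psi_iptw = np.mean(A * Y / g_hat)
print("IPTW estimate of E[Y(1)]:", psi_iptw)

# Simple (non-targeted) plug-in comparison: outcome regression among the treated.
design = np.column_stack([np.ones(n), W])
beta = np.linalg.lstsq(design[A == 1], Y[A == 1], rcond=None)[0]
psi_gcomp = np.mean(design @ beta)
print("Outcome-regression (g-computation) estimate:", psi_gcomp)
```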
Information Weighted Consensus for Distributed Estimation in Vision Networks
ERIC Educational Resources Information Center
Kamal, Ahmed Tashrif
2013-01-01
Due to their high fault-tolerance, ease of installation and scalability to large networks, distributed algorithms have recently gained immense popularity in the sensor networks community, especially in computer vision. Multi-target tracking in a camera network is one of the fundamental problems in this domain. Distributed estimation algorithms…
Maximum angular accuracy of pulsed laser radar in photocounting limit.
Elbaum, M; Diament, P; King, M; Edelson, W
1977-07-01
To estimate the angular position of targets with pulsed laser radars, their images may be sensed with a four-quadrant noncoherent detector and the image photocounting distribution processed to obtain the angular estimates. The limits imposed on the accuracy of angular estimation by signal and background radiation shot noise, dark current noise, and target cross-section fluctuations are calculated. Maximum likelihood estimates of angular positions are derived for optically rough and specular targets and their performances compared with theoretical lower bounds.
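A minimal sketch of angular estimation from four-quadrant photocounts (assuming Python/NumPy and idealized Poisson counts; the sum-and-difference centroid below is the standard quad-cell estimate, not the paper's exact maximum-likelihood estimator):

```python
import numpy as np

rng = np.random.default_rng(2)

# Photocounts in the four quadrants of a noncoherent detector, drawn as
# Poisson counts with signal plus background contributions.
# Quadrant convention: A top-right, B top-left, C bottom-left, D bottom-right.
signal = np.array([120.0, 100.0, 80.0, 100.0])   # target image split over quadrants
background = 10.0
A, B, C, D = rng.poisson(signal + background)

# Standard quad-cell (sum-and-difference) angular estimates, in units of the
# detector half-width; the ML estimators in the paper refine this for rough
# and specular targets.
total = A + B + C + D
az = ((A + D) - (B + C)) / total   # horizontal offset
el = ((A + B) - (C + D)) / total   # vertical offset
print("estimated angular offsets:", az, el)
```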
Baijal, Shruti; Nakatani, Chie; van Leeuwen, Cees; Srinivasan, Narayanan
2013-06-07
Human observers show remarkable efficiency in statistical estimation; they are able, for instance, to estimate the mean size of visual objects, even if their number exceeds the capacity limits of focused attention. This ability has been understood as the result of a distinct mode of attention, i.e. distributed attention. Compared to the focused attention mode, working memory representations under distributed attention are proposed to be more compressed, leading to reduced working memory loads. An alternate proposal is that distributed attention uses less structured, feature-level representations. These would fill up working memory (WM) more, even when target set size is low. Using event-related potentials, we compared WM loading in a typical distributed attention task (mean size estimation) to that in a corresponding focused attention task (object recognition), using a measure called contralateral delay activity (CDA). Participants performed both tasks on 2, 4, or 8 different-sized target disks. In the recognition task, CDA amplitude increased with set size; notably, however, in the mean estimation task the CDA amplitude was high regardless of set size. In particular for set-size 2, the amplitude was higher in the mean estimation task than in the recognition task. The result showed that the task involves full WM loading even with a low target set size. This suggests that in the distributed attention mode, representations are not compressed, but rather less structured than under focused attention conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.
Moving target parameter estimation of SAR after two looks cancellation
NASA Astrophysics Data System (ADS)
Gan, Rongbing; Wang, Jianguo; Gao, Xiang
2005-11-01
Moving target detection for synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are obtained from the first and second halves of the synthetic aperture. After two-look cancellation, the moving targets are retained while the stationary targets are removed. A constant false alarm rate (CFAR) detector then detects the moving targets. The ground-range velocity and cross-range velocity of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift caused by slant-range motion, based on the Doppler frequency center (DFC), which is estimated with the Wigner-Ville distribution (WVD). Because the range position and cross-range position before correction are known, estimation of the DFC is easier and more efficient. Finally, experimental results show that our algorithms perform well and estimate the moving-target parameters accurately.
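A minimal sketch of the two-look cancellation and threshold detection step (assuming Python/NumPy and simulated intensity looks rather than real SAR data; the global threshold stands in for a proper sliding-window CFAR):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two sub-aperture "looks" of the same scene (simulated): stationary clutter is
# common to both, the moving target appears shifted between looks.
shape = (128, 128)
clutter = rng.rayleigh(scale=1.0, size=shape)
look1 = clutter + rng.normal(scale=0.2, size=shape)
look2 = clutter + rng.normal(scale=0.2, size=shape)
look1[60, 40] += 12.0          # moving target, position in first look
look2[60, 48] += 12.0          # same target, shifted in second look

# Two-look cancellation: the stationary scene largely cancels, the mover remains.
diff = np.abs(look1 - look2)

# Simple cell-averaging CFAR on the difference image (global version for brevity;
# a practical detector would use a sliding reference window around each cell).
noise_level = np.mean(diff)
threshold = 6.0 * noise_level          # factor sets the false-alarm rate
detections = np.argwhere(diff > threshold)
print("detected moving-target pixels:\n", detections)

# The cross-range offset between the two detections is proportional to the
# target's velocity, as described above.
```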
Stochastic inversion of cross-borehole radar data for metalliferous vein detection
NASA Astrophysics Data System (ADS)
Zeng, Zhaofa; Huai, Nan; Li, Jing; Zhao, Xueyu; Liu, Cai; Hu, Yingsa; Zhang, Ling; Hu, Zuzhi; Yang, Hui
2017-12-01
In the exploration and evaluation of metalliferous veins with a cross-borehole radar system, traditional linear inversion methods (least squares inversion, LSQR) only recover indirect parameters (permittivity, resistivity, or velocity) with which to estimate the target structure; they cannot accurately reflect the geological parameters of the metalliferous veins' media properties. In order to obtain the intrinsic geological parameters and internal distribution, in this paper we build a metalliferous vein model based on stochastic effective medium theory and carry out stochastic inversion and parameter estimation based on a Monte Carlo sampling algorithm. Compared with conventional LSQR, the stochastic inversion yields higher-resolution permittivity and velocity estimates of the target body, so the distribution characteristics of anomalies and the target's internal parameters can be estimated more accurately. This provides a new approach for evaluating the properties of complex target media.
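A minimal sketch of Monte Carlo sampling for stochastic inversion (assuming Python/NumPy, a toy one-parameter travel-time forward model, and a Gaussian likelihood; the paper's forward model is the full cross-borehole radar simulation over the stochastic effective medium):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy forward model: travel time through the target zone as a function of
# permittivity eps (placeholder for the full radar forward model).
def forward(eps):
    c0, path_len = 0.3, 10.0        # m/ns, metres
    return path_len * np.sqrt(eps) / c0

# Synthetic observed travel time with measurement noise.
eps_true, sigma = 9.0, 1.0
t_obs = forward(eps_true) + rng.normal(scale=sigma)

def log_posterior(eps):
    if not (1.0 < eps < 81.0):       # uniform prior bounds on permittivity
        return -np.inf
    return -0.5 * ((t_obs - forward(eps)) / sigma) ** 2

# Metropolis-Hastings sampling of the posterior over permittivity.
eps, samples = 4.0, []
for _ in range(20000):
    prop = eps + rng.normal(scale=0.5)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(eps):
        eps = prop
    samples.append(eps)

samples = np.array(samples[5000:])           # discard burn-in
print("posterior mean permittivity:", samples.mean(), "+/-", samples.std())
```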
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar.
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-09-09
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates according to the spatial Nyquist sampling theorem, since the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are directly used for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fusion result of each radar's estimation is fed into the extended Kalman filter (EKF) to perform the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, thereby achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is improved dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.
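A minimal sketch of the EKF stage of such a tracker (assuming Python/NumPy, a constant-velocity target and a single range/bearing measurement; the SePDAF second stage that resolves the cyclic angle ambiguity is not reproduced here):

```python
import numpy as np

# Minimal extended Kalman filter step for a constant-velocity target observed
# in range and bearing (the "first filtering" stage above).
def ekf_step(x, P, z, dt, q, r_range, r_bearing):
    # State x = [px, vx, py, vy]; constant-velocity prediction.
    F = np.array([[1, dt, 0, 0],
                  [0, 1,  0, 0],
                  [0, 0,  1, dt],
                  [0, 0,  0, 1]], dtype=float)
    Q = q * np.eye(4)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Nonlinear range/bearing measurement model and its Jacobian.
    px, py = x_pred[0], x_pred[2]
    r = np.hypot(px, py)
    h = np.array([r, np.arctan2(py, px)])
    H = np.array([[px / r, 0, py / r, 0],
                  [-py / r**2, 0, px / r**2, 0]])
    R = np.diag([r_range, r_bearing])

    # Standard EKF update.
    y = z - h
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi      # wrap bearing innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# One illustrative update with made-up numbers.
x0, P0 = np.array([1000.0, -10.0, 500.0, 5.0]), np.eye(4) * 100.0
z = np.array([1110.0, 0.43])                         # measured range (m), bearing (rad)
x1, P1 = ekf_step(x0, P0, z, dt=1.0, q=0.1, r_range=25.0, r_bearing=1e-4)
print("updated state:", x1)
```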
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu Huijun; Gordon, J. James; Siebers, Jeffrey V.
2011-02-15
Purpose: A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D_v exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Methods: Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs, and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ω_eff and δ. Results: The accuracy of coverage estimates depends on angular and radial DMD sampling parameters ω or ω_eff and δ, as well as the employed sampling technique. Target |ΔQ| < 1% and OAR |ΔQ| < 3% can be achieved with sampling parameters ω or ω_eff = 20°, δ = 1 mm. Better accuracy (target |ΔQ| < 0.5% and OAR |ΔQ| < ~1%) can be achieved with ω or ω_eff = 10°, δ = 0.5 mm. As the number of sampling points decreases, the isotropic sampling method maintains better accuracy than fixed angular sampling. Conclusions: Coverage estimates for post-planning evaluation are essential since coverage values of targets and OARs often differ from the values implied by the static margin-based plans. Finer sampling of the DMD enables more accurate assessment of the effect of geometric uncertainties on coverage estimates prior to treatment. DMD sampling with ω or ω_eff = 10° and δ = 0.5 mm should be adequate for planning purposes.
Xu, Huijun; Gordon, J James; Siebers, Jeffrey V
2011-02-01
A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D_v exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs, and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ω_eff and δ. The accuracy of coverage estimates depends on angular and radial DMD sampling parameters ω or ω_eff and δ, as well as the employed sampling technique. Target |ΔQ| < 1% and OAR |ΔQ| < 3% can be achieved with sampling parameters ω or ω_eff = 20°, δ = 1 mm. Better accuracy (target |ΔQ| < 0.5% and OAR |ΔQ| < ~1%) can be achieved with ω or ω_eff = 10°, δ = 0.5 mm. As the number of sampling points decreases, the isotropic sampling method maintains better accuracy than fixed angular sampling. Coverage estimates for post-planning evaluation are essential since coverage values of targets and OARs often differ from the values implied by the static margin-based plans. Finer sampling of the DMD enables more accurate assessment of the effect of geometric uncertainties on coverage estimates prior to treatment. DMD sampling with ω or ω_eff = 10° and δ = 0.5 mm should be adequate for planning purposes.
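A minimal sketch of DMD sampling and coverage estimation (assuming Python/NumPy and a toy spherical structure and isodose surface in place of a real plan; directions are sampled isotropically and setup errors drawn from a normal model, as described above):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy geometry: spherical CTV of radius 30 mm centred at the origin, and a
# spherical prescription isodose surface of radius 40 mm (stand-ins for the
# real dose distribution, assumed shift-invariant as in the paper).
ctv_radius, isodose_radius = 30.0, 40.0

def dosimetric_margin(direction, delta=0.5, max_shift=50.0):
    """Shift the structure along `direction` in steps of size delta until its
    surface crosses the isodose surface; return the shift at crossing."""
    shift = 0.0
    while shift < max_shift:
        if shift + ctv_radius >= isodose_radius:   # outermost CTV point crosses
            return shift
        shift += delta
    return max_shift

# Isotropic sampling of directions on the unit sphere.
n_dir = 200
v = rng.normal(size=(n_dir, 3))
directions = v / np.linalg.norm(v, axis=1, keepdims=True)
dmd = np.array([dosimetric_margin(d) for d in directions])   # the DMD

# Coverage Q: fraction of normally distributed setup shifts that stay within
# the dosimetric margin in their own direction (nearest sampled direction).
shifts = rng.normal(scale=5.0, size=(20000, 3))              # 5 mm SD setup error
mag = np.linalg.norm(shifts, axis=1)
unit = shifts / mag[:, None]
nearest = np.argmax(unit @ directions.T, axis=1)             # closest sampled direction
Q = np.mean(mag <= dmd[nearest])
print("estimated coverage Q:", Q)
```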
SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Kenny S K; Lee, Louis K Y; Xing, L
2015-06-15
Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least square method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. Totally, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with Intel i7 CPU and NVIDIA Geforce 840M GPU. Results: In all selected cases, the estimated dose distribution was in a good agreement with the given target dose distribution and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
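A minimal sketch of the analytical least-squares fluence optimization with positive-weight selection described above (assuming Python/NumPy and random stand-in beamlet dose vectors; the real basis would be the pre-calculated beamlets at 6-degree gantry spacing):

```python
import numpy as np

rng = np.random.default_rng(6)

# Pre-calculated beamlet dose vectors (columns of B) and a target dose vector d,
# here generated at random as stand-ins for the real beamlet dose distributions.
n_vox, n_beamlets = 400, 60
B = rng.random((n_vox, n_beamlets))
w_true = np.clip(rng.normal(0.5, 0.5, n_beamlets), 0, None)
d = B @ w_true

# Analytical least-squares fluence, followed by repeatedly discarding beamlets
# with non-positive weights and re-solving (the "levels of optimization" above).
active = np.arange(n_beamlets)
for level in range(7):
    w, *_ = np.linalg.lstsq(B[:, active], d, rcond=None)
    keep = w > 0
    if keep.all():
        break
    active, w = active[keep], w[keep]

estimate = B[:, active] @ w
rmse = np.sqrt(np.mean((estimate - d) ** 2))
corr = np.corrcoef(estimate, d)[0, 1]
print("levels used:", level + 1, "RMSE:", rmse, "correlation:", corr)
```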
Matching a Distribution by Matching Quantiles Estimation
Sgouropoulos, Nikolaos; Yao, Qiwei; Yastremiz, Claudia
2015-01-01
Motivated by the problem of selecting representative portfolios for backtesting counterparty credit risks, we propose a matching quantiles estimation (MQE) method for matching a target distribution by that of a linear combination of a set of random variables. An iterative procedure based on ordinary least-squares estimation (OLS) is proposed to compute the MQE. MQE can be easily modified by adding a LASSO penalty term if a sparse representation is desired, or by restricting the matching within a certain range of quantiles to match a part of the target distribution. The convergence of the algorithm and the asymptotic properties of the estimation, both with and without LASSO, are established. A measure and an associated statistical test are proposed to assess the goodness-of-match. The finite sample properties are illustrated by simulation. An application in selecting a counterparty representative portfolio with a real dataset is reported. The proposed MQE also finds applications in portfolio tracking, which demonstrates the usefulness of combining MQE with LASSO. PMID:26692592
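A rough sketch of the iterative OLS idea behind matching quantiles estimation (assuming Python/NumPy and synthetic data; the reordering rule below is a plausible reading of the procedure and may differ in detail from the published MQE algorithm):

```python
import numpy as np

rng = np.random.default_rng(7)

# Candidate assets X (columns) and a target return series y whose distribution
# we want to match with a linear combination X @ w.
n, p = 1000, 5
X = rng.normal(size=(n, p))
y = 0.6 * X[:, 0] - 0.3 * X[:, 2] + rng.standard_t(df=5, size=n)

w = np.zeros(p)
y_sorted = np.sort(y)
for _ in range(50):
    z = X @ w
    # Reorder the target sample so its order statistics line up with the
    # current combination's ranks, then refit by ordinary least squares.
    y_matched = np.empty(n)
    y_matched[np.argsort(z)] = y_sorted
    w_new, *_ = np.linalg.lstsq(X, y_matched, rcond=None)
    if np.allclose(w_new, w, atol=1e-8):
        break
    w = w_new

# Goodness-of-match: compare empirical quantiles of y and of X @ w.
q = np.linspace(0.01, 0.99, 25)
print("max quantile gap:", np.max(np.abs(np.quantile(X @ w, q) - np.quantile(y, q))))
```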
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-01-01
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates according to the spatial Nyquist sampling theorem, since the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are directly used for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fusion result of each radar's estimation is fed into the extended Kalman filter (EKF) to perform the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, thereby achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is improved dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058
Measuring populations to improve vaccination coverage
NASA Astrophysics Data System (ADS)
Bharti, Nita; Djibo, Ali; Tatem, Andrew J.; Grenfell, Bryan T.; Ferrari, Matthew J.
2016-10-01
In low-income settings, vaccination campaigns supplement routine immunization but often fail to achieve coverage goals due to uncertainty about target population size and distribution. Accurate, updated estimates of target populations are rare but critical; short-term fluctuations can greatly impact population size and susceptibility. We use satellite imagery to quantify population fluctuations and the coverage achieved by a measles outbreak response vaccination campaign in urban Niger and compare campaign estimates to measurements from a post-campaign survey. Vaccine coverage was overestimated because the campaign underestimated resident numbers and seasonal migration further increased the target population. We combine satellite-derived measurements of fluctuations in population distribution with high-resolution measles case reports to develop a dynamic model that illustrates the potential improvement in vaccination campaign coverage if planners account for predictable population fluctuations. Satellite imagery can improve retrospective estimates of vaccination campaign impact and future campaign planning by synchronizing interventions with predictable population fluxes.
Measuring populations to improve vaccination coverage
Bharti, Nita; Djibo, Ali; Tatem, Andrew J.; Grenfell, Bryan T.; Ferrari, Matthew J.
2016-01-01
In low-income settings, vaccination campaigns supplement routine immunization but often fail to achieve coverage goals due to uncertainty about target population size and distribution. Accurate, updated estimates of target populations are rare but critical; short-term fluctuations can greatly impact population size and susceptibility. We use satellite imagery to quantify population fluctuations and the coverage achieved by a measles outbreak response vaccination campaign in urban Niger and compare campaign estimates to measurements from a post-campaign survey. Vaccine coverage was overestimated because the campaign underestimated resident numbers and seasonal migration further increased the target population. We combine satellite-derived measurements of fluctuations in population distribution with high-resolution measles case reports to develop a dynamic model that illustrates the potential improvement in vaccination campaign coverage if planners account for predictable population fluctuations. Satellite imagery can improve retrospective estimates of vaccination campaign impact and future campaign planning by synchronizing interventions with predictable population fluxes. PMID:27703191
A robust close-range photogrammetric target extraction algorithm for size and type variant targets
NASA Astrophysics Data System (ADS)
Nyarko, Kofi; Thomas, Clayton; Torres, Gilbert
2016-05-01
The Photo-G program conducted by Naval Air Systems Command at the Atlantic Test Range in Patuxent River, Maryland, uses photogrammetric analysis of large amounts of real-world imagery to characterize the motion of objects in a 3-D scene. Current approaches involve several independent processes including target acquisition, target identification, 2-D tracking of image features, and 3-D kinematic state estimation. Each process has its own inherent complications and corresponding degrees of both human intervention and computational complexity. One approach being explored for automated target acquisition relies on exploiting the pixel intensity distributions of photogrammetric targets, which tend to be patterns with bimodal intensity distributions. The bimodal distribution partitioning algorithm utilizes this distribution to automatically deconstruct a video frame into regions of interest (ROI) that are merged and expanded to target boundaries, from which ROI centroids are extracted to mark target acquisition points. This process has proved to be scale, position and orientation invariant, as well as fairly insensitive to global uniform intensity disparities.
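A minimal sketch of bimodal-intensity partitioning and ROI centroid extraction (assuming Python with NumPy/SciPy, a synthetic frame, and an Otsu-style threshold as a placeholder for the paper's partitioning step):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(8)

# Synthetic frame: dark background with a few bright photogrammetric targets,
# giving the bimodal pixel-intensity distribution exploited above.
frame = rng.normal(40, 5, size=(200, 200))
for (r, c) in [(50, 60), (120, 150), (170, 30)]:
    frame[r - 4:r + 4, c - 4:c + 4] = rng.normal(200, 5, size=(8, 8))

# Partition the bimodal histogram with a simple Otsu-style threshold.
hist, edges = np.histogram(frame, bins=256)
centers = 0.5 * (edges[:-1] + edges[1:])
total = hist.sum()
best_t, best_var = centers[0], -1.0
for i in range(1, 256):
    w0, w1 = hist[:i].sum() / total, hist[i:].sum() / total
    if w0 == 0 or w1 == 0:
        continue
    m0 = (hist[:i] * centers[:i]).sum() / hist[:i].sum()
    m1 = (hist[i:] * centers[i:]).sum() / hist[i:].sum()
    var_between = w0 * w1 * (m0 - m1) ** 2
    if var_between > best_var:
        best_var, best_t = var_between, centers[i]

# Label connected regions of interest above the threshold and extract centroids.
mask = frame > best_t
labels, n_roi = ndimage.label(mask)
centroids = ndimage.center_of_mass(frame, labels, index=range(1, n_roi + 1))
print("threshold:", best_t, "target acquisition points:", centroids)
```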
Fujikawa, Hiroshi
2017-01-01
Microbial concentration in samples of a food product lot has been generally assumed to follow the log-normal distribution in food sampling, but this distribution cannot accommodate the concentration of zero. In the present study, first, a probabilistic study with the most probable number (MPN) technique was done for a target microbe present at a low (or zero) concentration in food products. Namely, based on the number of target pathogen-positive samples in the total samples of a product found by a qualitative, microbiological examination, the concentration of the pathogen in the product was estimated by means of the MPN technique. The effects of the sample size and the total sample number of a product were then examined. Second, operating characteristic (OC) curves for the concentration of a target microbe in a product lot were generated on the assumption that the concentration of a target microbe could be expressed with the Poisson distribution. OC curves for Salmonella and Cronobacter sakazakii in powdered formulae for infants and young children were successfully generated. The present study suggested that the MPN technique and the Poisson distribution would be useful for qualitative microbiological test data analysis for a target microbe whose concentration in a lot is expected to be low.
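A minimal sketch of the Poisson-based quantities involved (assuming Python/NumPy; the sample size and sample number below are illustrative): the probability that a single sample tests positive, an MPN-style concentration estimate from the number of positives, and an operating characteristic curve for lot acceptance:

```python
import numpy as np

# Probability that a single sample of size m (grams) tests positive when the
# target microbe follows a Poisson distribution with concentration c (CFU/g):
#   p_pos(c) = 1 - exp(-c * m)
def p_positive(c, m):
    return 1.0 - np.exp(-c * m)

# MPN-style point estimate of the concentration from k positives out of n samples.
def mpn_estimate(k, n, m):
    if k == n:
        return np.inf           # all samples positive: estimate is unbounded
    return -np.log(1.0 - k / n) / m

# Operating characteristic (OC) curve: probability of lot acceptance (zero
# positives among n samples) as a function of the true concentration.
def oc_curve(c, n, m):
    return (1.0 - p_positive(c, m)) ** n

m, n = 25.0, 30                 # 25 g samples, 30 samples per lot (illustrative)
print("MPN estimate for 3/30 positives:", mpn_estimate(3, 30, m), "CFU/g")
for c in [1e-4, 1e-3, 1e-2]:
    print(f"acceptance probability at c={c:g} CFU/g:", oc_curve(c, n, m))
```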
Fast iterative censoring CFAR algorithm for ship detection from SAR images
NASA Astrophysics Data System (ADS)
Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng
2017-11-01
Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently, where parallel detection is available and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be quickly estimated based on an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
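A minimal sketch of an iterative censoring CFAR using an integral image for the local clutter estimate (assuming Python/NumPy, a synthetic scene, and an exponential clutter model in place of the G0 distribution used above):

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic SAR intensity block: sea clutter plus a few bright ship pixels.
img = rng.exponential(scale=1.0, size=(256, 256))
img[100:104, 50:54] += 25.0
img[200:203, 180:183] += 30.0

def box_mean(a, half):
    """Local mean over a (2*half+1)^2 window via an integral image (cumsum)."""
    ii = np.cumsum(np.cumsum(np.pad(a, ((1, 0), (1, 0))), axis=0), axis=1)
    H, W = a.shape
    r0 = np.clip(np.arange(H) - half, 0, H)
    r1 = np.clip(np.arange(H) + half + 1, 0, H)
    c0 = np.clip(np.arange(W) - half, 0, W)
    c1 = np.clip(np.arange(W) + half + 1, 0, W)
    s = ii[r1][:, c1] - ii[r0][:, c1] - ii[r1][:, c0] + ii[r0][:, c0]
    area = (r1 - r0)[:, None] * (c1 - c0)[None, :]
    return s / area

# Iterative censoring CFAR: detect, censor detected pixels from the clutter
# estimate, re-estimate and re-detect until the detection map stabilizes.
factor = 12.0                                    # threshold multiplier (sets Pfa)
work = img.copy()
for _ in range(5):
    clutter_mean = box_mean(work, half=15)
    det = img > factor * clutter_mean
    new_work = np.where(det, clutter_mean, img)  # censor detections
    if np.array_equal(new_work, work):
        break
    work = new_work

print("detected ship pixels:", np.argwhere(det).shape[0])
```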
About an adaptively weighted Kaplan-Meier estimate.
Plante, Jean-François
2009-09-01
The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the performances of the weighted Kaplan-Meier estimate on finite samples exceed that of the usual Kaplan-Meier estimate. A case study is also presented.
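A minimal sketch of a weighted Kaplan-Meier estimator (assuming Python/NumPy; with uniform weights it reduces to the usual estimator, and the adaptive weights of the paper would enter through the `weights` argument):

```python
import numpy as np

def kaplan_meier(times, events, weights=None):
    """Kaplan-Meier survival estimate; `weights` allows a weighted variant
    (uniform weights recover the usual estimator)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)          # 1 = event, 0 = right-censored
    w = np.ones_like(times) if weights is None else np.asarray(weights, dtype=float)

    order = np.argsort(times)
    times, events, w = times[order], events[order], w[order]

    uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = w[times >= t].sum()               # weighted number at risk
        d = w[(times == t) & (events == 1)].sum()   # weighted number of events
        s *= 1.0 - d / at_risk
        surv.append((t, s))
    return surv

# Small example with right-censored observations and non-uniform weights.
times  = [2.0, 3.0, 3.0, 5.0, 8.0, 9.0, 12.0]
events = [1,   1,   0,   1,   0,   1,   1]
print(kaplan_meier(times, events))
print(kaplan_meier(times, events, weights=[1, 1, 0.5, 1, 2, 1, 1]))
```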
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
Pulkkinen, Aki; Cox, Ben T; Arridge, Simon R; Goh, Hwan; Kaipio, Jari P; Tarvainen, Tanja
2016-11-01
Estimation of optical absorption and scattering of a target is an inverse problem associated with quantitative photoacoustic tomography. Conventionally, the problem is treated in two stages. First, images of the initial pressure distribution created by absorption of a light pulse are formed based on acoustic boundary measurements. Then, the optical properties are determined based on these photoacoustic images. The optical stage of the inverse problem can thus suffer from, for example, artefacts caused by the acoustic stage. These could be caused by imperfections in the acoustic measurement setting, of which an example is a limited view acoustic measurement geometry. In this work, the forward model of quantitative photoacoustic tomography is treated as a coupled acoustic and optical model and the inverse problem is solved by using a Bayesian approach. The spatial distribution of the optical properties of the imaged target is estimated directly from the photoacoustic time series in varying acoustic detection and optical illumination configurations. It is numerically demonstrated that estimation of the optical properties of the imaged target is feasible in a limited view acoustic detection setting.
Gao, Nuo; Zhu, S A; He, Bin
2005-06-07
We have developed a new algorithm for magnetic resonance electrical impedance tomography (MREIT), which uses only one component of the magnetic flux density to reconstruct the electrical conductivity distribution within the body. The radial basis function (RBF) network and simplex method are used in the present approach to estimate the conductivity distribution by minimizing the errors between the 'measured' and model-predicted magnetic flux densities. Computer simulations were conducted in a realistic-geometry head model to test the feasibility of the proposed approach. Single-variable and three-variable simulations were performed to estimate the brain-skull conductivity ratio and the conductivity values of the brain, skull and scalp layers. When SNR = 15 for magnetic flux density measurements with the target skull-to-brain conductivity ratio being 1/15, the relative error (RE) between the target and estimated conductivity was 0.0737 +/- 0.0746 in the single-variable simulations. In the three-variable simulations, the RE was 0.1676 +/- 0.0317. Effects of electrode position uncertainty were also assessed by computer simulations. The present promising results suggest the feasibility of estimating important conductivity values within the head from noninvasive magnetic flux density measurements.
The use of groundwater age as a calibration target
Konikow, Leonard F.; Hornberger, G.Z.; Putnam, L.D.; Shapiro, A.M.; Zinn, B.A.
2008-01-01
Groundwater age (or residence time), as estimated on the basis of concentrations of one or more environmental tracers, can provide a useful and independent calibration target for groundwater models. However, concentrations of environmental tracers are affected by the complexities and mixing inherent in groundwater flow through heterogeneous media, especially in the presence of pumping wells. An analysis of flow and age distribution in the Madison aquifer in South Dakota, USA, illustrates the additional benefits and difficulties of using age as a calibration target. Alternative numerical approaches to estimating travel time and age with backward particle tracking are assessed, and the resulting estimates are used to refine estimates of effective porosity and to help assess the adequacy and credibility of the flow model.
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
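A minimal sketch of the z-ROC slope computation underlying the argument above (assuming Python with NumPy/SciPy and made-up confidence-rating counts):

```python
import numpy as np
from scipy.stats import norm

# Confidence-rating counts (6-point scale, from "sure old" to "sure new") for
# targets and lures; the numbers are made up for illustration.
target_counts = np.array([180, 90, 60, 40, 20, 10])
lure_counts   = np.array([30, 50, 70, 80, 90, 80])

# Cumulative hit and false-alarm rates at each confidence criterion, then
# z-transform; under the UVSD model the z-ROC slope estimates
# sigma_lure / sigma_target, so a steeper slope implies a relatively wider
# lure evidence distribution.
hits = np.cumsum(target_counts)[:-1] / target_counts.sum()
fas  = np.cumsum(lure_counts)[:-1] / lure_counts.sum()
z_h, z_f = norm.ppf(hits), norm.ppf(fas)
slope, intercept = np.polyfit(z_f, z_h, 1)
print("z-ROC slope:", slope, "intercept:", intercept)
```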
Distributed estimation for adaptive sensor selection in wireless sensor networks
NASA Astrophysics Data System (ADS)
Mahmoud, Magdi S.; Hassan Hamid, Matasm M.
2014-05-01
Wireless sensor networks (WSNs) are usually deployed for monitoring systems with the distributed detection and estimation of sensors. Sensor selection in WSNs is considered for target tracking. A distributed estimation scenario is considered based on the extended information filter. A cost function using the geometrical dilution of precision measure is derived for active sensor selection. A consensus-based estimation method is proposed in this paper for heterogeneous WSNs with two types of sensors. The convergence properties of the proposed estimators are analyzed under time-varying inputs. Accordingly, a new adaptive sensor selection (ASS) algorithm is presented in which the number of active sensors is adaptively determined based on the absolute local innovations vector. Simulation results show that the tracking accuracy of the ASS is comparable to that of the other algorithms.
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-01-01
This paper mainly studies and verifies the target number category-resolution method in multi-target cases and the target depth-resolution method of aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and the number resolution in multi-target cases is realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using the Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of the aerial and surface targets, the simulation results verify that there is only amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified. PMID:29649173
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor.
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-04-12
This paper mainly studies and verifies the target number category-resolution method in multi-target cases and the target depth-resolution method of aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and the number resolution in multi-target cases is realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using the Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of the aerial and surface targets, the simulation results verify that there is only amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified.
External calibration of polarimetric radar images using distributed targets
NASA Technical Reports Server (NTRS)
Yueh, Simon H.; Nghiem, S. V.; Kwok, R.
1992-01-01
A new technique is presented for calibrating polarimetric synthetic aperture radar (SAR) images using only the responses from natural distributed targets. The model for polarimetric radars is assumed to be X = cRST, where X is the measured scattering matrix corresponding to the target scattering matrix S distorted by the system matrices T and R (in general T does not equal R^t). To allow for polarimetric calibration using only distributed targets and corner reflectors, van Zyl assumed a reciprocal polarimetric radar model with T = R^t; when applied to JPL SAR data, a heuristic symmetrization procedure is used by POLCAL to compensate the phase difference between the measured HV and VH responses and then take the average of both. This heuristic approach causes some non-removable cross-polarization responses for corner reflectors, which can be avoided by a rigorous symmetrization method based on reciprocity. After the radar is made reciprocal, a new algorithm based on the responses from distributed targets with reflection symmetry is developed to estimate the cross-talk parameters. The new algorithm never experiences problems in convergence and is also found to converge faster than the existing routines implemented for POLCAL. When the new technique is implemented for the JPL polarimetric data, symmetrization and cross-talk removal are performed on a line-by-line (azimuth) basis. After the cross-talks are removed from the entire image, phase and amplitude calibrations are carried out by selecting distributed targets either with azimuthal symmetry along the looking direction or with some well-known volume and surface scattering mechanisms to estimate the relative phases and amplitude responses of the horizontal and vertical channels.
Inaniwa, Taku; Kohno, Toshiyuki; Tomitani, Takehiro; Urakabe, Eriko; Sato, Shinji; Kanazawa, Mitsutaka; Kanai, Tatsuaki
2006-09-07
In radiation therapy with highly energetic heavy ions, the conformal irradiation of a tumour can be achieved by using their advantageous features such as the good dose localization and the high relative biological effectiveness around their mean range. For effective utilization of such properties, it is necessary to evaluate the range of incident ions and the deposited dose distribution in a patient's body. Several methods have been proposed to derive such physical quantities; one of them uses positron emitters generated through projectile fragmentation reactions of incident ions with target nuclei. We have proposed the application of the maximum likelihood estimation (MLE) method to a detected annihilation gamma-ray distribution for determination of the range of incident ions in a target and we have demonstrated the effectiveness of the method with computer simulations. In this paper, a water, a polyethylene and a polymethyl methacrylate target were each irradiated with stable ¹²C, ¹⁴N, ¹⁶O and ²⁰Ne beams. Except for a few combinations of incident beams and targets, the MLE method could determine the range of incident ions R_MLE with a difference between R_MLE and the experimental range of less than 2.0 mm under the circumstance that the measurement of annihilation gamma rays was started just after the irradiation of 61.4 s and lasted for 500 s. In the process of evaluating the range of incident ions with the MLE method, we must calculate many physical quantities such as the fluence and the energy of both primary ions and fragments as a function of depth in a target. Consequently, by using them we can obtain the dose distribution. Thus, when the mean range of incident ions is determined with the MLE method, the annihilation gamma-ray distribution and the deposited dose distribution can be derived simultaneously. The derived dose distributions in water for the mono-energetic heavy-ion beams of four species were compared with those measured with an ionization chamber. The good agreement between the derived and the measured distributions implies that the deposited dose distribution in a target can be estimated from the detected annihilation gamma-ray distribution with a positron camera.
Decentralized cooperative TOA/AOA target tracking for hierarchical wireless sensor networks.
Chen, Ying-Chih; Wen, Chih-Yu
2012-11-08
This paper proposes a distributed method for cooperative target tracking in hierarchical wireless sensor networks. The concept of leader-based information processing is conducted to achieve object positioning, considering a cluster-based network topology. Random timers and local information are applied to adaptively select a sub-cluster for the localization task. The proposed energy-efficient tracking algorithm allows each sub-cluster member to locally estimate the target position with a Bayesian filtering framework and a neural networking model, and further performs estimation fusion in the leader node with the covariance intersection algorithm. This paper evaluates the merits and trade-offs of the protocol design towards developing more efficient and practical algorithms for object position estimation.
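A minimal sketch of the covariance intersection fusion step performed at the leader node (assuming Python with NumPy/SciPy and two made-up local estimates; the Bayesian filtering and neural-network stages are not shown):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two estimates with unknown cross-correlation via covariance
    intersection: P^-1 = w*P1^-1 + (1-w)*P2^-1, with w chosen to minimize
    the trace of the fused covariance."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)

    def fused_trace(w):
        return np.trace(np.linalg.inv(w * I1 + (1 - w) * I2))

    w = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method="bounded").x
    P = np.linalg.inv(w * I1 + (1 - w) * I2)
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
    return x, P, w

# Two local target-position estimates from different sub-cluster members.
x1, P1 = np.array([10.2, 4.9]), np.diag([4.0, 1.0])
x2, P2 = np.array([ 9.7, 5.3]), np.diag([1.0, 3.0])
x, P, w = covariance_intersection(x1, P1, x2, P2)
print("fused estimate:", x, "weight:", w)
```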
Evaluation of the Performance of the Distributed Phased-MIMO Sonar.
Pan, Xiang; Jiang, Jingning; Wang, Nan
2017-01-11
A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. The transmitters are widely separated to illuminate different aspects of an extended target of interest. The beamforming technique is utilized at the receiving end for enhancement of weak target echoes. A MIMO detector is designed with the estimated target position parameters within the generalized likelihood ratio test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in numerical simulations and tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated by localization of a target in at-lake experiments.
Evaluation of the Performance of the Distributed Phased-MIMO Sonar
Pan, Xiang; Jiang, Jingning; Wang, Nan
2017-01-01
A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. The transmitters are widely separated to illuminate different aspects of an extended target of interest. The beamforming technique is utilized at the receiving end for enhancement of weak target echoes. A MIMO detector is designed with the estimated target position parameters within the generalized likelihood ratio test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in numerical simulations and tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated by localization of a target in at-lake experiments. PMID:28085071
A distributed transmit beamforming synchronization strategy for multi-element radar systems
NASA Astrophysics Data System (ADS)
Xiao, Manlin; Li, Xingwen; Xu, Jikang
2017-02-01
Distributed transmit beamforming has recently been discussed as an energy-effective technique in wireless communication systems. A common feature of the various techniques is that the destination node transmits a beacon signal or feedback to assist the source nodes in synchronizing their signals. However, this approach is not appropriate for a radar system, since the destination is a non-cooperative target at an unknown location. In our paper, we propose a novel synchronization strategy for a distributed multiple-element beamforming radar system. Source nodes estimate parameters of beacon signals transmitted from the others to obtain their local synchronization information. The channel information of the phase propagation delay is transmitted to the nodes via the reflected beacon signals as well. Next, each node generates appropriate parameters to form a beamforming signal at the target. The transmit beamforming signals of all nodes combine coherently at the target, compensating for the different propagation delays. We analyse the influence of the local oscillator accuracy and the parameter estimation errors on the performance of the proposed synchronization scheme. The results of numerical simulations illustrate that this synchronization scheme is effective in enabling transmit beamforming in a distributed multi-element radar system.
Improved atmospheric effect elimination method for the roughness estimation of painted surfaces.
Zhang, Ying; Xuan, Jiabin; Zhao, Huijie; Song, Ping; Zhang, Yi; Xu, Wujian
2018-03-01
We propose a method for eliminating the atmospheric effect in polarimetric imaging remote sensing by using polarimetric imagers to simultaneously detect ground targets and skylight, which does not need calibrated targets. In addition, calculation efficiencies are improved by the skylight division method without losing estimation accuracy. Outdoor experiments are performed to obtain the polarimetric bidirectional reflectance distribution functions of painted surfaces and skylight under different weather conditions. Finally, the roughness of the painted surfaces is estimated. We find that the estimation accuracy with the proposed method is 6% on cloudy weather, while it is 30.72% without atmospheric effect elimination.
Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong
2018-05-19
In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor anti-noise ability. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves higher anti-noise performance and better cross-term suppression for multi-QFM signals with reasonable computation cost.
Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang
2017-11-01
The identification of the nonlinearity and coupling is crucial in the nonlinear target tracking problem in collaborative sensor networks. According to the adaptive Kalman filtering (KF) method, the nonlinearity and coupling can be regarded as the model noise covariance, and estimated by minimizing the innovation or residual errors of the states. However, the method requires a large time window of data to achieve reliable covariance measurement, making it impractical for nonlinear systems which are rapidly changing. To deal with this problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. The algorithm enlarges the data size of each sensor with the measurements and state estimates received from its connected sensors instead of the time window. A new cost function is set as the weighted sum of the bias and oscillation of the state to obtain the "best" estimate of the model noise covariance. The bias and oscillation of the state of each sensor are estimated by polynomial fitting of a time window of state estimates and measurements of the sensor and its neighbors, weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function using the exhaustive method. A sensor selection method is added to the algorithm to decrease the computation load of the filter and increase the scalability of the sensor network. The existence, suboptimality and stability analysis of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF against other filtering algorithms for a large class of systems.
A distributed automatic target recognition system using multiple low resolution sensors
NASA Astrophysics Data System (ADS)
Yue, Zhanfeng; Lakshmi Narasimha, Pramod; Topiwala, Pankaj
2008-04-01
In this paper, we propose a multi-agent system which uses swarming techniques to perform high accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can cooperatively share the information from low-resolution images of different looks and use this information to perform high accuracy ATR. An advanced, multiple-agent Unmanned Aerial Vehicle (UAV) systems-based approach is proposed which integrates the processing capabilities, combines detection reporting with live video exchange, and uses swarm behavior modalities that dramatically surpass individual sensor system performance levels. We employ a real-time block-based motion analysis and compensation scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene, and the effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only bitshifts and additions, yet achieves a 16X pixel resolution enhancement, and is moreover parallelizable. We develop advanced, adaptive particle-filtering based algorithms to robustly track multiple mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR system utilizes the homographies between the sensors induced by the ground plane to overlap the local observation with the received images from other UAVs. The motion of the UAVs distorts the estimated homography from frame to frame. A robust dynamic homography estimation algorithm, using homography decomposition and ground plane surface estimation, is proposed to address this.
Time-frequency analysis of backscattered signals from diffuse radar targets
NASA Astrophysics Data System (ADS)
Kenny, O. P.; Boashash, B.
1993-06-01
The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. The authors discuss time-frequency representation of the backscattered signal from a diffuse radar target. It is then shown that for point scatterers which are statistically dependent or for which the reflectivity coefficient has a nonzero mean value, reconstruction using time of flight positron emission tomography on time-frequency images is effective for estimating the scattering function of the target.
NASA Astrophysics Data System (ADS)
Kusaka, Takashi; Miyazaki, Go
2014-10-01
When monitoring target areas covered with vegetation from a satellite, it is very useful to estimate the vegetation index using the surface anisotropic reflectance, which depends on both solar and viewing geometries, from satellite data. In this study, an algorithm is described for estimating optical properties of atmospheric aerosols, such as the optical thickness (τ), the refractive index (Nr), the mixing ratio of small particles in the bimodal log-normal distribution function (C), and the bidirectional reflectance (R), from only the radiance and polarization at the 865 nm channel received by PARASOL/POLDER. The parameters of the bimodal log-normal distribution function (the mean radius r1 and standard deviation σ1 of fine aerosols, and r2, σ2 of coarse aerosols) were fixed, and these values were estimated from the monthly averaged size distribution at AERONET sites managed by NASA near the target area. Moreover, it is assumed that the contribution of the surface reflectance with directional anisotropy to the polarized radiance received by the satellite is small, because our ground-based polarization measurements of light reflected by grassland show that the degree of polarization of the reflected light is very low at the 865 nm channel. First, aerosol properties were estimated from only the polarized radiance, and then the bidirectional reflectance given by the Ross-Li BRDF model was estimated from only the total radiance at target areas in PARASOL/POLDER data over the Japanese islands taken on April 28, 2012 and April 25, 2010. The estimated optical thickness of aerosols was checked against values given at AERONET sites, and the estimated parameters of the BRDF were compared with those of vegetation measured from a radio-controlled helicopter. Consequently, it is shown that the algorithm described in the present study provides reasonable values for aerosol properties and surface bidirectional reflectance.
Adaptive Metropolis Sampling with Product Distributions
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lee, Chiu Fan
2005-01-01
The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution pi(z). It works by repeatedly sampling a separate proposal distribution T(x,x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t') : t' < t} to estimate the product distribution that has the least Kullback-Leibler distance to pi. That estimate is the information-theoretically optimal mean-field approximation to pi. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
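The following sketch illustrates the general idea in a minimal form, assuming a simple two-dimensional target and Gaussian marginals for the product proposal; it is not the authors' implementation, and the target density, refit schedule, and tuning constants are invented.

```python
import numpy as np
from scipy import stats

# Independence Metropolis-Hastings whose proposal is a product of per-dimension
# Gaussians periodically refit to the walk history (a mean-field approximation to
# the target). Everything below is an illustrative assumption.
def log_pi(x):                        # unnormalised 2-D banana-shaped target (assumed)
    return -0.5 * (x[0]**2 + (x[1] - 0.3 * x[0]**2)**2)

rng = np.random.default_rng(0)
x = np.zeros(2)
mu, sigma = np.zeros(2), np.ones(2)   # parameters of the product proposal
history = [x.copy()]

for t in range(1, 5001):
    prop = rng.normal(mu, sigma)                      # draw from the product proposal
    logq = lambda z: stats.norm.logpdf(z, mu, sigma).sum()
    log_alpha = (log_pi(prop) + logq(x)) - (log_pi(x) + logq(prop))
    if np.log(rng.uniform()) < log_alpha:
        x = prop
    history.append(x.copy())
    if t % 500 == 0:                                  # periodically refit the proposal
        h = np.asarray(history)
        mu, sigma = h.mean(axis=0), h.std(axis=0) + 1e-3
```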
NASA Astrophysics Data System (ADS)
Dai, Jun; Zhou, Haigang; Zhao, Shaoquan
2017-01-01
This paper considers a multi-scale futures hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. These parametric methods are then compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging models based on the features of the sequence distributions. In addition, if minimum LPM is selected as the hedge target, the hedging period, degree of risk aversion, and target return all affect the multi-scale hedge ratios and hedge efficiency.
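A minimal numeric sketch of the lower-partial-moment criterion mentioned above, with simulated spot and futures returns and a grid search over the hedge ratio; the threshold, moment order, and data are assumptions, not values from the study.

```python
import numpy as np

# Choose the hedge ratio h that minimises the lower partial moment
# LPM_n(tau) = E[max(tau - r_hedged, 0)**n] of r_hedged = r_spot - h * r_futures.
def lpm(returns, tau=0.0, n=2):
    shortfall = np.maximum(tau - returns, 0.0)
    return np.mean(shortfall**n)

rng = np.random.default_rng(1)
r_fut = rng.normal(0.0, 0.015, 2000)                  # simulated futures returns
r_spot = 0.9 * r_fut + rng.normal(0.0, 0.005, 2000)   # correlated spot returns

hs = np.linspace(0.0, 1.5, 151)
best_h = min(hs, key=lambda h: lpm(r_spot - h * r_fut, tau=0.0, n=2))
print("minimum-LPM hedge ratio:", round(best_h, 2))
```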
Research on Radar Micro-Doppler Feature Parameter Estimation of Propeller Aircraft
NASA Astrophysics Data System (ADS)
He, Zhihua; Tao, Feixiang; Duan, Jia; Luo, Jingsheng
2018-01-01
The micro-motion modulation effect of rotating propellers on the radar echo can be a steady feature for aircraft target recognition. Thus, micro-Doppler feature parameter estimation is key to accurate target recognition. In this paper, the radar echo of rotating propellers is modelled and simulated. On this basis, the distribution characteristics of the micro-motion modulation energy in the time, frequency, and time-frequency domains are analyzed. The micro-motion modulation energy produced by the scattering points of the rotating propellers is accumulated using the Inverse-Radon (I-Radon) transform, which can be used to estimate the micro-motion modulation parameters. Finally, the proposed parameter estimation method is shown to be effective on measured data. The micro-motion parameters of aircraft can be used as features for radar target recognition.
Object tracking algorithm based on the color histogram probability distribution
NASA Astrophysics Data System (ADS)
Li, Ning; Lu, Tongwei; Zhang, Yanduo
2018-04-01
To resolve tracking failures caused by target occlusion and by distractors similar to the target in the background, and to reduce the influence of illumination changes, this paper uses the HSV and YCbCr color channels to correct the updated target center and continuously adapts the image threshold for self-adaptive target detection. Clustering the initial obstacles gives a rough range, which shortens the threshold range and maximizes detection of the target. To improve the accuracy of the detector, a Kalman filter is added to estimate the target state area. A direction predictor based on a Markov model is added to realize target state estimation under background color interference and to enhance the detector's ability to distinguish similar objects. The experimental results show that the improved algorithm is more accurate and processes frames faster.
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential settings, yielding more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, J; Fan, J; Hu, W
Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this distribution. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between these two DVHs for each cancer, and the average relative point-wise difference is about 5%, within the clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of treatment planning.
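The following is a hedged, one-feature sketch of the KDE-based prediction idea (joint density, conditional density, marginalization over the new patient's feature distribution, then integration to a DVH); the dose-distance relationship and all numbers are simulated and do not come from the study.

```python
import numpy as np
from scipy.stats import gaussian_kde

# One predictive feature only (signed distance of an OAR voxel to the target
# boundary); all data are simulated assumptions.
rng = np.random.default_rng(2)

# "Training": paired (distance-to-target, dose) samples pooled from previous plans.
dist_train = rng.uniform(-1.0, 5.0, 2000)                          # cm (assumed)
dose_train = 60.0 * np.exp(-np.clip(dist_train, 0.0, None) / 2.0) \
             + rng.normal(0.0, 3.0, 2000)                          # Gy (assumed falloff)
joint_kde = gaussian_kde(np.vstack([dist_train, dose_train]))       # p(distance, dose)
dist_kde = gaussian_kde(dist_train)                                 # p(distance)

# New patient: distribution of the predictive feature over the OAR voxels.
dist_new = rng.uniform(0.0, 4.0, 100)

# Average the conditional p(dose | distance) over the new patient's feature samples.
dose_grid = np.linspace(0.0, 70.0, 141)
cond = np.array([joint_kde(np.vstack([np.full_like(dose_grid, d), dose_grid]))
                 / max(dist_kde(d)[0], 1e-12) for d in dist_new])
p_dose = cond.mean(axis=0)
p_dose /= np.trapz(p_dose, dose_grid)

# Cumulative DVH: fraction of OAR volume receiving at least each dose level.
dvh = 1.0 - np.cumsum(p_dose) * (dose_grid[1] - dose_grid[0])
print(np.round(dvh[::20], 3))
```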
A fusion approach for coarse-to-fine target recognition
NASA Astrophysics Data System (ADS)
Folkesson, Martin; Grönwall, Christina; Jungert, Erland
2006-04-01
A fusion approach in a query-based information system is presented. The system is designed for querying multimedia databases, and is here applied to target recognition using heterogeneous data sources. The recognition process is coarse-to-fine, with an initial attribute estimation step and a following matching step. Several sensor types and algorithms are involved in each of these two steps. The matching results are observed to be independent of the origin of the estimation results. This allows data to be distributed between algorithms in an intermediate fusion step without risk of data incest, which increases the overall chance of recognising the target. An implementation of the system is described.
Jatautis, Šarūnas; Jankauskas, Rimantas
2018-02-01
Objectives. The present study addresses the following two main questions: a) Is the pattern of skeletal ageing observed in well-known western European reference collections applicable to modern eastern Baltic populations, or are population-specific standards needed? b) What are the consequences for estimating the age-at-death distribution in the target population when differences in the estimates from reference data are not taken into account? Materials and methods. The dataset consists of a modern Lithuanian osteological reference collection, which is the only collection of this type in the eastern Baltic countries (n = 381); and two major western European reference collections, Coimbra (n = 264) and Spitalfields (n = 239). The age-related changes were evaluated using the scoring systems of Suchey-Brooks (Brooks & Suchey 1990) and Lovejoy et al. (1985), and were modelled via regression models for multinomial responses. A controlled experiment based on simulations and the Rostock Manifesto estimation protocol (Wood et al. 2002) was then carried out to assess the effect of using estimates from different reference samples and different regression models on estimates of the age-at-death distribution in the hypothetical target population. Results. The following key results were obtained in this study. a) The morphological alterations in the pubic symphysis were much faster among women than among men at comparable ages in all three reference samples. In contrast, we found no strong evidence in any of the reference samples that sex is an important factor to explain the rate of changes in the auricular surface. b) The rate of ageing in the pubic symphysis seems to be similar across the three reference samples, but there is little evidence of a similar pattern in the auricular surface. That is, the estimated rate of age-related changes in the auricular surface was much faster in the LORC and the Coimbra samples than in the Spitalfields sample. c) The results of simulations showed that the differences in the estimates from the reference data result in noticeably different age-at-death distributions in the target population. Thus, a degree of bias may be expected if estimates from the western European reference data are used to collect information on ages at death in the eastern Baltic region based on the changes in the auricular surface. d) Moreover, the bias is expected to be more pronounced if the fitted regression model improperly describes the reference data. Conclusions. Differences in the timing of age-related changes in skeletal traits are to be expected among European reference samples, and cannot be ignored when seeking to reliably estimate an age-at-death distribution in the target population. This form of bias should be taken into consideration in further studies of skeletal samples from the eastern Baltic region.
Probability of success for phase III after exploratory biomarker analysis in phase II.
Götte, Heiko; Kirchner, Marietta; Sailer, Martin Oliver
2017-05-01
The probability of success, or average power, describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in the probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to obtain the necessary information about the risks and chances associated with such subgroup selection designs. Copyright © 2017 John Wiley & Sons, Ltd.
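A small numeric sketch of the probability-of-success calculation described above, weighting a normal-approximation power curve by a normal distribution for the treatment effect derived from a hypothetical phase II estimate; all effect sizes and standard errors are invented.

```python
import numpy as np
from scipy import stats

# Probability of success = integral of Power(theta) weighted by the phase II
# distribution of the treatment effect theta. All numbers are assumptions.
theta_hat, se_phase2 = 0.30, 0.12           # phase II estimate and its standard error
se_phase3 = 0.08                            # standard error anticipated in phase III
z_alpha = stats.norm.ppf(1 - 0.025)         # one-sided 2.5% test

def power(theta):
    return 1 - stats.norm.cdf(z_alpha - theta / se_phase3)

theta_grid = np.linspace(theta_hat - 5 * se_phase2, theta_hat + 5 * se_phase2, 2001)
weight = stats.norm.pdf(theta_grid, theta_hat, se_phase2)
pos = np.trapz(power(theta_grid) * weight, theta_grid)
print("probability of success:", round(pos, 3))
```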
Setting population targets for mammals using body mass as a predictor of population persistence.
Hilbers, Jelle P; Santini, Luca; Visconti, Piero; Schipper, Aafke M; Pinto, Cecilia; Rondinini, Carlo; Huijbregts, Mark A J
2017-04-01
Conservation planning and biodiversity assessments need quantitative targets to optimize planning options and assess the adequacy of current species protection. However, targets aiming at persistence require population-specific data, which limit their use in favor of fixed and nonspecific targets, likely leading to unequal distribution of conservation efforts among species. We devised a method to derive equitable population targets; that is, quantitative targets of population size that ensure equal probabilities of persistence across a set of species and that can be easily inferred from species-specific traits. In our method, we used models of population dynamics across a range of life-history traits related to species' body mass to estimate minimum viable population targets. We applied our method to a range of body masses of mammals, from 2 g to 3825 kg. The minimum viable population targets decreased asymptotically with increasing body mass and were on the same order of magnitude as minimum viable population estimates from species- and context-specific studies. Our approach provides a compromise between pragmatic, nonspecific population targets and detailed context-specific estimates of population viability for which only limited data are available. It enables a first estimation of species-specific population targets based on a readily available trait and thus allows setting equitable targets for population persistence in large-scale and multispecies conservation assessments and planning. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Zahari, Zakirah Mohd; Zubaidah Adnan, Siti; Kanthasamy, Ramesh; Saleh, Suriyati; Samad, Noor Asma Fazli Abdul
2018-03-01
The specification of a crystal product is usually given in terms of the crystal size distribution (CSD). To this end, an optimal cooling strategy is necessary to achieve the target CSD. Direct design control involving an analytical CSD estimator is one approach that can be used to generate the set-point. However, the effects of temperature on the crystal growth rate are neglected in that estimator. Thus, the temperature dependence of the crystal growth rate needs to be considered in order to provide an accurate set-point. The objective of this work is to extend the analytical CSD estimator by employing an Arrhenius expression to cover the effects of temperature on the growth rate. The application of this work is demonstrated through a potassium sulphate crystallisation process. Based on the specified target CSD, the extended estimator is capable of generating the required set-point, and a proposed controller successfully maintained the operation at the set-point to achieve the target CSD. Comparison with other cooling strategies shows that a reduction of up to 18.2% in the total number of undesirable crystals generated from secondary nucleation is achieved relative to a linear cooling strategy.
Lawrenz, Morgan; Baron, Riccardo; Wang, Yi; McCammon, J Andrew
2012-01-01
The Independent-Trajectory Thermodynamic Integration (IT-TI) approach for free energy calculation with distributed computing is described. IT-TI utilizes diverse conformational sampling obtained from multiple, independent simulations to obtain more reliable free energy estimates compared to single TI predictions. The latter may significantly under- or over-estimate the binding free energy due to finite sampling. We exemplify the advantages of the IT-TI approach using two distinct cases of protein-ligand binding. In both cases, IT-TI yields distributions of absolute binding free energy estimates that are remarkably centered on the target experimental values. Alternative protocols for the practical and general application of IT-TI calculations are investigated. We highlight a protocol that maximizes predictive power and computational efficiency.
Joint sparsity based heterogeneous data-level fusion for target detection and estimation
NASA Astrophysics Data System (ADS)
Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe
2017-05-01
Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
Estimating changes in urban land and urban population using refined areal interpolation techniques
NASA Astrophysics Data System (ADS)
Zoraghein, Hamidreza; Leyk, Stefan
2018-05-01
The analysis of changes in urban land and population is important because the majority of future population growth will take place in urban areas. U.S. Census historically classifies urban land using population density and various land-use criteria. This study analyzes the reliability of census-defined urban lands for delineating the spatial distribution of urban population and estimating its changes over time. To overcome the problem of incompatible enumeration units between censuses, regular areal interpolation methods including Areal Weighting (AW) and Target Density Weighting (TDW), with and without spatial refinement, are implemented. The goal in this study is to estimate urban population in Massachusetts in 1990 and 2000 (source zones), within tract boundaries of the 2010 census (target zones), respectively, to create a consistent time series of comparable urban population estimates from 1990 to 2010. Spatial refinement is done using ancillary variables such as census-defined urban areas, the National Land Cover Database (NLCD) and the Global Human Settlement Layer (GHSL) as well as different combinations of them. The study results suggest that census-defined urban areas alone are not necessarily the most meaningful delineation of urban land. Instead, it appears that alternative combinations of the above-mentioned ancillary variables can better depict the spatial distribution of urban land, and thus make it possible to reduce the estimation error in transferring the urban population from source zones to target zones when running spatially-refined temporal areal interpolation.
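A minimal sketch of the simple Areal Weighting step mentioned above, in which source-zone population is allocated to target zones in proportion to overlap area (assuming the overlaps exhaust each source zone); the zones, populations, and areas are made up.

```python
# Areal Weighting (AW): population of each source zone is transferred to target
# zones in proportion to the area of overlap. Illustrative, invented zones.
source_zones = {   # zone id -> (population, {target zone id: overlap area in km^2})
    "S1": (1200, {"T1": 2.0, "T2": 3.0}),
    "S2": ( 800, {"T2": 1.0, "T3": 4.0}),
}

target_pop = {}
for pop, overlaps in source_zones.values():
    total_area = sum(overlaps.values())        # assumes overlaps cover the source zone
    for tgt, area in overlaps.items():
        target_pop[tgt] = target_pop.get(tgt, 0.0) + pop * area / total_area

print(target_pop)   # {'T1': 480.0, 'T2': 880.0, 'T3': 640.0}
```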
NASA Astrophysics Data System (ADS)
Zoraghein, H.; Leyk, S.; Balk, D.
2017-12-01
The analysis of changes in urban land and population is important because the majority of future population growth will take place in urban areas. The U.S. Census historically classifies urban land using population density and various land-use criteria. This study analyzes the reliability of census-defined urban lands for delineating the spatial distribution of urban population and estimating its changes over time. To overcome the problem of incompatible enumeration units between censuses, regular areal interpolation methods including Areal Weighting (AW) and Target Density Weighting (TDW), with and without spatial refinement, are implemented. The goal in this study is to estimate urban population in Massachusetts in 1990 and 2000 (source zones), within tract boundaries of the 2010 census (target zones), respectively, to create a consistent time series of comparable urban population estimates from 1990 to 2010. Spatial refinement is done using ancillary variables such as census-defined urban areas, the National Land Cover Database (NLCD) and the Global Human Settlement Layer (GHSL) as well as different combinations of them. The study results suggest that census-defined urban areas alone are not necessarily the most meaningful delineation of urban land. Instead it appears that alternative combinations of the above-mentioned ancillary variables can better depict the spatial distribution of urban land, and thus make it possible to reduce the estimation error in transferring the urban population from source zones to target zones when running spatially-refined temporal areal interpolation.
Proof of concept and dose estimation with binary responses under model uncertainty.
Klingenberg, B
2009-01-30
This article suggests a unified framework for testing Proof of Concept (PoC) and estimating a target dose for the benefit of a more comprehensive, robust and powerful analysis in phase II or similar clinical trials. From a pre-specified set of candidate models, we choose the ones that best describe the observed dose-response. To decide which models, if any, significantly pick up a dose effect, we construct the permutation distribution of the minimum P-value over the candidate set. This allows us to find critical values and multiplicity adjusted P-values that control the familywise error rate of declaring any spurious effect in the candidate set as significant. Model averaging is then used to estimate a target dose. Popular single or multiple contrast tests for PoC, such as the Cochran-Armitage, Dunnett or Williams tests, are only optimal for specific dose-response shapes and do not provide target dose estimates with confidence limits. A thorough evaluation and comparison of our approach to these tests reveal that its power is as good or better in detecting a dose-response under various shapes with many more additional benefits: It incorporates model uncertainty in PoC decisions and target dose estimation, yields confidence intervals for target dose estimates and extends to more complicated data structures. We illustrate our method with the analysis of a Phase II clinical trial. Copyright (c) 2008 John Wiley & Sons, Ltd.
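A hedged sketch of the permutation-based adjustment of the minimum p-value over a candidate set, using simple pooled-variance contrast tests on simulated binary dose-response data; the candidate contrasts, doses, and test form are illustrative assumptions rather than the article's exact procedure.

```python
import numpy as np
from scipy import stats

# Multiplicity adjustment: observed minimum p-value over candidate contrasts is
# referred to its permutation null distribution. All data and shapes are invented.
rng = np.random.default_rng(3)
doses = np.repeat([0, 1, 2, 3], 40)                    # four dose groups
y = rng.binomial(1, 0.15 + 0.05 * doses)               # binary responses

contrasts = {                                           # candidate dose-response shapes
    "linear":  np.array([-3, -1, 1, 3], float),
    "emax":    np.array([-3,  1, 1, 1], float),
    "sigmoid": np.array([-1, -1, 1, 1], float),
}

def min_pvalue(responses, groups):
    n = np.bincount(groups)
    phat = np.bincount(groups, weights=responses) / n
    pbar = responses.mean()
    pvals = []
    for c in contrasts.values():
        stat = np.dot(c, phat)
        var = pbar * (1 - pbar) * np.sum(c**2 / n)      # pooled variance under H0
        pvals.append(1 - stats.norm.cdf(stat / np.sqrt(var)))
    return min(pvals)

observed = min_pvalue(y, doses)
null = [min_pvalue(rng.permutation(y), doses) for _ in range(2000)]
adjusted_p = np.mean(np.array(null) <= observed)
print("multiplicity-adjusted p-value:", adjusted_p)
```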
General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes
NASA Astrophysics Data System (ADS)
Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.
1997-04-01
To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
Sato, Tatsuhiko; Watanabe, Ritsuko; Niita, Koji
2006-01-01
Estimation of the specific energy distribution in a human body exposed to complex radiation fields is of great importance in the planning of long-term space missions and heavy ion cancer therapies. With the aim of developing a tool for this estimation, the specific energy distributions in liquid water around the tracks of several HZE particles with energies up to 100 GeV n⁻¹ were calculated by performing track structure simulation with the Monte Carlo technique. In the simulation, the targets were assumed to be spherical sites with diameters from 1 nm to 1 μm. An analytical function to reproduce the simulation results was developed in order to predict the distributions of all kinds of heavy ions over a wide energy range. The incorporation of this function into the Particle and Heavy Ion Transport code System (PHITS) enables us to calculate the specific energy distributions in complex radiation fields in a short computational time.
Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J
2017-01-01
This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards, under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.
Esque, Todd C.; Inman, Rich; Nussear, Kenneth E.; Webb, Robert; Girard, M.M.; DeGayner, J.
2016-01-01
The distribution and abundance of human-caused disturbances vary greatly through space and time and are cause for concern among land stewards in natural areas of the southwestern borderlands between the USA and Mexico. Human migration and border protection along the international boundary create Unauthorized Trail and Road (UTR) networks across National Park Service lands and other natural areas. UTRs may cause soil erosion and compaction, damage to vegetation and cultural resources, and may stress wildlife or impede their movements. We quantify the density and severity of UTR disturbances in relation to soils, and compare the use of previously established targeted trail assessments (hereafter, targeted assessments) against randomly placed transects to detect trail densities at Coronado National Memorial in Arizona in 2011. While trail distributions were similar between methods, targeted assessments estimated a large portion of the park to have the lowest density category (0–5 trail encounters per km²), whereas the random transects in 2011 estimated more of the park as having the higher density categories (e.g., the 15–20 encounters per km² category). Soil vulnerability categories that were assigned, a priori, based on published soil texture and composition did not accurately predict the impact of UTRs on soil, indicating that empirical methods may be better suited for identifying severity of compaction. While the estimates of UTR encounter frequencies were greater using the random transects than the targeted assessments for a relatively short period of time, it is difficult to determine whether this difference is dependent on greater cross-border activity, differences in technique, or confounding environmental factors. Future surveys using standardized sampling techniques would increase accuracy.
NASA Astrophysics Data System (ADS)
Weidner, E. F.; Mayer, L. A.; Weber, T. C.; Jerram, K.; Jakobsson, M.; Chernykh, D.; Ananiev, R.; Mohammad, R.; Semiletov, I. P.
2016-12-01
On the Eastern Siberian Arctic Shelf (ESAS), subsea permafrost, shallow gas hydrates, and trapped free gas hold an estimated 1400 Gt of methane. Recent observations of methane bubble plumes and high concentrations of dissolved methane in the water column indicate methane release via ebullition. Methane gas released from the shallow ESAS (<50 m average depth) has high potential to be transported to the atmosphere. To directly and quantitatively address the magnitude of methane flux and the fate of rising bubbles in the ESAS, methane seeps were mapped with a broadband split-beam echosounder as part of the Swedish-Russian-US Arctic Ocean Investigation of Climate-Cryosphere-Carbon Interactions program (SWERUS-C3). Acoustic measurements were made over a broad range of frequencies (16 to 29 kHz). The broad bandwidth provided excellent discrimination of individual targets in the water column, allowing for the identification of single bubbles. Absolute bubble target strength values were determined by compensating apparent target strength measurements for beam pattern effects via standard calibration techniques. The bubble size distribution of seeps with individual bubble signatures was determined by exploiting bubble target strength models over the broad range of frequencies. For denser seeps, with potentially higher methane flux, the bubble size distribution was determined via extrapolation from seeps in similar geomorphological settings. By coupling bubble size distributions with rise velocity measurements, which are made possible by split-beam target tracking, methane gas flux can be estimated. Of the 56 identified seeps in the SWERUS data set, individual bubble scatterers were identified in more than half (31) of the seeps. Preliminary bubble size distribution results indicate bubble radii range from 0.75 to 3.0 mm, with a relatively constant bubble size distribution throughout the water column. Initial rise velocity observations indicate bubble rise velocity increases with decreasing depth, seemingly independent of bubble radius.
Consensus-based distributed estimation in multi-agent systems with time delay
NASA Astrophysics Data System (ADS)
Abdelmawgoud, Ahmed
In recent years, research in the field of cooperative control of swarms of robots, especially Unmanned Aerial Vehicles (UAVs), has advanced due to the increase in UAV applications. The ability to track targets using UAVs has a wide range of applications, not only civilian but also military. For civilian applications, UAVs can perform tasks including, but not limited to, mapping an unknown area, weather forecasting, land survey, and search and rescue missions. On the other hand, for military personnel, a UAV can track and locate a variety of objects, including the movement of enemy vehicles. Consensus problems arise in a number of applications, including coordination of UAVs, information processing in wireless sensor networks, and distributed multi-agent optimization. We consider widely studied consensus algorithms for processing data sensed by different sensors in wireless sensor networks of dynamic agents. Every agent involved in the network forms a weighted average of its own estimated value of some state with the values received from its neighboring agents. We introduce a novel consensus-based distributed estimation algorithm that reaches a consensus under time delay constraints. The proposed algorithm's performance is observed in a scenario where a swarm of UAVs measures the location of a ground maneuvering target. We assume that each UAV computes its state prediction and shares it with its neighbors only; however, the shared information reaches different agents with varying time delays. The entire group of UAVs must reach a consensus on the target state. Different scenarios are also simulated to examine the effectiveness and performance in terms of overall estimation error, disagreement between delayed and non-delayed agents, and time to reach a consensus for each parameter contributing to the proposed algorithm.
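A toy sketch of delayed weighted-average consensus, in which each agent updates its scalar estimate using neighbour values received with fixed per-agent delays; the network, gain, and delays are assumptions and this is not the thesis algorithm itself.

```python
import numpy as np

# Weighted-average consensus with delayed neighbour information (illustrative only).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}    # chain of four UAVs (assumed)
delay = {0: 1, 1: 2, 2: 1, 3: 3}                       # delay (steps) on data from each agent
eps, steps = 0.3, 60                                   # consensus gain, iterations

x = np.array([10.0, 4.0, -2.0, 7.0])                   # initial local estimates
hist = [x.copy()]
for k in range(1, steps + 1):
    new_x = x.copy()
    for i, nbrs in neighbors.items():
        for j in nbrs:
            past = hist[max(0, k - delay[j])]          # delayed state received from agent j
            new_x[i] += eps * (past[j] - x[i])
    x = new_x
    hist.append(x.copy())

print("final estimates:", np.round(x, 3))              # values converge toward agreement
```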
Extraction of the pretzelosity distribution from experimental data
Lefky, Christopher; Prokudin, Alexei
2015-02-13
We attempt an extraction of the pretzelosity distribution ($h_{1T}^{\perp}$) from preliminary COMPASS, HERMES, and JLAB experimental data on the $\sin(3\phi_h - \phi_S)$ asymmetry on proton and deuteron targets. The resulting distributions, albeit with large errors, show a tendency for the up quark pretzelosity to be positive and the down quark pretzelosity to be negative. A model relation between the pretzelosity distribution and the orbital angular momentum of quarks is used to estimate the contributions of up and down quarks.
New estimation architecture for multisensor data fusion
NASA Astrophysics Data System (ADS)
Covino, Joseph M.; Griffiths, Barry E.
1991-07-01
This paper describes a novel method of hierarchical asynchronous distributed filtering called the Net Information Approach (NIA). The NIA is a Kalman-filter-based estimation scheme for spatially distributed sensors which must retain their local optimality yet require a nearly optimal global estimate. The key idea of the NIA is that each local sensor-dedicated filter tells the global filter 'what I've learned since the last local-to-global transmission,' whereas in other estimation architectures the local-to-global transmission consists of 'what I think now.' An algorithm based on this idea has been demonstrated on a small-scale target-tracking problem with many encouraging results. Feasibility of this approach was demonstrated by comparing NIA performance to an optimal centralized Kalman filter (lower bound) via Monte Carlo simulations.
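The "transmit only what I've learned since the last send" idea can be illustrated, under strong simplifying assumptions, with a static state and linear Gaussian measurements in information form, where local filters send increments of their information matrix and vector; this stand-in is not the NIA algorithm.

```python
import numpy as np

# Simplified stand-in: two local nodes accumulate information-form quantities and,
# when polled, transmit only the increment since their last transmission; the global
# filter adds the increments. All matrices and the schedule are assumptions.
rng = np.random.default_rng(4)
x_true = np.array([3.0, -1.0])

class LocalNode:
    def __init__(self, H, R):
        self.H, self.R, self.Rinv = H, R, np.linalg.inv(R)
        self.Y, self.y = np.zeros((2, 2)), np.zeros(2)      # accumulated information
        self.sentY, self.senty = self.Y.copy(), self.y.copy()

    def measure(self):
        z = self.H @ x_true + rng.multivariate_normal(np.zeros(self.H.shape[0]), self.R)
        self.Y += self.H.T @ self.Rinv @ self.H
        self.y += self.H.T @ self.Rinv @ z

    def delta(self):                                        # "what I've learned since last send"
        dY, dy = self.Y - self.sentY, self.y - self.senty
        self.sentY, self.senty = self.Y.copy(), self.y.copy()
        return dY, dy

nodes = [LocalNode(np.eye(2), 0.5 * np.eye(2)),
         LocalNode(np.array([[1.0, 1.0]]), np.array([[0.2]]))]

Y_glob, y_glob = 1e-6 * np.eye(2), np.zeros(2)              # vague global prior
for step in range(20):
    for node in nodes:
        node.measure()
    if step % 5 == 4:                                       # asynchronous, batched sends
        for node in nodes:
            dY, dy = node.delta()
            Y_glob, y_glob = Y_glob + dY, y_glob + dy

print("global estimate:", np.linalg.solve(Y_glob, y_glob))  # should be near x_true
```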
Treating ALS by Targeting Pathological TDP-43
2017-09-01
Electron emission produced by photointeractions in a slab target
NASA Technical Reports Server (NTRS)
Thinger, B. E.; Dayton, J. A., Jr.
1973-01-01
The current density and energy spectrum of escaping electrons generated in a uniform plane slab target which is being irradiated by the gamma flux field of a nuclear reactor are calculated by using experimental gamma energy transfer coefficients, electron range and energy relations, and escape probability computations. The probability of escape and the average path length of escaping electrons are derived for an isotropic distribution of monoenergetic photons. The method of estimating the flux and energy distribution of electrons emerging from the surface is outlined, and a sample calculation is made for a 0.33-cm-thick tungsten target located next to the core of a nuclear reactor. The results are to be used as a guide in electron beam synthesis of reactor experiments.
Optical model calculations of heavy-ion target fragmentation
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Wilson, J. W.; Cucinotta, F. A.; Norbury, J. W.
1986-01-01
The fragmentation of target nuclei by relativistic protons and heavy ions is described within the context of a simple abrasion-ablation-final-state interaction model. Abrasion is described by a quantum mechanical formalism utilizing an optical model potential approximation. Nuclear charge distributions of the excited prefragments are calculated by both a hypergeometric distribution and a method based upon the zero-point oscillations of the giant dipole resonance. Excitation energies are estimated from the excess surface energy resulting from the abrasion process and the additional energy deposited by frictional spectator interactions of the abraded nucleons. The ablation probabilities are obtained from the EVA-3 computer program. Isotope production cross sections for the spallation of copper targets by relativistic protons and for the fragmenting of carbon targets by relativistic carbon, neon, and iron projectiles are calculated and compared with available experimental data.
EPA Communications Stylebook: Basic Checklist for Product Development
Determine your top messages, check if a similar EPA product already exists, identify your target audience, develop a distribution plan, estimate cost, submit for PROTRAC review, use the Stylebook, get a publication number, final review, publish.
Estimation of fish biomass using environmental DNA.
Takahara, Teruhiko; Minamoto, Toshifumi; Yamanaka, Hiroki; Doi, Hideyuki; Kawabata, Zen'ichiro
2012-01-01
Environmental DNA (eDNA) from aquatic vertebrates has recently been used to estimate the presence of a species. We hypothesized that fish release DNA into the water at a rate commensurate with their biomass. Thus, the concentration of eDNA of a target species may be used to estimate the species biomass. We developed an eDNA method to estimate the biomass of common carp (Cyprinus carpio L.) using laboratory and field experiments. In the aquarium, the concentration of eDNA changed initially, but reached an equilibrium after 6 days. Temperature had no effect on eDNA concentrations in aquaria. The concentration of eDNA was positively correlated with carp biomass in both aquaria and experimental ponds. We used this method to estimate the biomass and distribution of carp in a natural freshwater lagoon. We demonstrated that the distribution of carp eDNA concentration was explained by water temperature. Our results suggest that biomass data estimated from eDNA concentration reflects the potential distribution of common carp in the natural environment. Measuring eDNA concentration offers a non-invasive, simple, and rapid method for estimating biomass. This method could inform management plans for the conservation of ecosystems.
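A minimal sketch of the calibration-and-inversion idea, fitting a linear eDNA-biomass relationship and applying it to field measurements; all concentrations and biomasses are invented numbers.

```python
import numpy as np

# Calibrate a linear relationship between eDNA concentration and carp biomass from
# tank/pond data, then invert it to estimate biomass at field sites (invented data).
edna_cal = np.array([0.5, 1.1, 2.0, 3.8, 5.2])        # eDNA copies/mL in calibration ponds
biomass_cal = np.array([2.0, 4.5, 8.1, 15.0, 21.0])    # kg of carp per pond (assumed)

slope, intercept = np.polyfit(edna_cal, biomass_cal, 1)

edna_field = np.array([1.6, 4.4])                       # field measurements
biomass_est = slope * edna_field + intercept
print(np.round(biomass_est, 1))
```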
Estimating recharge rates with analytic element models and parameter estimation
Dripps, W.R.; Hunt, R.J.; Anderson, M.P.
2006-01-01
Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).
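A hedged toy version of the calibration idea, replacing the AE groundwater model with an assumed linear response matrix and fitting zonal recharge rates to observed base flows by least squares; it is not GFLOW/UCODE and every number is illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Adjust recharge rates of a few zones so that simulated base flows match measured
# base flows at stream gauges; the linear "response matrix" A stands in for the
# groundwater-flow model and is an invented assumption.
A = np.array([[2.1, 0.4, 0.1],       # contribution of each zone's recharge (m/yr)
              [0.5, 1.8, 0.3],       # to base flow (m^3/s) at each of four gauges
              [0.1, 0.6, 1.2],
              [0.8, 0.7, 0.9]])
observed_baseflow = np.array([0.65, 0.55, 0.35, 0.60])   # m^3/s at four gauges

def residuals(recharge):
    return A @ recharge - observed_baseflow

fit = least_squares(residuals, x0=np.full(3, 0.2), bounds=(0.0, 1.0))
print("calibrated recharge rates (m/yr):", np.round(fit.x, 3))
```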
Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William
2014-03-01
The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC0-∞ and any AUC0-∞-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC0-∞ values and the tissue-to-plasma AUC0-∞ ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC0-∞ and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC0-∞-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
2014-10-01
Estimated total cord, spared white matter, and lesion volumes were determined. Volumetric analysis for the axial distribution of the lesion and spared ... We analyzed the axial distribution of the lesion along a 3 mm segment with the epicenter in the middle. To account for spinal cord size variability ... drug-treated mice had overall smaller lesions compared with the vehicle-treated group. We next analyzed the axial distribution of spared white ...
Widths of transverse momentum distributions in intermediate-energy heavy-ion collisions.
Khan, F; Townsend, L W
1993-08-01
The need to include dynamical collision momentum transfer contributions, arising from interacting nuclear and Coulomb fields, to estimates of fragment momentum distributions is discussed. Methods based upon an optical potential model are presented. Comparisons with recent experimental data of the Siegen group for variances of transverse momentum distributions for gold nuclei at 980 A MeV fragmenting on silver foil and plastic nuclear track detector targets are made. The agreement between theory and experiment is good.
2015-09-30
The experiment was conducted in Broad Sound of Massachusetts Bay using the AUV Unicorn, a 147 dB omnidirectional Lubell source, and an open-ended steel pipe target. The steel pipe target (Figure C: open-ended steel pipe) was dropped at an approximate local coordinate position of (x, y) = (170, 155). The location was estimated using the ship's position when the target was dropped, but was only accurate to within 10-15 m. The orientation of the target was unknown.
NASA Astrophysics Data System (ADS)
Zhang, Peng; Peng, Jing; Sims, S. Richard F.
2005-05-01
In ATR applications, each feature is a convolution of an image with a filter. It is important to use the most discriminant features to produce compact representations. We propose two novel subspace methods for dimension reduction to address limitations associated with the Fukunaga-Koontz Transform (FKT). The first method, Scatter-FKT, assumes that the target is more homogeneous, while clutter can be anything other than the target and can appear anywhere. Thus, instead of estimating a clutter covariance matrix, Scatter-FKT computes a clutter scatter matrix that measures the spread of clutter from the target mean. We choose dimensions along which the difference in variation between target and clutter is most pronounced. When the target follows a Gaussian distribution, Scatter-FKT can be viewed as a generalization of FKT. The second method, Optimal Bayesian Subspace (OBS), is derived from the optimal Bayesian classifier. It selects dimensions such that the minimum Bayes error rate can be achieved. When both target and clutter follow Gaussian distributions, OBS computes optimal subspace representations. We compare our methods against FKT using character images as well as IR data.
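For reference, a sketch of the standard Fukunaga-Koontz Transform that the two proposed methods modify, applied to simulated feature vectors; the data and dimensionalities are assumptions.

```python
import numpy as np

# Standard FKT: whiten the summed covariance of target and clutter, then
# eigendecompose the whitened target covariance. Directions with eigenvalues near 1
# are target-dominant, near 0 clutter-dominant. Feature data are simulated.
rng = np.random.default_rng(5)
target = rng.normal(0.0, 1.0, (500, 10)) * np.linspace(2.0, 0.5, 10)   # target features
clutter = rng.normal(0.0, 1.0, (500, 10)) * np.linspace(0.5, 2.0, 10)  # clutter features

S_t = np.cov(target, rowvar=False)
S_c = np.cov(clutter, rowvar=False)

# Whitening transform for the summed covariance S_t + S_c.
evals, evecs = np.linalg.eigh(S_t + S_c)
P = evecs @ np.diag(evals ** -0.5)

# In the whitened space the transformed target and clutter covariances share
# eigenvectors and their eigenvalues sum to one.
lam, V = np.linalg.eigh(P.T @ S_t @ P)
order = np.argsort(lam)[::-1]
fkt_basis = P @ V[:, order[:3]]          # top-3 target-dominant FKT directions
print("target-dominant eigenvalues:", np.round(lam[order[:3]], 3))
```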
Kawase, Takatsugu; Kunieda, Etsuo; Deloar, Hossain M; Tsunoo, Takanori; Seki, Satoshi; Oku, Yohei; Saitoh, Hidetoshi; Saito, Kimiaki; Ogawa, Eileen N; Ishizaka, Akitoshi; Kameyama, Kaori; Kubo, Atsushi
2009-10-01
To validate the feasibility of developing a radiotherapy unit with kilovoltage X-rays through actual irradiation of live rabbit lungs, and to explore the practical issues anticipated in future clinical application to humans through Monte Carlo dose simulation. A converging stereotactic irradiation unit was developed, consisting of a modified diagnostic computed tomography (CT) scanner. A tiny cylindrical volume in 13 normal rabbit lungs was individually irradiated with single fractional absorbed doses of 15, 30, 45, and 60 Gy. Observational CT scanning of the whole lung was performed every 2 weeks for 30 weeks after irradiation. After 30 weeks, histopathologic specimens of the lungs were examined. Dose distribution was simulated using the Monte Carlo method, and dose-volume histograms were calculated according to the data. A trial estimation of the effect of respiratory movement on dose distribution was made. A localized hypodense change and subsequent reticular opacity around the planning target volume (PTV) were observed in CT images of rabbit lungs. Dose-volume histograms of the PTVs and organs at risk showed a focused dose distribution to the target and sufficient dose lowering in the organs at risk. Our estimate of the dose distribution, taking respiratory movement into account, revealed dose reduction in the PTV. A converging stereotactic irradiation unit using kilovoltage X-rays was able to generate a focused radiobiologic reaction in rabbit lungs. Dose-volume histogram analysis and estimated sagittal dose distribution, considering respiratory movement, clarified the characteristics of the irradiation received from this type of unit.
NASA Astrophysics Data System (ADS)
Ziegler, Hannes Moritz
Planners and managers often rely on coarse population distribution data from the census for addressing various social, economic, and environmental problems. In the analysis of physical vulnerabilities to sea-level rise, census units such as blocks or block groups are coarse relative to the required decision-making application. This study explores the benefits offered from integrating image classification and dasymetric mapping at the household level to provide detailed small area population estimates at the scale of residential buildings. In a case study of Boca Raton, FL, a sea-level rise inundation grid based on mapping methods by NOAA is overlaid on the highly detailed population distribution data to identify vulnerable residences and estimate population displacement. The enhanced spatial detail offered through this method has the potential to better guide targeted strategies for future development, mitigation, and adaptation efforts.
Mason, Doran M.; Johnson, Timothy B.; Harvey, Chris J.; Kitchell, James F.; Schram, Stephen T.; Bronte, Charles R.; Hoff, Michael H.; Lozano, Stephen J.; Trebitz, Anett S.; Schreiner, Donald R.; Lamon, E. Conrad; Hrabik, Thomas R.
2005-01-01
Lake herring (Coregonus artedi) and rainbow smelt (Osmerus mordax) are a valuable prey resource for the recovering lake trout (Salvelinus namaycush) in Lake Superior. However, prey biomass may be insufficient to support the current predator demand. In August 1997, we assessed the abundance and spatial distribution of pelagic coregonines and rainbow smelt in western Lake Superior by combining a 120 kHz split-beam acoustic system with midwater trawls. Coregonines comprised the majority of the midwater trawl catches, and the length distributions for trawl-caught fish coincided with estimated sizes of acoustic targets. Overall mean pelagic prey fish biomass was 15.56 kg ha−1, with the greatest fish biomass occurring in the Apostle Islands region (27.98 kg ha−1), followed by the Duluth, Minnesota, region (20.22 kg ha−1), and with the lowest biomass occurring in the open waters of western Lake Superior (9.46 kg ha−1). Biomass estimates from hydroacoustics were typically 2–134 times greater than estimates derived from spring bottom trawl surveys. Prey fish biomass for Lake Superior is about an order of magnitude less than acoustic estimates for Lakes Michigan and Ontario. Discrepancies observed between bioenergetics-based estimates of predator consumption of coregonines and earlier coregonine biomass estimates may be accounted for by our hydroacoustic estimates.
Choosing a therapy electron accelerator target.
Hutcheon, R M; Schriber, S O; Funk, L W; Sherman, N K
1979-01-01
Angular distributions of photon depth dose produced by 25-MeV electrons incident on several fully stopping single-element targets (C, Al, Cu, Mo, Ta, Pb) and two composite layered targets (Ni-Al, W-Al) were studied. Depth-dose curves were measured using TLD-700 (thermoluminescent dosimeter) chips embedded in lucite phantoms. Several useful therapy electron accelerator design curves were determined, including relative flattener thickness as a function of target atomic number, "effective" bremsstrahlung endpoint energy or beam "hardness" as a function of target atomic number and photon emission angle, and estimates of the shielding thickness as a function of angle required to reduce the radiation outside the treatment cone to required levels.
Online Cross-Validation-Based Ensemble Learning
Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark
2017-01-01
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419
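A minimal sketch of the online cross-validation idea described above, under the simplifying assumption of a toy streaming regression problem with two hand-written candidate learners (all names, constants, and the loss below are illustrative, not from the paper): each incoming observation is first scored as held-out data for every candidate and only then used to update them, and the candidate with the smallest cumulative out-of-sample loss is selected.

```python
import numpy as np

rng = np.random.default_rng(0)

class OnlineMean:
    """Candidate 1: predicts the running mean of y (ignores x)."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def predict(self, x):
        return self.mean
    def update(self, x, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n

class OnlineLinear:
    """Candidate 2: online least squares fitted by stochastic gradient descent."""
    def __init__(self, dim, lr=0.05):
        self.w, self.lr = np.zeros(dim), lr
    def predict(self, x):
        return float(self.w @ x)
    def update(self, x, y):
        self.w += self.lr * (y - self.predict(x)) * x

candidates = [OnlineMean(), OnlineLinear(dim=2)]
cv_loss = np.zeros(len(candidates))  # cumulative out-of-sample squared error

for t in range(1000):
    x = rng.normal(size=2)
    y = 1.5 * x[0] - 0.5 * x[1] + rng.normal(scale=0.3)
    # score the new point as held-out data for every candidate ...
    for k, est in enumerate(candidates):
        cv_loss[k] += (y - est.predict(x)) ** 2
    # ... and only then let the candidates learn from it
    for est in candidates:
        est.update(x, y)

best = int(np.argmin(cv_loss))
print("cross-validation-selected candidate:", type(candidates[best]).__name__)
```

Because every point is predicted before it is used for training, the cumulative losses remain honest estimates of out-of-sample risk, which is what justifies selecting the minimizer.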
Model-based recognition of 3D articulated target using ladar range data.
Lv, Dan; Sun, Jian-Feng; Li, Qi; Wang, Qi
2015-06-10
Ladar is suitable for 3D target recognition because ladar range images can provide rich 3D geometric surface information of targets. In this paper, we propose a part-based 3D model matching technique to recognize articulated ground military vehicles in ladar range images. The key to this approach is solving the decomposition and pose estimation of the articulated parts of targets. The articulated components were decomposed into isolated parts based on 3D geometric properties of targets, such as surface point normals, data histogram distribution, and data distance relationships. The corresponding poses of these separate parts were estimated through the linear characteristics of barrels. According to these pose parameters, all parts of the target were roughly aligned to 3D point cloud models in a library and fine matching was finally performed to accomplish 3D articulated target recognition. The recognition performance was evaluated with 1728 ladar range images of eight different articulated military vehicles with various part types and orientations. Experimental results demonstrated that the proposed approach achieved a high recognition rate.
Leow, Li-Ann; Gunn, Reece; Marinovic, Welber; Carroll, Timothy J
2017-08-01
When sensory feedback is perturbed, accurate movement is restored by a combination of implicit processes and deliberate reaiming to strategically compensate for errors. Here, we directly compare two methods used previously to dissociate implicit from explicit learning on a trial-by-trial basis: 1) asking participants to report the direction in which they aim their movements, and contrasting this with the directions of the target and the movement that they actually produce, and 2) manipulating movement preparation time. By instructing participants to reaim without a sensory perturbation, we show that reaiming is possible even with the shortest possible preparation times, particularly when targets are narrowly distributed. Nonetheless, reaiming is effortful and comes at the cost of increased variability, so we tested whether constraining preparation time is sufficient to suppress strategic reaiming during adaptation to visuomotor rotation with a broad target distribution. The rate and extent of error reduction under preparation time constraints were similar to estimates of implicit learning obtained from self-report without time pressure, suggesting that participants chose not to apply a reaiming strategy to correct visual errors under time pressure. Surprisingly, participants who reported aiming directions showed less implicit learning according to an alternative measure, obtained during trials performed without visual feedback. This suggests that the process of reporting can affect the extent or persistence of implicit learning. The data extend existing evidence that restricting preparation time can suppress explicit reaiming and provide an estimate of implicit visuomotor rotation learning that does not require participants to report their aiming directions. NEW & NOTEWORTHY During sensorimotor adaptation, implicit error-driven learning can be isolated from explicit strategy-driven reaiming by subtracting self-reported aiming directions from movement directions, or by restricting movement preparation time. Here, we compared the two methods. Restricting preparation times did not eliminate reaiming but was sufficient to suppress reaiming during adaptation with widely distributed targets. The self-report method produced a discrepancy between implicit learning estimated by subtracting aiming directions and implicit learning measured in no-feedback trials. Copyright © 2017 the American Physiological Society.
A random walk rule for phase I clinical trials.
Durham, S D; Flournoy, N; Rosenberger, W F
1997-06-01
We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
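As a hedged illustration of the kind of rule described above, the following sketch simulates a biased-coin up-and-down allocation that centers assignments near the dose whose toxicity probability equals a target quantile gamma; the dose grid, logistic toxicity curve, and transition probabilities are assumptions for illustration rather than the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(1)
doses = np.arange(1, 7)                       # six hypothetical dose levels
p_tox = 1.0 / (1.0 + np.exp(-(doses - 4.0)))  # hypothetical two-parameter logistic curve
gamma = 0.3                                   # target toxicity quantile (< 0.5)
b = gamma / (1.0 - gamma)                     # biased-coin probability of stepping up

level = 0
visits = np.zeros(len(doses), dtype=int)
for patient in range(200):
    visits[level] += 1
    toxic = rng.random() < p_tox[level]
    if toxic:                                  # step down after a toxic response
        level = max(level - 1, 0)
    elif rng.random() < b:                     # otherwise step up with probability b
        level = min(level + 1, len(doses) - 1)
    # otherwise stay at the same dose level

print("assignment frequencies:", np.round(visits / visits.sum(), 3))
```

With these transition probabilities the random walk spends most of its time near the dose whose toxicity probability is close to gamma, which is the unimodal centering property the abstract describes.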
Estimating the Fully Burdened Cost of Fuel Using an Input-Output Model - A Micro-Level Analysis
2011-09-01
Figure 2 illustrates the multilocation distribution model used by Lu and Rencheng (2007) to evaluate an international supply chain for a multilocation production system, linking material vendors, production plants, and target markets.
A long-term target detection approach in infrared image sequence
NASA Astrophysics Data System (ADS)
Li, Hang; Zhang, Qi; Wang, Xin; Hu, Chao
2016-10-01
An automatic target detection method used in a long-term infrared (IR) image sequence from a moving platform is proposed. Firstly, based on POME (the principle of maximum entropy), target candidates are iteratively segmented. Then the real target is captured via two different selection approaches. At the beginning of the image sequence, the genuine target with little texture is discriminated from other candidates by using a contrast-based confidence measure. On the other hand, when the target becomes larger, we apply an online EM method to estimate and update the distributions of the target's size and position based on the prior detection results, and then recognize the genuine one which satisfies both the constraints of size and position. Experimental results demonstrate that the presented method is accurate, robust and efficient.
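The online update of the target's size and position distributions can be sketched under the assumption of simple Gaussian models with exponential forgetting (a stand-in for the online EM step described above; all constants, candidate values, and names below are illustrative, not from the paper).

```python
import numpy as np

class OnlineGaussian:
    """Running Gaussian model updated with exponential forgetting factor alpha."""
    def __init__(self, mean, var, alpha=0.1):
        self.mean, self.var, self.alpha = float(mean), float(var), alpha
    def update(self, x):
        d = x - self.mean
        self.mean += self.alpha * d
        self.var = (1.0 - self.alpha) * (self.var + self.alpha * d * d)
    def likelihood(self, x):
        return np.exp(-0.5 * (x - self.mean) ** 2 / self.var) / np.sqrt(2 * np.pi * self.var)

size_model = OnlineGaussian(mean=40.0, var=25.0)    # target area in pixels (assumed prior)
pos_model = OnlineGaussian(mean=64.0, var=100.0)    # target row coordinate (assumed prior)

# candidate detections in the current frame: (size, position)
candidates = [(38.0, 66.0), (120.0, 10.0), (43.0, 61.0)]
scores = [size_model.likelihood(s) * pos_model.likelihood(p) for s, p in candidates]
best_size, best_pos = candidates[int(np.argmax(scores))]

# the winning candidate is taken as the genuine target and feeds the next update
size_model.update(best_size)
pos_model.update(best_pos)
print("selected candidate (size, position):", (best_size, best_pos))
```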
Poor drug distribution as a possible explanation for the results of the PRECISE trial.
Sampson, John H; Archer, Gary; Pedain, Christoph; Wembacher-Schröder, Eva; Westphal, Manfred; Kunwar, Sandeep; Vogelbaum, Michael A; Coan, April; Herndon, James E; Raghavan, Raghu; Brady, Martin L; Reardon, David A; Friedman, Allan H; Friedman, Henry S; Rodríguez-Ponce, M Inmaculada; Chang, Susan M; Mittermeyer, Stephan; Croteau, David; Puri, Raj K
2010-08-01
Convection-enhanced delivery (CED) is a novel intracerebral drug delivery technique with considerable promise for delivering therapeutic agents throughout the CNS. Despite this promise, Phase III clinical trials employing CED have failed to meet clinical end points. Although this may be due to inactive agents or a failure to rigorously validate drug targets, the authors have previously demonstrated that catheter positioning plays a major role in drug distribution using this technique. The purpose of the present work was to retrospectively analyze the expected drug distribution based on catheter positioning data available from the CED arm of the PRECISE trial. Data on catheter positioning from all patients randomized to the CED arm of the PRECISE trial were available for analyses. BrainLAB iPlan Flow software was used to estimate the expected drug distribution. Only 49.8% of catheters met all positioning criteria. Still, catheter positioning score (hazard ratio 0.93, p = 0.043) and the number of optimally positioned catheters (hazard ratio 0.72, p = 0.038) had a significant effect on progression-free survival. Estimated coverage of relevant target volumes was low, however, with only 20.1% of the 2-cm penumbra surrounding the resection cavity covered on average. Although tumor location and resection cavity volume had no effect on coverage volume, estimations of drug delivery to relevant target volumes did correlate well with catheter score (p < 0.003), and optimally positioned catheters had larger coverage volumes (p < 0.002). Only overall survival (p = 0.006) was higher for investigators considered experienced after adjusting for patient age and Karnofsky Performance Scale score. The potential efficacy of drugs delivered by CED may be severely constrained by ineffective delivery in many patients. Routine use of software algorithms and alternative catheter designs and infusion parameters may improve the efficacy of drugs delivered by CED.
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.
1999-01-01
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.
1999-02-09
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.
Quantitative Analysis of Radar Returns from Insects
NASA Technical Reports Server (NTRS)
Riley, J. R.
1979-01-01
When the number of flying insects is low enough to permit their resolution as individual radar targets, quantitative estimates of their aerial density are developed. Accurate measurements of heading distribution, using a rotating-polarization radar to enhance the wingbeat frequency method of identification, are presented.
Adaptation of Decoy Fusion Strategy for Existing Multi-Stage Search Workflows
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Levitsky, Lev I.; Gorshkov, Mikhail V.
2016-09-01
A number of proteomic database search engines implement multi-stage strategies aiming at increasing the sensitivity of proteome analysis. These approaches often employ a subset of the original database for the secondary stage of analysis. However, if the target-decoy approach (TDA) is used for false discovery rate (FDR) estimation, the multi-stage strategies may violate the underlying assumption of TDA that false matches are distributed uniformly across the target and decoy databases. This violation occurs if the numbers of target and decoy proteins selected for the second search are not equal. Here, we propose a method of decoy database generation based on the previously reported decoy fusion strategy. This method allows unbiased TDA-based FDR estimation in multi-stage searches and can be easily integrated into existing workflows utilizing popular search engines and post-search algorithms.
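For context, a minimal sketch of standard TDA-based FDR estimation, which the decoy fusion strategy is designed to keep unbiased in multi-stage searches; the toy score model and the 1% threshold below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def tda_qvalues(scores, is_decoy):
    """Estimate q-values as (#decoy / #target) among matches above each score threshold."""
    order = np.argsort(scores)[::-1]                  # best scores first
    decoy_cum = np.cumsum(is_decoy[order])
    target_cum = np.cumsum(~is_decoy[order])
    fdr = decoy_cum / np.maximum(target_cum, 1)
    q = np.minimum.accumulate(fdr[::-1])[::-1]        # enforce monotone q-values
    return scores[order], q

# toy data: target scores are a signal/null mixture, decoy scores are pure null
rng = np.random.default_rng(2)
target_scores = np.concatenate([rng.normal(2.0, 1.0, 300), rng.normal(0.0, 1.0, 700)])
decoy_scores = rng.normal(0.0, 1.0, 1000)
scores = np.concatenate([target_scores, decoy_scores])
is_decoy = np.concatenate([np.zeros(1000, bool), np.ones(1000, bool)])

thresholds, q = tda_qvalues(scores, is_decoy)
print("matches passing the 1% FDR threshold:", int(np.sum(q <= 0.01)))
```

The estimator is unbiased only when false matches are equally likely to hit targets and decoys, which is exactly the assumption that unequal second-stage target and decoy subsets would break.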
1990-05-01
Approved for public release. ... in the Red Book should obtain a copy of the Engineering Design Handbook, Army Weapon System Analysis, Part One, DARCOM-P 706-101, November 1977; a companion volume, Army Weapon System Analysis, Part Two, DARCOM-P 706-102, October 1979, also makes worthwhile study. Both of these documents, written by
Optimal regionalization of extreme value distributions for flood estimation
NASA Astrophysics Data System (ADS)
Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.
2018-01-01
Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.
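A minimal single-site sketch of the underlying return-level computation, assuming annual maximum discharges follow a generalized extreme value distribution (the data here are synthetic; a regionalized estimate would instead pool information from the optimally chosen group of similar gauged stations).

```python
import numpy as np
from scipy.stats import genextreme

# synthetic annual maximum discharges (m^3/s) for a single gauged station
annual_maxima = genextreme.rvs(-0.1, loc=300.0, scale=80.0, size=60, random_state=3)

shape, loc, scale = genextreme.fit(annual_maxima)   # maximum-likelihood GEV fit
T = 100.0                                           # return period in years
return_level = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
print(f"estimated {int(T)}-year return level: {return_level:.1f} m^3/s")
```

The T-year return level is simply the (1 − 1/T) quantile of the fitted distribution; regionalization improves this estimate by constraining the GEV parameters with data from hydrologically similar stations.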
Frequency-Modulated, Continuous-Wave Laser Ranging Using Photon-Counting Detectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Barber, Zeb W.; Dahl, Jason
2014-01-01
Optical ranging is a problem of estimating the round-trip flight time of a phase- or amplitude-modulated optical beam that reflects off of a target. Frequency-modulated, continuous-wave (FMCW) ranging systems obtain this estimate by performing an interferometric measurement between a local frequency-modulated laser beam and a delayed copy returning from the target. The range estimate is formed by mixing the target-return field with the local reference field on a beamsplitter and detecting the resultant beat modulation. In conventional FMCW ranging, the source modulation is linear in instantaneous frequency, the reference-arm field has many more photons than the target-return field, and the time-of-flight estimate is generated by balanced difference-detection of the beamsplitter output, followed by a frequency-domain peak search. This work focused on determining the maximum-likelihood (ML) estimation algorithm when continuous-time photon-counting detectors are used. It is founded on a rigorous statistical characterization of the (random) photoelectron emission times as a function of the incident optical field, including the deleterious effects caused by dark current and dead time. These statistics enable derivation of the Cramér-Rao lower bound (CRB) on the accuracy of FMCW ranging, and derivation of the ML estimator, whose performance approaches this bound at high photon flux. The estimation algorithm was developed, and its optimality properties were shown in simulation. Experimental data show that it performs better than the conventional estimation algorithms used. The demonstrated improvement is a factor of 1.414 over frequency-domain-based estimation. If the target-interrogating photons and the local reference field photons are costed equally, the optimal allocation of photons between these two arms is to have them equally distributed. This is different from the state of the art, in which the local field is stronger than the target return. The optimal processing of the photocurrent processes at the outputs of the two detectors is to perform log-matched filtering followed by a summation and peak detection. This implies that neither difference detection, nor Fourier-domain peak detection, which are the staples of the state-of-the-art systems, is optimal when a weak local oscillator is employed.
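A hedged sketch of the estimator structure described above, in our own notation rather than the report's: with photon arrival times t_1, ..., t_K recorded over an observation window of length T and a mean photoelectron rate lambda(t; tau) implied by a candidate round-trip delay tau, the Poisson-process log-likelihood is

```latex
\ell(\tau) \;=\; \sum_{k=1}^{K} \log \lambda(t_k;\tau) \;-\; \int_{0}^{T} \lambda(t;\tau)\,dt,
\qquad
\hat{\tau}_{\mathrm{ML}} \;=\; \arg\max_{\tau}\,\ell(\tau).
```

The sum of log-rates corresponds to the "log-matched filtering followed by a summation," and the maximization over tau is the peak detection; when the integrated rate varies little with tau over a long frequency sweep, the second term can be neglected.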
Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.
2006-01-01
As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code: (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models. © 2005 Elsevier B.V. All rights reserved.
Detection, monitoring, and evaluation of spatio-temporal change in mosquito populations
USDA-ARS?s Scientific Manuscript database
USDA-ARS scientists seek to implement a sampling and global information technology based system that can be used for mosquito detection and trap deployment, to estimate mosquito species composition and distribution in space and time, and for targeting and evaluation of mosquito controls. Knowledge ...
Online cross-validation-based ensemble learning.
Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark
2018-01-30
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
A testable model of earthquake probability based on changes in mean event size
NASA Astrophysics Data System (ADS)
Imoto, Masajiro
2003-02-01
We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that mean event size tends to increase as the critical point is approached. A parameter describing these changes was defined using a simple weighted-average procedure. In order to obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which referred to the value immediately prior to each target event. The background distribution is symmetric, with its center corresponding to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature, which tends to decrease the b value. The difference in the distributions between the two groups was significant and provided us with a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study in a range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, which is the only event during the period of prospective testing.
Polynomial chaos representation of databases on manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2017-04-15
Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.
Sharing global CO2 emission reductions among one billion high emitters
Chakravarty, Shoibal; Chikkatur, Ananth; de Coninck, Heleen; Pacala, Stephen; Socolow, Robert; Tavoni, Massimo
2009-01-01
We present a framework for allocating a global carbon reduction target among nations, in which the concept of “common but differentiated responsibilities” refers to the emissions of individuals instead of nations. We use the income distribution of a country to estimate how its fossil fuel CO2 emissions are distributed among its citizens, from which we build up a global CO2 distribution. We then propose a simple rule to derive a universal cap on global individual emissions and find corresponding limits on national aggregate emissions from this cap. All of the world's high CO2-emitting individuals are treated the same, regardless of where they live. Any future global emission goal (target and time frame) can be converted into national reduction targets, which are determined by “Business as Usual” projections of national carbon emissions and in-country income distributions. For example, reducing projected global emissions in 2030 by 13 GtCO2 would require the engagement of 1.13 billion high emitters, roughly equally distributed in 4 regions: the U.S., the OECD minus the U.S., China, and the non-OECD minus China. We also modify our methodology to place a floor on emissions of the world's lowest CO2 emitters and demonstrate that climate mitigation and alleviation of extreme poverty are largely decoupled. PMID:19581586
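A minimal sketch of the cap-finding step, assuming individual emissions have already been built up from national income distributions (here replaced by a single hypothetical lognormal sample) and that a universal cap is chosen so that trimming all individuals to the cap yields the desired global reduction; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical per-person emissions (tCO2/yr); in the paper these are built up
# country by country from national income distributions and total emissions
emissions = rng.lognormal(mean=1.0, sigma=1.2, size=1_000_000)

def reduction_at_cap(cap):
    """Global reduction achieved if every individual above the cap is trimmed to it."""
    return np.sum(np.maximum(emissions - cap, 0.0))

target_reduction = 0.2 * emissions.sum()   # e.g., cut 20% of projected emissions

# bisection: the reduction is a decreasing function of the cap
lo, hi = 0.0, float(emissions.max())
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if reduction_at_cap(mid) > target_reduction:
        lo = mid
    else:
        hi = mid
cap = 0.5 * (lo + hi)

print(f"universal individual cap: {cap:.2f} tCO2/yr;",
      f"high emitters above the cap: {int(np.sum(emissions > cap)):,}")
```

National reduction targets then follow by applying the same cap within each country's own emission distribution.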
Yi, WenJun; Wang, Ping; Fu, MeiCheng; Tan, JiChun; Zhu, Jubo; Li, XiuJian
2017-07-10
In order to overcome the shortcomings of the target image restoration method for longitudinal laser tomography using self-calibration, a more general restoration method based on backscattering medium images associated with prior parameters is developed for common conditions. The system parameters are extracted from pre-calibration, and the LIDAR ratio is estimated according to the medium types. Assisted by these prior parameters, the degradation caused by inhomogeneous turbid media can be established with the backscattering medium images, which can further be used for removal of the interferences of turbid media. The results of simulations and experiments demonstrate that the proposed image restoration method can effectively eliminate the inhomogeneous interferences of turbid media and accurately recover the reflectivity distribution of targets behind inhomogeneous turbid media. Furthermore, the restoration method can work beyond the limitation of the previous method, which only works well under the conditions of localized turbid attenuations and some types of targets with fairly uniform reflectivity distributions.
Distributed multi-sensor particle filter for bearings-only tracking
NASA Astrophysics Data System (ADS)
Zhang, Jungen; Ji, Hongbing
2012-02-01
In this article, the classical bearings-only tracking (BOT) problem for a single target is addressed, which belongs to the general class of non-linear filtering problems. Because the radial distance observability of the target is poor, algorithms based on sequential Monte Carlo (particle filtering, PF) methods generally show instability and filter divergence. A new stable distributed multi-sensor PF method is proposed for BOT. The sensors process their measurements at their sites using a hierarchical PF approach, which transforms the BOT problem from Cartesian coordinates to logarithmic polar coordinates and separates the observable components from the unobservable components of the target. In the fusion centre, the target state can be estimated by utilising the multi-sensor optimal information fusion rule. Furthermore, the computation of a theoretical Cramer-Rao lower bound is given for the multi-sensor BOT problem. Simulation results illustrate that the proposed tracking method can provide better performance than the traditional PF method.
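For reference, a minimal bootstrap particle filter for single-sensor bearings-only tracking of a constant-velocity target; this illustrates only the generic predict-weight-resample cycle, not the distributed, logarithmic-polar-coordinate method proposed in the article, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, steps, n_particles = 1.0, 50, 2000
sigma_bearing = 0.02                       # bearing noise (rad)
sensor = np.array([0.0, 0.0])              # single static sensor at the origin

# state = (x, y, vx, vy); constant-velocity transition matrix
F = np.array([[1.0, 0.0, dt, 0.0],
              [0.0, 1.0, 0.0, dt],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
truth = np.array([2.0, 5.0, 0.2, -0.05])

# particles initialised around a rough prior on the true state
particles = truth + rng.normal(scale=[1.0, 1.0, 0.1, 0.1], size=(n_particles, 4))
weights = np.full(n_particles, 1.0 / n_particles)

for t in range(steps):
    truth = F @ truth
    z = np.arctan2(truth[1] - sensor[1], truth[0] - sensor[0]) + rng.normal(0.0, sigma_bearing)

    # predict: propagate particles with small process noise
    particles = particles @ F.T + rng.normal(scale=[0.05, 0.05, 0.01, 0.01],
                                             size=particles.shape)

    # update: weight each particle by its bearing likelihood (wrapped innovation)
    pred = np.arctan2(particles[:, 1] - sensor[1], particles[:, 0] - sensor[0])
    innov = (z - pred + np.pi) % (2.0 * np.pi) - np.pi
    weights = np.exp(-0.5 * (innov / sigma_bearing) ** 2) + 1e-300
    weights /= weights.sum()

    # resample (multinomial) to avoid weight degeneracy
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    particles = particles[idx]
    weights.fill(1.0 / n_particles)

print("final position estimate:", particles[:, :2].mean(axis=0), "truth:", truth[:2])
```

With a single static sensor the range remains weakly observable, which is exactly the degeneracy that motivates the multi-sensor fusion in the article.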
NASA Astrophysics Data System (ADS)
Edwards, R. D.; Sinclair, M. A.; Goldsack, T. J.; Krushelnick, K.; Beg, F. N.; Clark, E. L.; Dangor, A. E.; Najmudin, Z.; Tatarakis, M.; Walton, B.; Zepf, M.; Ledingham, K. W. D.; Spencer, I.; Norreys, P. A.; Clarke, R. J.; Kodama, R.; Toyama, Y.; Tampo, M.
2002-03-01
The application of high intensity laser-produced gamma rays is discussed with regard to picosecond resolution deep-penetration radiography. The spectrum and angular distribution of these gamma rays is measured using an array of thermoluminescent detectors for both an underdense (gas) target and an overdense (solid) target. It is found that the use of an underdense target in a laser plasma accelerator configuration produces a much more intense and directional source. The peak dose is also increased significantly. Radiography is demonstrated in these experiments and the source size is also estimated.
Using eDNA to estimate distribution of fish species in a complex river system (presentation)
Environmental DNA (eDNA) analysis of biological material shed by aquatic organisms is a noninvasive genetic tool that can improve efficiency and reduce costs associated with species detection in aquatic systems. eDNA methods are widely used to assess presence/absence of a target ...
2009-11-01
...times were shorter, collisions were fewer, and more targets were photographed. Effects of video game experience and spatial ability were also ... The Perception and Estimation of Egocentric Distance in Real and ... development by RDECOM-STTC, and ARI is using the AW-VTT to research challenges in the use of distributed, game-based simulations for training.
Laser radar cross-section estimation from high-resolution image data.
Osche, G R; Seeber, K N; Lok, Y F; Young, D S
1992-05-10
A methodology for the estimation of ladar cross sections from high-resolution image data of geometrically complex targets is presented. Coherent CO2 laser radar was used to generate high-resolution amplitude imagery of a UC-8 Buffalo test aircraft at a range of 1.3 km at nine different aspect angles. The average target ladar cross section was synthesized from these data and calculated to be sigma(T) = 15.4 dBsm, which is similar to the expected microwave radar cross sections. The aspect angle dependence of the cross section shows pronounced peaks at nose on and broadside, which are also in agreement with radar results. Strong variations in both the mean amplitude and the statistical distributions of amplitude with the aspect angle have also been observed. The relative mix of diffuse and specular returns causes significant deviations from a simple Lambertian or Swerling II target, especially at broadside where large normal surfaces are present.
The airborne laser ranging system, its capabilities and applications
NASA Technical Reports Server (NTRS)
Kahn, W. D.; Degnan, J. J.; Englar, T. S., Jr.
1982-01-01
The airborne laser ranging system is a multibeam short-pulse laser ranging system on board an aircraft. It simultaneously measures the distances between the aircraft and six laser retroreflectors (targets) deployed on the Earth's surface. The system can interrogate over 100 targets distributed over an area of 25,000 square kilometers in a matter of hours. Potentially, a total of 1.3 million individual range measurements can be made in a six-hour flight. The precision of these range measurements is approximately ±1 cm. These measurements are used in a procedure, basically an extension of trilateration techniques, to derive the intersite vector between the laser ground targets. By repeating the estimation of the intersite vector, strain and strain-rate errors can be estimated. These quantities are essential for crustal dynamic studies, which include determination and monitoring of regional strain in the vicinity of active fault zones, land subsidence, and edifice building preceding volcanic eruptions.
Joint Bearing and Range Estimation of Multiple Objects from Time-Frequency Analysis.
Liu, Jeng-Cheng; Cheng, Yuang-Tung; Hung, Hsien-Sen
2018-01-19
Direction-of-arrival (DOA) and range estimation is an important issue in sonar signal processing. In this paper, a novel approach using the Hilbert-Huang transform (HHT) is proposed for joint bearing and range estimation of multiple targets based on a uniform linear array (ULA) of hydrophones. The ULA is based on micro-electro-mechanical systems (MEMS) technology and thus has the attractive features of small size, high sensitivity, and low cost, making it suitable for Autonomous Underwater Vehicle (AUV) operations. The proposed target localization method has the following advantages: only a single snapshot of data is needed and real-time processing is feasible. The proposed algorithm transforms a very complicated nonlinear estimation problem into a simple, nearly linear one via time-frequency distribution (TFD) theory and is verified with HHT. Theoretical discussions of the resolution issue are also provided to facilitate the design of a MEMS sensor with high sensitivity. Simulation results are shown to verify the effectiveness of the proposed method.
Progress Toward Efficient Laminar Flow Analysis and Design
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Campbell, Matthew L.; Streit, Thomas
2011-01-01
A multi-fidelity system of computer codes for the analysis and design of vehicles having extensive areas of laminar flow is under development at the NASA Langley Research Center. The overall approach consists of the loose coupling of a flow solver, a transition prediction method and a design module using shell scripts, along with interface modules to prepare the input for each method. This approach allows the user to select the flow solver and transition prediction module, as well as run mode for each code, based on the fidelity most compatible with the problem and available resources. The design module can be any method that designs to a specified target pressure distribution. In addition to the interface modules, two new components have been developed: 1) an efficient, empirical transition prediction module (MATTC) that provides n-factor growth distributions without requiring boundary layer information; and 2) an automated target pressure generation code (ATPG) that develops a target pressure distribution that meets a variety of flow and geometry constraints. The ATPG code also includes empirical estimates of several drag components to allow the optimization of the target pressure distribution. The current system has been developed for the design of subsonic and transonic airfoils and wings, but may be extendable to other speed ranges and components. Several analysis and design examples are included to demonstrate the current capabilities of the system.
Effects of window size and shape on accuracy of subpixel centroid estimation of target images
NASA Technical Reports Server (NTRS)
Welch, Sharon S.
1993-01-01
A new algorithm is presented for increasing the accuracy of subpixel centroid estimation of (nearly) point target images in cases where the signal-to-noise ratio is low and the signal amplitude and shape vary from frame to frame. In the algorithm, the centroid is calculated over a data window that is matched in width to the image distribution. Fourier analysis is used to explain the dependency of the centroid estimate on the size of the data window, and simulation and experimental results are presented which demonstrate the effects of window size for two different noise models. The effects of window shape were also investigated for uniform and Gaussian-shaped windows. The new algorithm was developed to improve the dynamic range of a close-range photogrammetric tracking system that provides feedback for control of a large gap magnetic suspension system (LGMSS).
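A minimal sketch of centroiding over a data window matched to the spot width, assuming a Gaussian-shaped point-target image with additive noise; the spot model, window half-width, and noise level are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(6)
size, true_x, true_y, sigma_spot = 32, 15.3, 16.7, 1.5

# synthetic near-point target image: Gaussian spot plus additive sensor noise
yy, xx = np.mgrid[0:size, 0:size]
image = np.exp(-((xx - true_x) ** 2 + (yy - true_y) ** 2) / (2.0 * sigma_spot ** 2))
image += rng.normal(scale=0.02, size=image.shape)

def windowed_centroid(img, col, row, half_width):
    """Centroid computed only over a square window matched to the spot width."""
    x0, x1 = max(col - half_width, 0), min(col + half_width + 1, img.shape[1])
    y0, y1 = max(row - half_width, 0), min(row + half_width + 1, img.shape[0])
    win = np.clip(img[y0:y1, x0:x1], 0.0, None)   # suppress negative noise samples
    ys, xs = np.mgrid[y0:y1, x0:x1]
    total = win.sum()
    return (xs * win).sum() / total, (ys * win).sum() / total

row, col = np.unravel_index(np.argmax(image), image.shape)  # coarse peak location
est_x, est_y = windowed_centroid(image, col, row, half_width=4)
print(f"estimated centroid: ({est_x:.2f}, {est_y:.2f}); true: ({true_x}, {true_y})")
```

Shrinking the window toward the spot width suppresses the noise-only pixels that would otherwise bias the centroid at low signal-to-noise ratio, which is the effect analyzed in the abstract.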
A New Model for Acquiescence at the Interface of Psychometrics and Cognitive Psychology.
Plieninger, Hansjörg; Heck, Daniel W
2018-05-29
When measuring psychological traits, one has to consider that respondents often show content-unrelated response behavior in answering questionnaires. To disentangle the target trait and two such response styles, extreme responding and midpoint responding, Böckenholt (2012a) developed an item response model based on a latent processing tree structure. We propose a theoretically motivated extension of this model to also measure acquiescence, the tendency to agree with both regular and reversed items. Substantively, our approach builds on multinomial processing tree (MPT) models that are used in cognitive psychology to disentangle qualitatively distinct processes. Accordingly, the new model for response styles assumes a mixture distribution of affirmative responses, which are either determined by the underlying target trait or by acquiescence. In order to estimate the model parameters, we rely on Bayesian hierarchical estimation of MPT models. In simulations, we show that the model provides unbiased estimates of response styles and the target trait, and we compare the new model and Böckenholt's model in a recovery study. An empirical example from personality psychology is used for illustrative purposes.
An Approach to the Constrained Design of Natural Laminar Flow Airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford E.
1997-01-01
A design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. After obtaining the initial airfoil's pressure distribution at the design lift coefficient using an Euler solver coupled with an integral turbulent boundary layer method, the calculations from a laminar boundary layer solver are used by a stability analysis code to obtain estimates of the transition location (using N-Factors) for the starting airfoil. A new design method then calculates a target pressure distribution that will increase the laminar flow toward the desired amount. An airfoil design method is then iteratively used to design an airfoil that possesses that target pressure distribution. The new airfoil's boundary layer stability characteristics are determined, and this iterative process continues until an airfoil is designed that meets the laminar flow requirement and as many of the other constraints as possible.
An approach to the constrained design of natural laminar flow airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford Earl
1995-01-01
A design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. After obtaining the initial airfoil's pressure distribution at the design lift coefficient using an Euler solver coupled with an integral turbulent boundary layer method, the calculations from a laminar boundary layer solver are used by a stability analysis code to obtain estimates of the transition location (using N-Factors) for the starting airfoil. A new design method then calculates a target pressure distribution that will increase the laminar flow toward the desired amount. An airfoil design method is then iteratively used to design an airfoil that possesses that target pressure distribution. The new airfoil's boundary layer stability characteristics are determined, and this iterative process continues until an airfoil is designed that meets the laminar flow requirement and as many of the other constraints as possible.
Analysis of a simulation algorithm for direct brain drug delivery
Rosenbluth, Kathryn Hammond; Eschermann, Jan Felix; Mittermeyer, Gabriele; Thomson, Rowena; Mittermeyer, Stephan; Bankiewicz, Krystof S.
2011-01-01
Convection enhanced delivery (CED) achieves targeted delivery of drugs with a pressure-driven infusion through a cannula placed stereotactically in the brain. This technique bypasses the blood brain barrier and gives precise distributions of drugs, minimizing off-target effects of compounds such as viral vectors for gene therapy or toxic chemotherapy agents. The exact distribution is affected by the cannula positioning, flow rate and underlying tissue structure. This study presents an analysis of a simulation algorithm for predicting the distribution using baseline MRI images acquired prior to inserting the cannula. The MRI images included diffusion tensor imaging (DTI) to estimate the tissue properties. The algorithm was adapted for the devices and protocols identified for upcoming trials and validated with direct MRI visualization of Gadolinium in 20 infusions in non-human primates. We found strong agreement between the size and location of the simulated and gadolinium volumes, demonstrating the clinical utility of this surgical planning algorithm. PMID:21945468
Seasonal influenza vaccine dose distribution in 157 countries (2004-2011).
Palache, Abraham; Oriol-Mathieu, Valerie; Abelin, Atika; Music, Tamara
2014-11-12
Globally there are an estimated 3-5 million cases of severe influenza illness every year, resulting in 250,000-500,000 deaths. At the World Health Assembly in 2003, the World Health Organization (WHO) resolved to increase influenza vaccine coverage rates (VCR) for high-risk groups, particularly focusing on at least 75% of the elderly by 2010. But systematic worldwide data have not been available to assist public health authorities to monitor vaccine uptake and review progress toward vaccination coverage targets. In 2008, the International Federation of Pharmaceutical Manufacturers and Associations Influenza Vaccine Supply task force (IFPMA IVS) developed a survey methodology to assess global influenza vaccine dose distribution. The current survey results represent 2011 data and demonstrate the evolution of the absolute number of doses distributed between 2004 and 2011 inclusive, and the evolution in the per capita doses distributed in 2008-2011. Global distribution of IFPMA IVS member doses increased approximately 86.9% between 2004 and 2011, but only approximately 12.1% between 2008 and 2011. The WHO's Eastern Mediterranean (EMRO), Southeast Asian (SEARO), and African (AFRO) regions together account for about 47% of the global population, but only 3.7% of all IFPMA IVS doses distributed. While distributed doses have globally increased, they have decreased in EURO and EMRO since 2009. Dose distribution can provide a reasonable proxy of vaccine utilization. Based on the dose distribution, we conclude that seasonal influenza VCR in many countries remains well below the WHA's VCR targets and below the recommendations of the Council of the European Union in EURO. Inter- and intra-regional disparities in dose distribution trends call into question the impact of current vaccine recommendations at achieving coverage targets. Additional policy measures, particularly those that influence patients' adherence to vaccination programs, such as reimbursement, healthcare provider knowledge, attitudes, practices, and communications, are required for VCR targets to be met and benefit public health. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Jarzembski, Maurice A.; Srivastava, Vandana
1998-01-01
Backscatter of several Earth surfaces was characterized in the laboratory as a function of incidence angle with a focused continuous-wave 9.1-micrometer CO2 Doppler lidar for use as possible calibration targets. Some targets showed negligible angular dependence, while others showed a slight increase with decreasing angle. The Earth-surface signal measured over the complex Californian terrain during a 1995 NASA airborne mission compared well with laboratory data. Distributions of the Earth-surface signal show that the lidar efficiency can be estimated with a fair degree of accuracy, preferably with uniform Earth-surface targets during flight for airborne or space-based lidar.
Calibration of a polarimetric imaging SAR
NASA Technical Reports Server (NTRS)
Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.
1991-01-01
Calibration of polarimetric imaging Synthetic Aperture Radars (SAR's) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate of the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.
A long-term target detection approach in infrared image sequence
NASA Astrophysics Data System (ADS)
Li, Hang; Zhang, Qi; Li, Yuanyuan; Wang, Liqiang
2015-12-01
An automatic target detection method used in a long-term infrared (IR) image sequence from a moving platform is proposed. Firstly, based on non-linear histogram equalization, target candidates are coarse-to-fine segmented by using two self-adapting thresholds generated in the intensity space. Then the real target is captured via two different selection approaches. At the beginning of the image sequence, the genuine target with little texture is discriminated from other candidates by using a contrast-based confidence measure. On the other hand, when the target becomes larger, we apply an online EM method to iteratively estimate and update the distributions of the target's size and position based on the prior detection results, and then recognize the genuine one which satisfies both the constraints of size and position. Experimental results demonstrate that the presented method is accurate, robust and efficient.
Moreno-Salinas, David; Pascoal, Antonio; Aranda, Joaquin
2013-08-12
In this paper, we address the problem of determining the optimal geometric configuration of an acoustic sensor network that will maximize the angle-related information available for underwater target positioning. In the set-up adopted, a set of autonomous vehicles carries a network of acoustic units that measure the elevation and azimuth angles between a target and each of the receivers on board the vehicles. It is assumed that the angle measurements are corrupted by white Gaussian noise, the variance of which is distance-dependent. Using tools from estimation theory, the problem is converted into that of minimizing, by proper choice of the sensor positions, the trace of the inverse of the Fisher Information Matrix (also called the Cramer-Rao Bound matrix) to determine the sensor configuration that yields the minimum possible covariance of any unbiased target estimator. It is shown that the optimal configuration of the sensors depends explicitly on the intensity of the measurement noise, the constraints imposed on the sensor configuration, the target depth and the probabilistic distribution that defines the prior uncertainty in the target position. Simulation examples illustrate the key results derived.
Feng, Haihua; Karl, William Clem; Castañon, David A
2008-05-01
In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.
Quantification of DNA cleavage specificity in Hi-C experiments.
Meluzzi, Dario; Arya, Gaurav
2016-01-08
Hi-C experiments produce large numbers of DNA sequence read pairs that are typically analyzed to deduce genomewide interactions between arbitrary loci. A key step in these experiments is the cleavage of cross-linked chromatin with a restriction endonuclease. Although this cleavage should happen specifically at the enzyme's recognition sequence, an unknown proportion of cleavage events may involve other sequences, owing to the enzyme's star activity or to random DNA breakage. A quantitative estimation of these non-specific cleavages may enable simulating realistic Hi-C read pairs for validation of downstream analyses, monitoring the reproducibility of experimental conditions and investigating biophysical properties that correlate with DNA cleavage patterns. Here we describe a computational method for analyzing Hi-C read pairs to estimate the fractions of cleavages at different possible targets. The method relies on expressing an observed local target distribution downstream of aligned reads as a linear combination of known conditional local target distributions. We validated this method using Hi-C read pairs obtained by computer simulation. Application of the method to experimental Hi-C datasets from murine cells revealed interesting similarities and differences in patterns of cleavage across the various experiments considered. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
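A minimal sketch of the linear-combination step, assuming the conditional local target distributions of each cleavage class are known and using non-negative least squares to recover the mixture fractions; the matrices below are toy placeholders, not real Hi-C-derived distributions, and the class names are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

# columns: known conditional local target distributions for each cleavage class
# (e.g., cognate recognition site, star-activity site, random breakage);
# rows: bins of the observed local target distribution downstream of aligned reads
A = np.array([[0.70, 0.10, 0.25],
              [0.20, 0.60, 0.25],
              [0.10, 0.30, 0.50]])

true_fractions = np.array([0.80, 0.15, 0.05])
rng = np.random.default_rng(7)
observed = A @ true_fractions + rng.normal(scale=0.01, size=3)  # noisy observed distribution

fractions, _ = nnls(A, observed)          # non-negative least squares
fractions /= fractions.sum()              # renormalize to a proper mixture
print("estimated cleavage fractions:", np.round(fractions, 3))
```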
Ultrasonic Porosity Estimation of Low-Porosity Ceramic Samples
NASA Astrophysics Data System (ADS)
Eskelinen, J.; Hoffrén, H.; Kohout, T.; Hæggström, E.; Pesonen, L. J.
2007-03-01
We report on efforts to extend the applicability of an airborne ultrasonic pulse-reflection (UPR) method towards lower porosities. UPR is a method that has been used successfully to estimate porosity and tortuosity of high porosity foams. UPR measures acoustical reflectivity of a target surface at two or more incidence angles. We used ceramic samples to evaluate the feasibility of extending the UPR range into low porosities (<35%). The validity of UPR estimates depends on pore size distribution and probing frequency as predicted by the theoretical boundary conditions of the used equivalent fluid model under the high-frequency approximation.
Hydroacoustic estimates of fish biomass and spatial distributions in shallow lakes
NASA Astrophysics Data System (ADS)
Lian, Yuxi; Huang, Geng; Godlewska, Małgorzata; Cai, Xingwei; Li, Chang; Ye, Shaowen; Liu, Jiashou; Li, Zhongjie
2017-03-01
We conducted acoustical surveys with a horizontal beam transducer to detect fish and with a vertical beam transducer to detect depth and macrophytes in two typical shallow lakes along the middle and lower reaches of the Changjiang (Yangtze) River in November 2013. Both lakes are subject to active fish management with annual stocking and removal of large fish. The purpose of the study was to compare hydroacoustic horizontal beam estimates with fish landings. The preliminary results show that the fish distribution patterns differed in the two lakes and were affected by water depth and macrophyte coverage. The hydroacoustically estimated fish biomass matched the commercial catch very well in Niushan Lake, but it was two times higher in Kuilei Lake. However, acoustic estimates included all fish, whereas the catch included only fish >45 cm (smaller ones were released). We were unable to determine the proper regression between acoustic target strength and fish length for the dominant fish species in the two lakes.
Hydroacoustic estimates of fish biomass and spatial distributions in shallow lakes
NASA Astrophysics Data System (ADS)
Lian, Yuxi; Huang, Geng; Godlewska, Małgorzata; Cai, Xingwei; Li, Chang; Ye, Shaowen; Liu, Jiashou; Li, Zhongjie
2018-03-01
We conducted acoustical surveys with a horizontal beam transducer to detect fish and with a vertical beam transducer to detect depth and macrophytes in two typical shallow lakes along the middle and lower reaches of the Changjiang (Yangtze) River in November 2013. Both lakes are subject to active fish management with annual stocking and removal of large fish. The purpose of the study was to compare hydroacoustic horizontal beam estimates with fish landings. The preliminary results show that the fish distribution patterns differed in the two lakes and were affected by water depth and macrophyte coverage. The hydroacoustically estimated fish biomass matched the commercial catch very well in Niushan Lake, but it was two times higher in Kuilei Lake. However, acoustic estimates included all fish, whereas the catch included only fish >45 cm (smaller ones were released). We were unable to determine the proper regression between acoustic target strength and fish length for the dominant fish species in the two lakes.
[Estimated mammogram coverage in Goiás State, Brazil].
Corrêa, Rosangela da Silveira; Freitas-Júnior, Ruffo; Peixoto, João Emílio; Rodrigues, Danielle Cristina Netto; Lemos, Maria Eugênia da Fonseca; Marins, Lucy Aparecida Parreira; Silveira, Erika Aparecida da
2011-09-01
This cross-sectional study aimed to estimate mammogram coverage in the State of Goiás, Brazil, describing the supply, demand, and variations in different age groups, evaluating 98 mammography services as observational units. We estimated the mammogram rates by age group and type of health service, as well as the number of tests required to cover 70% and 100% of the target population. We assessed the association between mammograms, geographical distribution of mammography machines, type of service, and age group. Full coverage estimates, considering 100% of women in the 40-69 and 50-69-year age brackets, were 61% and 66%, of which the Brazilian Unified National Health System provided 13% and 14%, respectively. To achieve 70% coverage, 43,424 additional mammograms would be needed. All the associations showed statistically significant differences (p < 0.001). We conclude that mammogram coverage is unevenly distributed in the State of Goiás and that fewer tests are performed than required.
Dumitru, Adrian; Lappi, Tuomas; Skokov, Vladimir
2015-12-17
In this study, we determine the distribution of linearly polarized gluons of a dense target at small x by solving the Balitsky–Jalilian-Marian–Iancu–McLerran–Weigert–Leonidov–Kovner rapidity evolution equations. From these solutions, we estimate the amplitude of cos 2Φ azimuthal asymmetries in deep inelastic scattering dijet production at high energies. We find sizable long-range (in rapidity) azimuthal asymmetries with a magnitude of v2 ≈ 10%.
Jing, Fulong; Jiao, Shuhong; Hou, Changbo; Si, Weijian; Wang, Yu
2017-06-21
For targets with complex motion, such as ships fluctuating with oceanic waves and highly maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities that need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and a modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm is proposed, referred to as the two-dimensional product modified Lv's distribution (2D-PMLVD), for QFM signals. The 2D-PMLVD is simple and can be easily implemented by using the fast Fourier transform (FFT) and complex multiplication. The principle of the method, its cross terms, anti-noise performance, and computational complexity are analyzed in the paper. Compared to the other three representative methods, the 2D-PMLVD can achieve better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.
Earl, Brian R.; Chertoff, Mark E.
2012-01-01
Future implementation of regenerative treatments for sensorineural hearing loss may be hindered by the lack of diagnostic tools that specify the target(s) within the cochlea and auditory nerve for delivery of therapeutic agents. Recent research has indicated that the amplitude of high-level compound action potentials (CAPs) is a good predictor of overall auditory nerve survival, but does not pinpoint the location of neural damage. A location-specific estimate of nerve pathology may be possible by using a masking paradigm and high-level CAPs to map auditory nerve firing density throughout the cochlea. This initial study in gerbil utilized a high-pass masking paradigm to determine normative ranges for CAP-derived neural firing density functions using broadband chirp stimuli and low-frequency tonebursts, and to determine if cochlear outer hair cell (OHC) pathology alters the distribution of neural firing in the cochlea. Neural firing distributions for moderate-intensity (60 dB pSPL) chirps were affected by OHC pathology whereas those derived with high-level (90 dB pSPL) chirps were not. These results suggest that CAP-derived neural firing distributions for high-level chirps may provide an estimate of auditory nerve survival that is independent of OHC pathology. PMID:22280596
Estimation of potential distribution of gas hydrate in the northern South China Sea
NASA Astrophysics Data System (ADS)
Wang, Chunjuan; Du, Dewen; Zhu, Zhiwei; Liu, Yonggang; Yan, Shijuan; Yang, Gang
2010-05-01
Gas hydrate research has significant importance for securing world energy resources, and has the potential to produce considerable economic benefits. Previous studies have shown that the South China Sea is an area that harbors gas hydrates. However, there is a lack of systematic investigations and understanding on the distribution of gas hydrate throughout the region. In this paper, we applied mineral resource quantitative assessment techniques to forecast and estimate the potential distribution of gas hydrate resources in the northern South China Sea. However, current hydrate samples from the South China Sea are too few to produce models of occurrences. Thus, according to similarity and contrast principles of mineral outputs, we can use a similar hydrate-mining environment with sufficient gas hydrate data as a testing ground for modeling northern South China Sea gas hydrate conditions. We selected the Gulf of Mexico, which has extensively studied gas hydrates, to develop predictive models of gas hydrate distributions, and to test errors in the model. Then, we compared the existing northern South China Sea hydrate-mining data with the Gulf of Mexico characteristics, and collated the relevant data into the model. Subsequently, we applied the model to the northern South China Sea to obtain the potential gas hydrate distribution of the area, and to identify significant exploration targets. Finally, we evaluated the reliability of the predicted results. The south seabed area of Taiwan Bank is recommended as a priority exploration target. The Zhujiang Mouth, Southeast Hainan, and Southwest Taiwan Basins, including the South Bijia Basin, also are recommended as exploration target areas. In addition, the method in this paper can provide a useful predictive approach for gas hydrate resource assessment, which gives a scientific basis for construction and implementation of long-term planning for gas hydrate exploration and general exploitation of the seabed of China.
Modelling supply and demand of bioenergy from short rotation coppice and Miscanthus in the UK.
Bauen, A W; Dunnett, A J; Richter, G M; Dailey, A G; Aylott, M; Casella, E; Taylor, G
2010-11-01
Biomass from lignocellulosic energy crops can contribute to primary energy supply in the short term in heat and electricity applications and in the longer term in transport fuel applications. This paper estimates the optimal feedstock allocation of herbaceous and woody lignocellulosic energy crops for England and Wales based on empirical productivity models. Yield maps for Miscanthus, willow and poplar, constrained by climatic, soil and land use factors, are used to estimate the potential resource. An energy crop supply-cost curve is estimated based on the resource distribution and associated production costs. The spatial resource model is then used to inform the supply of biomass to geographically distributed demand centres, with co-firing plants used as an illustration. Finally, the potential contribution of energy crops to UK primary energy and renewable energy targets is discussed. Copyright 2010 Elsevier Ltd. All rights reserved.
Machine learning approaches for estimation of prediction interval for the model output.
Shrestha, Durga L; Solomatine, Dimitri P
2006-03-01
A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of empirical distributions of the errors associated with all instances belonging to the cluster under consideration and propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using computed prediction limits as targets, and finally, this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods for estimating the prediction interval. A new method for evaluating the performance of prediction interval estimation is proposed as well.
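A rough sketch of the clustered-error idea follows. It substitutes hard k-means for the fuzzy c-means clustering used in the paper and skips the membership-grade propagation and the final regression step, so it only illustrates how per-cluster empirical error quantiles become prediction limits; the data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_prediction_intervals(X_train, errors, X_new, n_clusters=3, alpha=0.1, seed=0):
    """Sketch of the clustered-error prediction-interval idea: hard KMeans
    stands in for fuzzy c-means, and each new point inherits the empirical
    error quantiles of its nearest cluster (no fuzzy membership weighting)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_train)
    labels = km.labels_
    lower, upper = np.empty(n_clusters), np.empty(n_clusters)
    for c in range(n_clusters):
        e = errors[labels == c]
        lower[c] = np.quantile(e, alpha / 2)       # lower prediction limit
        upper[c] = np.quantile(e, 1 - alpha / 2)   # upper prediction limit
    new_labels = km.predict(X_new)
    return lower[new_labels], upper[new_labels]

# usage with synthetic, heteroscedastic errors (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
err = rng.normal(scale=1 + np.abs(X[:, 0]), size=200)
lo, hi = cluster_prediction_intervals(X, err, X[:5])
```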
Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming
2014-01-01
The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM) based ICC can be estimated. A common transformation used is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using the natural logarithm transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r, p) showed better performance of the new negative binomial ICC compared to the ICC based on the LMM, even when the negative binomial data were logarithm- or square-root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.
Novel branching particle method for tracking
NASA Astrophysics Data System (ADS)
Ballantyne, David J.; Chan, Hubert Y.; Kouritzin, Michael A.
2000-07-01
Particle approximations are used to track a maneuvering signal given only a noisy, corrupted sequence of observations, as are encountered in target tracking and surveillance. The signal exhibits nonlinearities that preclude the optimal use of a Kalman filter. It obeys a stochastic differential equation (SDE) in a seven-dimensional state space, one dimension of which is a discrete maneuver type. The maneuver type switches as a Markov chain and each maneuver identifies a unique SDE for the propagation of the remaining six state parameters. Observations are constructed at discrete time intervals by projecting a polygon corresponding to the target state onto two dimensions and incorporating the noise. A new branching particle filter is introduced and compared with two existing particle filters. The filters simulate a large number of independent particles, each of which moves with the stochastic law of the target. Particles are weighted, redistributed, or branched, depending on the method of filtering, based on their accordance with the current observation from the sequence. Each filter provides an approximated probability distribution of the target state given all back observations. All three particle filters converge to the exact conditional distribution as the number of particles goes to infinity, but differ in how well they perform with a finite number of particles. Using the exactly known ground truth, the root-mean-squared (RMS) errors in target position of the estimated distributions from the three filters are compared. The relative tracking power of the filters is quantified for this target at varying sizes, particle counts, and levels of observation noise.
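For orientation, the weight-and-resample baseline that the branching filter is compared against can be sketched in a few lines. The example below is a plain sequential importance resampling (SIR) particle filter on a one-dimensional random-walk toy target, not the seven-dimensional maneuvering model of the paper; the noise scales are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_particle_filter(observations, n_particles=1000, q=0.5, r=1.0):
    """Plain sequential importance resampling (SIR) on a 1-D random-walk
    target with Gaussian observation noise; q and r are assumed noise scales.
    This is the standard weight-and-resample baseline, not the branching scheme."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        particles = particles + rng.normal(0.0, q, n_particles)   # propagate
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)              # likelihood weights
        w /= w.sum()
        estimates.append(np.sum(w * particles))                    # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)            # resample
        particles = particles[idx]
    return np.array(estimates)

# synthetic ground truth and noisy observations, then RMS error as in the paper
truth = np.cumsum(rng.normal(0.0, 0.5, 100))
obs = truth + rng.normal(0.0, 1.0, 100)
est = sir_particle_filter(obs)
rmse = np.sqrt(np.mean((est - truth) ** 2))
```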
NASA Astrophysics Data System (ADS)
Flores, A. N.; Entekhabi, D.; Bras, R. L.
2007-12-01
Soil hydraulic and thermal properties (SHTPs) affect both the rate of moisture redistribution in the soil column and the volumetric soil water capacity. Adequately constraining these properties through field and lab analysis to parameterize spatially-distributed hydrology models is often prohibitively expensive. Because SHTPs vary significantly at small spatial scales, individual soil samples are also only reliably indicative of local conditions, and these properties remain a significant source of uncertainty in soil moisture and temperature estimation. In ensemble-based soil moisture data assimilation, uncertainty in the model-produced prior estimate due to associated uncertainty in SHTPs must be taken into account to avoid under-dispersive ensembles. To treat SHTP uncertainty for purposes of supplying inputs to a distributed watershed model, we use the restricted pairing (RP) algorithm, an extension of Latin Hypercube (LH) sampling. The RP algorithm generates an arbitrary number of SHTP combinations by sampling the appropriate marginal distributions of the individual soil properties using the LH approach, while imposing a target rank correlation among the properties. A previously-published meta-database of 1309 soils representing 12 textural classes is used to fit appropriate marginal distributions to the properties and compute the target rank correlation structure, conditioned on soil texture. Given categorical soil textures, our implementation of the RP algorithm generates an arbitrarily-sized ensemble of realizations of the SHTPs required as input to the TIN-based Realtime Integrated Basin Simulator with vegetation dynamics (tRIBS+VEGGIE) distributed parameter ecohydrology model. Soil moisture ensembles simulated with RP-generated SHTPs exhibit less variance than ensembles simulated with SHTPs generated by a scheme that neglects correlation among properties. Neglecting correlation among SHTPs can lead to physically unrealistic combinations of parameters that exhibit implausible hydrologic behavior when input to the tRIBS+VEGGIE model.
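The core of the RP idea, drawing Latin Hypercube samples from specified marginals while imposing a target rank correlation, can be illustrated with an Iman-Conover-style reordering, offered here as a stand-in rather than the exact restricted-pairing algorithm; the marginals and correlation below are assumptions, not values from the soils database.

```python
import numpy as np
from scipy.stats import qmc, lognorm

def lh_with_rank_correlation(n, marginals, target_corr, seed=0):
    """Sketch: Latin Hypercube samples from frozen scipy marginals, reordered
    column-wise so their ranks follow correlated Gaussian scores (an
    Iman-Conover-style substitute for restricted pairing)."""
    d = len(marginals)
    rng = np.random.default_rng(seed)
    u = qmc.LatinHypercube(d=d, seed=seed).random(n)           # stratified uniforms
    x = np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])
    z = rng.multivariate_normal(np.zeros(d), target_corr, size=n)  # desired dependence
    out = np.empty_like(x)
    for j in range(d):
        # reorder column j so its ranks match the ranks of z[:, j]
        out[np.argsort(z[:, j]), j] = np.sort(x[:, j])
    return out

# e.g. two correlated soil properties with assumed lognormal marginals
corr = np.array([[1.0, -0.7], [-0.7, 1.0]])
sample = lh_with_rank_correlation(500, [lognorm(s=0.5), lognorm(s=0.8)], corr)
```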
Estimating air chemical emissions from research activities using stack measurement data.
Ballinger, Marcel Y; Duchsherer, Cheryl J; Woodruff, Rodger K; Larson, Timothy V
2013-03-01
Current methods of estimating air emissions from research and development (R&D) activities use a wide range of release fractions or emission factors with bases ranging from empirical to semi-empirical. Although considered conservative, the uncertainties and confidence levels of the existing methods have not been reported. Chemical emissions were estimated from sampling data taken from four research facilities over 10 years. The approach was to use a Monte Carlo technique to create distributions of annual emission estimates for target compounds detected in source test samples. Distributions were created for each year and building sampled for compounds with sufficient detection frequency to qualify for the analysis. The results using the Monte Carlo technique without applying a filter to remove negative emission values showed almost all distributions spanning zero, and 40% of the distributions having a negative mean. This indicates that emissions are so low as to be indistinguishable from building background. Application of a filter to allow only positive values in the distribution provided a more realistic value for emissions and increased the distribution mean by an average of 16%. Release fractions were calculated by dividing the emission estimates by a building chemical inventory quantity. Two variations were used for this quantity: chemical usage, and chemical usage plus one-half standing inventory. Filters were applied so that only release fraction values from zero to one were included in the resulting distributions. Release fractions had a wide range among chemicals and among data sets for different buildings and/or years for a given chemical. Regressions of release fractions to molecular weight and vapor pressure showed weak correlations. Similarly, regressions of mean emissions to chemical usage, chemical inventory, molecular weight, and vapor pressure also gave weak correlations. These results highlight the difficulties in estimating emissions from R&D facilities using chemical inventory data. Air emissions from research operations are difficult to estimate because of the changing nature of research processes and the small quantity and wide variety of chemicals used. Analysis of stack measurements taken over multiple facilities and a 10-year period using a Monte Carlo technique provided a method to quantify the low emissions and to estimate release fractions based on chemical inventories. The variation in release fractions did not correlate well with factors investigated, confirming the complexities in estimating R&D emissions.
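A minimal sketch of the Monte Carlo step described above: resample background-subtracted stack concentrations (which may be negative), scale by an assumed constant flow to get annual emissions, and optionally keep only positive draws, mimicking the filter whose effect on the distribution mean is reported. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def annual_emission_distribution(conc_samples, flow_m3_per_hr, hours_per_year=8760,
                                 n_draws=10000, positive_only=True):
    """Resample measured concentrations (mg/m^3, possibly negative after
    background subtraction), convert to annual emissions (kg/yr) with an
    assumed constant flow, and optionally keep only non-negative draws."""
    draws = rng.choice(conc_samples, size=n_draws, replace=True)
    annual = draws * flow_m3_per_hr * hours_per_year * 1e-6      # mg -> kg
    if positive_only:
        annual = annual[annual > 0.0]
    return annual

# illustrative background-subtracted measurements, some negative
conc = np.array([-0.02, 0.01, 0.05, -0.01, 0.03, 0.00, 0.04])
dist = annual_emission_distribution(conc, flow_m3_per_hr=50000.0)
print(dist.mean(), np.quantile(dist, [0.05, 0.95]))
```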
A real-time optical tracking and measurement processing system for flying targets.
Guo, Pengyu; Ding, Shaowen; Zhang, Hongliang; Zhang, Xiaohu
2014-01-01
Optical tracking and measurement of flying targets is unlike close-range photography under a controllable observation environment: it brings extreme conditions such as diverse target changes resulting from high maneuverability and long cruising range. This paper first designs and realizes a distributed image interpretation and measurement processing system to achieve centralized resource management, multisite simultaneous interpretation, and adaptive selection of estimation algorithms; it then proposes a real-time interpretation method that combines automatic foreground detection, online target tracking, multiple-feature location, and human guidance. An experiment on semisynthetic video evaluates the performance and efficiency of the method. The system can be used in aerospace tests for target analysis, including dynamic parameters, transient states, and optical physics characteristics, with security control.
A Real-Time Optical Tracking and Measurement Processing System for Flying Targets
Guo, Pengyu; Ding, Shaowen; Zhang, Hongliang; Zhang, Xiaohu
2014-01-01
Optical tracking and measurement of flying targets is unlike close-range photography under a controllable observation environment: it brings extreme conditions such as diverse target changes resulting from high maneuverability and long cruising range. This paper first designs and realizes a distributed image interpretation and measurement processing system to achieve centralized resource management, multisite simultaneous interpretation, and adaptive selection of estimation algorithms; it then proposes a real-time interpretation method that combines automatic foreground detection, online target tracking, multiple-feature location, and human guidance. An experiment on semisynthetic video evaluates the performance and efficiency of the method. The system can be used in aerospace tests for target analysis, including dynamic parameters, transient states, and optical physics characteristics, with security control. PMID:24987748
Ejecta velocity distribution for impact cratering experiments on porous and low strength targets
NASA Astrophysics Data System (ADS)
Michikami, Tatsuhiro; Moriguchi, Kouichi; Hasegawa, Sunao; Fujiwara, Akira
2007-01-01
Impact cratering experiments on porous targets with various compressive strength ranging from ˜0.5 to ˜250 MPa were carried out in order to investigate the relationship between the ejecta velocity, and material strength or porosity of the target. A spherical alumina projectile (diameter ˜1 mm) was shot perpendicularly into the target surface with velocity ranging from 1.2 to 4.5 km/s (nominal 4 km/s), using a two-stage light-gas gun. The ejecta velocity was estimated from the fall point distance of ejecta. The results show that there are in fact a large fraction of ejecta with very low velocities when the material strength of the target is small and the porosity is high. As an example, in the case of one specific target (compressive strength ˜0.5 MPa and porosity 43%), the amount of ejecta with velocities lower than 1 m/s is about 40% of the total mass. The average velocity of the ejecta decreases with decreasing material strength or increasing the porosity of the target. Moreover, in our experiments, the ejecta velocity distributions normalized to total ejecta mass seem to be mainly dependent on the material strength of the target, and not so greatly on the porosity. We also compare our experimental results with those of Gault et al. [1963. Spray ejected from the lunar surface by meteoroid impact. NASA Technical Note D-1767] and Housen [1992. Crater ejecta velocities for impacts on rocky bodies. LPSC XXIII, 555-556] for the ejecta velocity distribution using Housen's nondimensional scaling parameter. The ejecta velocity distributions of our experiments are lower than those of Gault et al. [1963. Spray ejected from the lunar surface by meteoroid impact. NASA Technical Note D-1767] and Housen [1992. Crater ejecta velocities for impacts on rocky bodies. LPSC XIII, 555-556].
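The back-calculation of ejecta velocity from fall-point distance can be sketched with simple ballistics, assuming launch from and return to the target surface at an assumed 45° ejection angle (the actual angle and any corrections used in the experiments are not specified here); the masses and distances below are illustrative.

```python
import numpy as np

def ejecta_velocity_from_range(fall_distance_m, ejection_angle_deg=45.0, g=9.81):
    """Back-calculate launch velocity from ballistic range on a flat surface:
    R = v^2 sin(2*theta)/g, so v = sqrt(g R / sin(2*theta)). The 45 degree
    ejection angle is an assumption, not a value from the experiments above."""
    theta = np.radians(ejection_angle_deg)
    return np.sqrt(g * np.asarray(fall_distance_m) / np.sin(2.0 * theta))

# cumulative mass fraction below a velocity threshold (illustrative bins)
distances = np.array([0.02, 0.05, 0.10, 0.50, 2.0])     # m
masses = np.array([0.40, 0.25, 0.20, 0.10, 0.05])       # mass fractions
v = ejecta_velocity_from_range(distances)
order = np.argsort(v)
cumulative_fraction = np.cumsum(masses[order])
print(v[order], cumulative_fraction)
```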
A tiered approach for integrating exposure and dosimetry with ...
High-throughput (HT) risk screening approaches apply in vitro dose-response data to estimate potential health risks that arise from exposure to chemicals. However, much uncertainty is inherent in relating bioactivities observed in an in vitro system to the perturbations of biological mechanisms that lead to apical adverse health outcomes in living organisms. The chemical-agnostic Adverse Outcome Pathway (AOP) framework addresses this uncertainty by acting as a scaffold onto which pathway-based data can be arranged to aid in the understanding of in vitro toxicity testing results. In addition, risk estimation also requires reconciling chemical concentrations sufficient to produce bioactivity in vitro with concentrations that trigger a molecular initiating event (MIE) at the relevant biological target in vivo. Such target site exposures (TSEs) can be estimated using computational models to integrate exposure information with a chemical’s absorption, distribution, metabolism, and elimination (ADME) processes. In this presentation, the utility of a tiered approach for integrating exposure, ADME, and hazard into risk-based decision making will be demonstrated using several case studies, along with the investigation of how uncertainties in exposure and ADME might impact risk estimates. These case studies involve 1) identifying and prioritizing chemicals capable of altering biological pathways based on their potential to reach an in vivo target; 2) evaluating the infl
Investigation of gunshot residue patterns using milli-XRF-techniques: first experiences in casework
NASA Astrophysics Data System (ADS)
Schumacher, Rüdiger; Barth, Martin; Neimke, Dieter; Niewöhner, Ludwig
2010-06-01
The investigation of gunshot residue (GSR) patterns for shooting range estimation is usually based on visualizing the lead, copper, or nitrocellulose distributions on targets like fabric or adhesive tape by chemographic color tests. The method usually provides good results but has its drawbacks when it comes to the examination of ammunition containing lead-free primers or bloody clothing. A milli-X-ray fluorescence (m-XRF) spectrometer with a large motorized stage can help to circumvent these problems, allowing the acquisition of XRF mappings of relatively large areas (up to 20 x 20 cm) at millimeter resolution within reasonable time (2-10 hours) for almost all elements. First experiences in GSR casework at the Forensic Science Institute of the Bundeskriminalamt (BKA) have shown that m-XRF is a useful supplement to conventional methods in shooting range estimation, which helps if there are problems in transferring a GSR pattern to secondary targets (e.g. bloody or stained garments) or if there is no suitable color test available for the element of interest. The resulting elemental distributions are a good estimate for the shooting range and can be evaluated by calculating radial distributions or integrated count rates of irregularly shaped regions, such as pieces of human skin, which are too small to be investigated with a conventional WD-XRF spectrometer. Besides the mapping mode, the m-XRF spectrometer also offers point and line scan modes, which can be utilized in gunshot crime investigations as a quick survey tool to identify bullet holes based on the elements present in the wipe ring.
Mann, J. John; Ogden, R. Todd
2017-01-01
Background and aim: Estimation of a PET tracer’s non-displaceable distribution volume (VND) is required for quantification of specific binding to its target of interest. VND is generally assumed to be comparable brain-wide and is determined either from a reference region devoid of the target, often not available for many tracers and targets, or by imaging each subject before and after blocking the target with another molecule that has high affinity for the target, which is cumbersome and involves additional radiation exposure. Here we propose, and validate for the tracers [11C]DASB and [11C]CUMI-101, a new data-driven hybrid deconvolution approach (HYDECA) that determines VND at the individual level without requiring either a reference region or a blocking study. Methods: HYDECA requires the tracer metabolite-corrected concentration curve in blood plasma and uses a singular value decomposition to estimate the impulse response function across several brain regions from measured time activity curves. HYDECA decomposes each region’s impulse response function into the sum of a parametric non-displaceable component, which is a function of VND, assumed common across regions, and a nonparametric specific component. These two components differentially contribute to each impulse response function. Different regions show different contributions of the two components, and HYDECA examines data across regions to find a suitable common VND. HYDECA implementation requires determination of two tuning parameters, and we propose two strategies for objectively selecting these parameters for a given tracer: using data from blocking studies, and realistic simulations of the tracer. Using available test-retest data, we compare HYDECA estimates of VND and binding potentials to those obtained based on VND estimated using a purported reference region. Results: For [11C]DASB and [11C]CUMI-101, we find that regardless of the strategy used to optimize the tuning parameters, HYDECA provides considerably less biased estimates of VND than those obtained, as is commonly done, using a non-ideal reference region. HYDECA test-retest reproducibility is comparable to that obtained using a VND determined from a non-ideal reference region, when considering the binding potentials BPP and BPND. Conclusions: HYDECA can provide subject-specific estimates of VND without requiring a blocking study for tracers and targets for which a valid reference region does not exist. PMID:28459878
Multi-Target Tracking Using an Improved Gaussian Mixture CPHD Filter.
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-11-23
The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. However, although the joint propagation of the posterior intensity and cardinality distribution in its recursion allows more reliable estimates of the target number than the PHD filter, the CPHD filter suffers from the spooky effect where there exists arbitrary PHD mass shifting in the presence of missed detections. To address this issue in the Gaussian mixture (GM) implementation of the CPHD filter, this paper presents an improved GM-CPHD filter, which incorporates a weight redistribution scheme into the filtering process to modify the updated weights of the Gaussian components when missed detections occur. In addition, an efficient gating strategy that can adaptively adjust the gate sizes according to the number of missed detections of each Gaussian component is also presented to further improve the computational efficiency of the proposed filter. Simulation results demonstrate that the proposed method offers favorable performance in terms of both estimation accuracy and robustness to clutter and detection uncertainty over the existing methods.
Evaluation of nonrigid registration models for interfraction dose accumulation in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janssens, Guillaume; Orban de Xivry, Jonathan; Fekkes, Stein
2009-09-15
Purpose: Interfraction dose accumulation is necessary to evaluate the dose distribution of an entire course of treatment by adding up multiple dose distributions of different treatment fractions. This accumulation of dose distributions is not straightforward as changes in the patient anatomy may occur during treatment. For this purpose, the accuracy of nonrigid registration methods is assessed for dose accumulation based on the calculated deformation fields. Methods: A phantom study using a deformable cubic silicon phantom with implanted markers and a cylindrical silicon phantom with MOSFET detectors has been performed. The phantoms were deformed and images were acquired using a cone-beam CT imager. Dose calculations were performed on these CT scans using the treatment planning system. Nonrigid CT-based registration was performed using two different methods, the Morphons and Demons. The resulting deformation field was applied on the dose distribution. For both phantoms, accuracy of the registered dose distribution was assessed. For the cylindrical phantom, measured dose values in the deformed conditions were also compared with the dose values of the registered dose distributions. Finally, interfraction dose accumulation for two treatment fractions of a patient with primary rectal cancer has been performed and evaluated using isodose lines and the dose volume histograms of the target volume and normal tissue. Results: A significant decrease in the difference in marker or MOSFET position was observed after nonrigid registration methods (p<0.001) for both phantoms and with both methods, as well as a significant decrease in the dose estimation error (p<0.01 for the cubic phantom and p<0.001 for the cylindrical) with both methods. Considering the whole data set at once, the difference between estimated and measured doses was also significantly decreased using registration (p<0.001 for both methods). The patient case showed a slightly underdosed planning target volume and an overdosed bladder volume due to anatomical deformations. Conclusions: Dose accumulation using nonrigid registration methods is possible using repeated CT imaging. This opens possibilities for interfraction dose accumulation and adaptive radiotherapy to incorporate possible differences in dose delivered to the target volume and organs at risk due to anatomical deformations.
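A schematic of the dose-accumulation step, independent of which registration algorithm produced the deformation field, is sketched below: each fraction's dose grid is pulled back to the reference anatomy with its displacement field and the warped grids are summed. The linear interpolation and the displacement-in-voxel-units convention are assumptions of this sketch, not details from the study.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(dose_fractions, deformation_fields):
    """Sketch of interfraction dose accumulation on a 3-D grid: each fraction's
    dose is pulled back to the reference anatomy through its deformation field
    (voxel displacements in voxel units, shape (3, nx, ny, nz)) and summed."""
    ref_shape = dose_fractions[0].shape
    grid = np.indices(ref_shape).astype(float)       # reference voxel coordinates
    total = np.zeros(ref_shape)
    for dose, dvf in zip(dose_fractions, deformation_fields):
        warped_coords = grid + dvf                    # where each reference voxel maps to
        total += map_coordinates(dose, warped_coords, order=1, mode='nearest')
    return total

# illustrative use with two random "fractions" on a small grid
rng = np.random.default_rng(0)
doses = [rng.random((20, 20, 20)) for _ in range(2)]
dvfs = [rng.normal(0, 0.5, (3, 20, 20, 20)) for _ in range(2)]
accumulated = accumulate_dose(doses, dvfs)
```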
Zheng, Wenjing; van der Laan, Mark
2017-01-01
In this paper, we study the effect of a time-varying exposure mediated by a time-varying intermediate variable. We consider general longitudinal settings, including survival outcomes. At a given time point, the exposure and mediator of interest are influenced by past covariates, mediators and exposures, and affect future covariates, mediators and exposures. Right censoring, if present, occurs in response to past history. To address the challenges in mediation analysis that are unique to these settings, we propose a formulation in terms of random interventions based on conditional distributions for the mediator. This formulation, in particular, allows for well-defined natural direct and indirect effects in the survival setting, and natural decomposition of the standard total effect. Upon establishing identifiability and the corresponding statistical estimands, we derive the efficient influence curves and establish their robustness properties. Applying Targeted Maximum Likelihood Estimation, we use these efficient influence curves to construct multiply robust and efficient estimators. We also present an inverse probability weighted estimator and a nested non-targeted substitution estimator for these parameters. PMID:29387520
A Sparse Bayesian Approach for Forward-Looking Superresolution Radar Imaging
Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2017-01-01
This paper presents a sparse superresolution approach for high cross-range resolution imaging of forward-looking scanning radar based on the Bayesian criterion. First, a novel forward-looking signal model is established as the product of the measurement matrix and the cross-range target distribution, which is more accurate than the conventional convolution model. Then, based on the Bayesian criterion, the widely-used sparse regularization is considered as the penalty term to recover the target distribution. The derivation of the cost function is described, and finally, an iterative expression for minimizing this function is presented. Alternatively, this paper discusses how to estimate the single parameter of Gaussian noise. With the advantage of a more accurate model, the proposed sparse Bayesian approach enjoys a lower model error. Meanwhile, when compared with the conventional superresolution methods, the proposed approach shows high cross-range resolution and small location error. The superresolution results for the simulated point target, scene data, and real measured data are presented to demonstrate the superior performance of the proposed approach. PMID:28604583
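To make the "measurement matrix times cross-range distribution" model concrete, the sketch below sets up a convolution-style measurement matrix from an assumed antenna pattern and recovers a sparse target distribution with generic iterative soft-thresholding (ISTA). This is a stand-in for the Bayesian iteration derived in the paper, shown only to illustrate the sparse-regularized deconvolution idea; all parameters are assumptions.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Generic iterative soft-thresholding for y = A x + noise with an L1
    penalty on x; not the paper's iteration, only a sparse-recovery sketch."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

# measurement matrix built from an assumed Gaussian antenna pattern
n = 200
beam = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
A = np.array([np.roll(np.pad(beam, (0, n - beam.size)), k - 15) for k in range(n)])
x_true = np.zeros(n); x_true[[60, 70, 140]] = [1.0, 0.8, 1.2]      # point targets
y = A @ x_true + 0.01 * np.random.default_rng(0).normal(size=n)
x_hat = ista(A, y)
```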
Bilkovic, Donna Marie; Havens, Kirk; Stanhope, David; Angstadt, Kory
2014-03-15
Derelict fishing gear is a source of mortality for target and non-target marine species. A program employing commercial watermen to remove marine debris provided a novel opportunity to collect extensive spatially-explicit information for four consecutive winters (2008-2012) on the type, distribution, and abundance of derelict fishing gear and bycatch in Virginia waters of Chesapeake Bay. The most abundant form of derelict gear recovered was blue crab pots with almost 32,000 recovered. Derelict pots were widely distributed, but with notable hotspot areas, capturing 40 species and over 31,000 marine organisms. The target species, blue crab, experienced the highest mortality from lost pots with an estimated 900,000 animals killed each year, a potential annual economic loss to the fishery of $300,000. Important fishery species were captured and killed in derelict pots including Atlantic croaker and black sea bass. While some causes of gear loss are unavoidable, others can be managed to minimize loss. Copyright © 2014 Elsevier Ltd. All rights reserved.
Measuring Aptamer Equilibria Using Gradient Micro Free Flow Electrophoresis
Turgeon, Ryan T.; Fonslow, Bryan R.; Jing, Meng; Bowser, Michael T.
2010-01-01
Gradient micro free flow electrophoresis (μFFE) was used to observe the equilibria of DNA aptamers with their targets (IgE or HIVRT) across a range of ligand concentrations. A continuous stream of aptamer was mixed online with an increasing concentration of target and introduced into the μFFE device, which separated ligand-aptamer complexes from the unbound aptamer. The continuous nature of μFFE allowed the equilibrium distribution of aptamer and complex to be measured at 300 discrete target concentrations within 5 minutes. This is a significant improvement in speed and precision over affinity capillary electrophoresis (ACE) assays. The dissociation constant of the aptamer-IgE complex was estimated to be 48± 3 nM. The high coverage across the range of ligand concentrations allowed complex stoichiometries of the aptamer-HIVRT complexes to be observed. Nearly continuous observation of the equilibrium distribution from 0 to 500 nM HIVRT revealed the presence of complexes with 3:1 (aptamer:HIVRT), 2:1 and 1:1 stoichiometries. PMID:20373790
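The dissociation constant quoted above comes from fitting the equilibrium fraction of aptamer in complex across target concentrations. A minimal curve fit of the single-site binding isotherm is sketched below with made-up (not measured) data points, assuming the target is in excess of the aptamer.

```python
import numpy as np
from scipy.optimize import curve_fit

def fraction_bound(target_conc, kd):
    """Single-site binding isotherm: fraction of aptamer in complex, assuming
    the target concentration greatly exceeds the aptamer concentration."""
    return target_conc / (kd + target_conc)

# illustrative (not measured) complex fractions across target concentrations
conc_nM = np.array([5, 10, 25, 50, 100, 200, 400], dtype=float)
frac = np.array([0.10, 0.17, 0.33, 0.52, 0.67, 0.80, 0.89])

kd_hat, cov = curve_fit(fraction_bound, conc_nM, frac, p0=[50.0])
kd_se = np.sqrt(cov[0, 0])
print(f"Kd ~ {kd_hat[0]:.0f} +/- {kd_se:.0f} nM")
```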
NASA Astrophysics Data System (ADS)
Ahmed, Mousumi
Designing the control technique for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied and an extensive study on backstepping based technique is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore the controller for cooperative and coordinating unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller in three dimensional environment for a single UAV is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller can compute three autopilot modes i.e. velocity, ground heading (or course angle), and flight path angle for tracking the unmanned vehicle. Numerical implementation is performed in MATLAB with the assumption of having perfect and full state information of the target to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. A consensus based cooperative control is studied; such consensus based control problem can be viewed from the algebraic graph theory concepts. The communication structure between the UAVs is represented by the dynamic graph where UAVs are represented by the nodes and the communication links are represented by the edges. The previously designed controller is augmented to account for the group to obtain consensus based on their communication. A theoretical development of the controller for the cooperative group of UAVs is presented and the simulation results for different communication topologies are shown. This research also investigates the cases where the communication topology switches to a different topology over particular time instants. Lyapunov analysis is performed to show stability in all cases. Another important aspect of this dissertation research is to implement the controller for the case, where perfect or full state information is not available. This necessitates the design of an estimator to estimate the system state. A nonlinear estimator, Extended Kalman Filter (EKF) is first developed for target tracking with a single UAV. The uncertainties involved with the measurement model and dynamics model are considered as zero mean Gaussian noises with some known covariances. The measurements of the full state of the target are not available and only the range, elevation, and azimuth angle are available from an onboard seeker sensor. A separate EKF is designed to estimate the UAV's own state where the state measurement is available through on-board sensors. The controller computes the three control commands based on the estimated states of target and its own states. Estimation based control laws is also implemented for colored noise measurement uncertainties, and the controller performance is shown with the simulation results. The estimation based control approach is then extended for the cooperative target tracking case. The target information is available to the network and a separate estimator is used to estimate target states. All of the UAVs in the network apply the same control law and the only difference is that each UAV updates the commands according to their connection. 
The simulation is performed for both cases of fixed and time varying communication topology. Monte Carlo simulation is also performed with different sample noises to investigate the performance of the estimator. The proposed technique is shown to be simple and robust to noisy environments.
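The estimation layer described above can be illustrated with a reduced example: an extended Kalman filter for a planar constant-velocity target observed only through range and azimuth. This is a simplification of the three-dimensional range/elevation/azimuth seeker model in the dissertation, and all noise levels are assumed.

```python
import numpy as np

def ekf_range_bearing(zs, dt=1.0, q=0.1, r_range=1.0, r_bearing=0.01):
    """EKF for a planar constant-velocity target with range/azimuth
    measurements. State x = [px, py, vx, vy]; noise levels are assumed."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    Q = q * np.eye(4)
    R = np.diag([r_range**2, r_bearing**2])
    x = np.array([zs[0][0] * np.cos(zs[0][1]), zs[0][0] * np.sin(zs[0][1]), 0.0, 0.0])
    P = np.eye(4) * 10.0
    track = []
    for rng_meas, az_meas in zs:
        x = F @ x                                   # predict
        P = F @ P @ F.T + Q
        px, py = x[0], x[1]
        rho = np.hypot(px, py)
        h = np.array([rho, np.arctan2(py, px)])     # predicted measurement
        H = np.array([[px / rho, py / rho, 0, 0],   # measurement Jacobian
                      [-py / rho**2, px / rho**2, 0, 0]])
        y = np.array([rng_meas, az_meas]) - h
        y[1] = np.arctan2(np.sin(y[1]), np.cos(y[1]))   # wrap bearing residual
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        track.append(x.copy())
    return np.array(track)

# illustrative run on a synthetic straight-line target
truth = np.array([[50 + 2 * k, 30 + 1 * k] for k in range(40)], dtype=float)
rng = np.random.default_rng(0)
meas = [(np.hypot(px, py) + rng.normal(0, 1.0),
         np.arctan2(py, px) + rng.normal(0, 0.01)) for px, py in truth]
track = ekf_range_bearing(meas)
```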
Prediction of resource volumes at untested locations using simple local prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2006-01-01
This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
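The combination of a local predictor, leave-one-out jackknifing, and bootstrap bounds on the regional total can be sketched as below. The inverse-distance-weighted predictor stands in for whatever local nonparametric model cross-validation would select, so this shows only the shape of the procedure, not the Antrim Shale analysis itself; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

def jackknife_prediction_errors(train_x, train_y, target_x, predict):
    """Leave-one-out jackknife: refit the local predictor with each training
    site removed and record how the predictions at the target sites move.
    `predict(train_x, train_y, x)` is a placeholder for any local model."""
    full = np.array([predict(train_x, train_y, x) for x in target_x])
    reps = []
    for i in range(len(train_x)):
        mask = np.arange(len(train_x)) != i
        reps.append([predict(train_x[mask], train_y[mask], x) for x in target_x])
    return full, np.array(reps)            # (n_targets,), (n_train, n_targets)

def bootstrap_total_bounds(jack_reps, n_boot=2000, alpha=0.10):
    """Bootstrap the jackknife replicate totals to put rough confidence
    bounds on the regional total volume (a simplified stand-in)."""
    totals = jack_reps.sum(axis=1)
    boot = rng.choice(totals, size=(n_boot, len(totals)), replace=True).mean(axis=1)
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

def idw(train_x, train_y, x, k=5, eps=1e-9):
    """Illustrative local predictor: inverse-distance-weighted mean of k neighbours."""
    d = np.linalg.norm(train_x - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    return np.sum(w * train_y[idx]) / w.sum()

# illustrative use on a synthetic field of per-cell volumes
xy = rng.uniform(0, 10, size=(80, 2))
vol = 5.0 + xy[:, 0] + rng.normal(0, 1.0, 80)
targets = rng.uniform(0, 10, size=(25, 2))
full_pred, reps = jackknife_prediction_errors(xy, vol, targets, idw)
lo_bound, hi_bound = bootstrap_total_bounds(reps)
```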
Nandy, Maitreyee; Sarkar, P K; Sanami, T; Takada, M; Shibata, T
2016-09-01
Measured neutron energy distributions emitted from a thick stopping target of natural carbon at 0°, 30°, 60° and 90°, from nuclear reactions caused by 12 MeV amu⁻¹ incident ¹²C⁵⁺ ions, were converted to energy differential and total neutron absorbed dose as well as ambient dose equivalent H*(10) using the fluence-to-dose conversion coefficients provided by the ICRP. Theoretical estimates were obtained using the Monte Carlo nuclear reaction model code PACE and a few existing empirical formulations for comparison. Results from the PACE code showed an underestimation of the high-energy part of the energy differential dose distributions at forward angles, whereas the empirical formulation by Clapier and Zaidins (1983 Nucl. Instrum. Methods 217 489-94) approximated the energy-integrated angular distribution of H*(10) satisfactorily. Using the measured data, the neutron doses received by some vital human organs were estimated for anterior-posterior exposure. The estimated energy-averaged quality factors were found to vary for different organs from about 7 to about 13. Emitted neutrons with energies above 20 MeV were found to contribute about 20% of the total dose at 0°, while at 90° the contribution was reduced to about 2%.
Rutherford, M J; Abel, G A; Greenberg, D C; Lambert, P C; Lyratzopoulos, G
2015-03-31
Older women with breast cancer have poorer relative survival outcomes, but whether achieving earlier stage at diagnosis would translate to substantial reductions in mortality is uncertain. We analysed data on East of England women with breast cancer (2006-2010) aged 70+ years. We estimated survival for different stage-deprivation-age group strata using both the observed and a hypothetical stage distribution (assuming that all women aged 75+ years acquired the stage distribution of those aged 70-74 years). We subsequently estimated deaths that could be postponed beyond 5 years from diagnosis if women aged 75+ years had the hypothetical stage distribution. We projected findings to the English population using appropriate age and socioeconomic group weights. For a typically sized annual cohort in the East of England, 27 deaths in women with breast cancer aged 75+ years can be postponed within 5 years from diagnosis if their stage distribution matched that of the women aged 70-74 years (4.8% of all 566 deaths within 5 years post diagnosis in this population). Under assumptions, we estimate that the respective number for England would be 280 deaths (5.0% of all deaths within 5 years post diagnosis in this population). The findings support ongoing development of targeted campaigns aimed at encouraging prompt presentation in older women.
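The counterfactual calculation has a simple arithmetic core: apply stage-specific 5-year survival to the observed and to the hypothetical (70-74-like) stage distributions and difference the implied deaths. The sketch below uses entirely hypothetical stage distributions, survival figures, and cohort size; none of these numbers come from the study.

```python
import numpy as np

# Hypothetical inputs (stages I-IV): stage distributions and 5-year survival.
n_women_75plus = 1000                                   # assumed cohort size
observed_dist = np.array([0.30, 0.35, 0.20, 0.15])      # 75+ as diagnosed (hypothetical)
hypothetical_dist = np.array([0.40, 0.35, 0.17, 0.08])  # 70-74-like pattern (hypothetical)
surv_5yr = np.array([0.95, 0.85, 0.60, 0.25])           # stage-specific survival (hypothetical)

deaths_observed = n_women_75plus * np.sum(observed_dist * (1 - surv_5yr))
deaths_hypothetical = n_women_75plus * np.sum(hypothetical_dist * (1 - surv_5yr))
print(f"deaths postponed beyond 5 years: {deaths_observed - deaths_hypothetical:.0f}")
```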
Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials
NASA Astrophysics Data System (ADS)
Niu, Qifei; Zhang, Chi
2018-03-01
There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because of their non-invasive nature and applicability in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of target variables. Therefore, simultaneous processing of multiple data sets could potentially improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from other experimental methods. Even when the knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution of the samples, including the general shape and relative peak positions of the distribution curves. It is also found from the numerical example that the surface relaxivity of the sample could be extracted with the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared with individual inversions, the joint inversion could improve the resolution of the estimated pore size distribution because of the addition of extra data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.
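The joint inversion amounts to a nonlinear least-squares problem whose residual stacks the NMR and SIP misfits linked through shared parameters. A bare Gauss-Newton loop of that shape is sketched below with a toy two-dataset problem sharing one parameter; the regularization and the petrophysical forward models of the paper are omitted, and all values are illustrative.

```python
import numpy as np

def gauss_newton(residual, jacobian, m0, n_iter=20):
    """Generic Gauss-Newton loop: `residual(m)` stacks the misfits of all data
    sets and `jacobian(m)` is its sensitivity matrix. Regularization and
    positivity constraints are omitted in this sketch."""
    m = np.asarray(m0, dtype=float)
    for _ in range(n_iter):
        r = residual(m)
        J = jacobian(m)
        dm = np.linalg.lstsq(J, -r, rcond=None)[0]   # solve J dm = -r
        m = m + dm
    return m

# toy joint problem: two data sets constrain a shared decay rate k
t1, t2 = np.linspace(0, 1, 20), np.linspace(0, 2, 15)
k_true = 1.7
d1 = np.exp(-k_true * t1)
d2 = 0.5 * np.exp(-k_true * t2)
res = lambda m: np.concatenate([np.exp(-m[0] * t1) - d1, 0.5 * np.exp(-m[0] * t2) - d2])
jac = lambda m: np.concatenate([-t1 * np.exp(-m[0] * t1), -0.5 * t2 * np.exp(-m[0] * t2)])[:, None]
k_hat = gauss_newton(res, jac, [1.0])     # converges to ~1.7
```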
Hadron mass corrections in semi-inclusive deep-inelastic scattering
Guerrero Teran, Juan Vicente; Ethier, James J.; Accardi, Alberto; ...
2015-09-24
The spin-dependent cross sections for semi-inclusive lepton-nucleon scattering are derived in the framework of collinear factorization, including the effects of masses of the target and produced hadron at finite Q². At leading order the cross sections factorize into products of parton distribution and fragmentation functions evaluated in terms of new, mass-dependent scaling variables. Furthermore, the size of the hadron mass corrections is estimated at kinematics relevant for current and future experiments, and the implications for the extraction of parton distributions from semi-inclusive measurements are discussed.
A new statistical method for design and analyses of component tolerance
NASA Astrophysics Data System (ADS)
Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam
2017-03-01
Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often assume known distributions, including the normal distribution. Yet, if the statistical distribution of the given variable is unknown, a new statistical method must be employed to design the tolerance. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerances. We use the percentile method (PM) to estimate the distribution parameters. The findings indicate that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerances. Moreover, in the case of assembled sets, a more extensive tolerance for each component with the same target performance can be utilized.
NASA Astrophysics Data System (ADS)
Hodille, E. A.; Bernard, E.; Markelj, S.; Mougenot, J.; Becquart, C. S.; Bisson, R.; Grisolia, C.
2017-12-01
Based on macroscopic rate equation simulations of tritium migration in an actively cooled tungsten (W) plasma facing component (PFC) using the code MHIMS (migration of hydrogen isotopes in metals), an estimation has been made of the tritium retention in the ITER W divertor target under a non-uniform exponential distribution of particle fluxes. Two grades of material are considered to be exposed to tritium ions: an undamaged W and a damaged W exposed to fast fusion neutrons. Due to the strong temperature gradient in the PFC, the Soret effect’s impact on tritium retention is also evaluated for both cases. Thanks to the simulations, the evolutions of the tritium retention and the tritium migration depth are obtained as a function of the implanted flux and the number of cycles. From these evolutions, extrapolation laws are built to estimate the number of cycles needed for tritium to permeate from the implantation zone to the cooled surface and to quantify the corresponding retention of tritium throughout the W PFC.
Beyond PSInSAR: the SQUEESAR Approach
NASA Astrophysics Data System (ADS)
Ferretti, A.; Novali, F.; Fumagalli, A.; Prati, C.; Rocca, F.; Rucci, A.
2009-12-01
After a decade since the first results on ERS data, Permanent Scatterer (PS) InSAR has become an operational technology for detecting and monitoring slow surface deformation phenomena such as subsidence and uplift, landslides, seismic fault creeping, volcanic inflation, etc. Processing procedures have been continuously updated, but the core of the algorithm has not been changed significantly. As well known, in PSInSAR, the main target is the identification of individual pixels that exhibit a “PS behavior”, i.e. they are only slightly affected by both temporal and geometrical decorrelation. Typically, these scatterers correspond to man-made objects, but PS have been identified also in non-urban areas, where exposed rocks or outcrops can indeed create good radar benchmarks and enable high-quality displacement measurements. Contrary to interferogram stacking techniques, PS analyses are carried out on a pixel-by-pixel basis, with no filtering of the interferograms, in order to preserve phase values from possible incoherent clutter surrounding good radar targets. In fact, any filtering process implies a spatial smoothing of the data that could compromise - rather than improve - phase coherence, at least for isolated PS. Although the PS approach usually allows one to retrieve high quality deformation measurements on a sparse grid of good radar targets, in some datasets it is quite evident how the number of pixels where some information can be extracted could be significantly increased by relaxing the hypothesis on target coherence and searching for pixels where the coherence level is high enough at least in some interferograms of the data-stack, not necessarily all. The idea of computing a “coherence matrix” for each pixel of the area of interest have been already proposed in previous papers, together with a statistical estimation of some physical parameters of interest (e.g. the average displacement rate) based on the covariance matrix. In past publications, however, it was not highlighted how a reliable estimation of the coherence matrix can be carried out on distributed scatterers only, characterized by a sufficient number of looks, sharing the same statistics of the reflectivity values. In this paper, we propose how to estimate reliable coherence values by properly selecting the statistical population used in the estimation. In standard PSInSAR, the so-called amplitude stability index is used as a proxy for temporal phase coherence, here we expand the concept and we show how local amplitude statistics can be successfully exploited to detect distributed scatterers, rather than individual pixels, where reliable statistical parameters can be extracted. As a byproduct of carefully estimating coherence values, we get despeckled amplitude images and filtered interferograms. Coherence matrixes and distributed scatterers, apart from the well-known PS, then become invaluable sources of information that can be “squeezed” to estimate any InSAR parameter of interest (the SqueeSAR concept). Preliminary results on real datasets will be shown using both C-band and X-band SAR data.
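The coherence-matrix idea at the heart of the approach can be sketched as follows: for a pixel of interest, collect the complex SLC values of its statistically homogeneous neighbours across all acquisitions and estimate the sample complex coherence between every image pair. Interferogram flattening and the selection of the homogeneous window are assumed to have been done already; the data below are simulated speckle, not SAR imagery.

```python
import numpy as np

def coherence_matrix(stack):
    """Sketch of the per-pixel coherence matrix: `stack` holds complex SLC
    values with shape (n_images, n_window_pixels), i.e. the statistically
    homogeneous pixels surrounding the pixel of interest. Entry (i, j) is the
    sample complex coherence between acquisitions i and j over that window."""
    n = stack.shape[0]
    power = np.mean(np.abs(stack) ** 2, axis=1)
    C = np.eye(n, dtype=complex)
    for i in range(n):
        for j in range(i + 1, n):
            C[i, j] = np.mean(stack[i] * np.conj(stack[j])) / np.sqrt(power[i] * power[j])
            C[j, i] = np.conj(C[i, j])
    return C

# illustrative use: 5 acquisitions, 50 homogeneous pixels of simulated speckle
rng = np.random.default_rng(3)
slc = (rng.normal(size=(5, 50)) + 1j * rng.normal(size=(5, 50))) / np.sqrt(2)
print(np.round(np.abs(coherence_matrix(slc)), 2))
```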
Rapid assessment of target species: Byssate bivalves in a large tropical port.
Minchin, Dan; Olenin, Sergej; Liu, Ta-Kang; Cheng, Muhan; Huang, Sheng-Chih
2016-11-15
Rapid assessment sampling for target species is a fast cost-effective method aimed at determining the presence, abundance and distribution of alien and native harmful aquatic organisms and pathogens that may have been introduced by shipping. In this study, the method was applied within a large tropical port expected to have a high species diversity. The port of Kaohsiung was sampled for bivalve molluscan species that attach using a byssus. Such species, due to their biological traits, are spread by ships to ports worldwide. We estimated the abundance and distribution range of one dreissenid (Mytilopsis sallei) and four mytilids (Brachidontes variabilis, Arcuatula senhousa, Mytilus galloprovincialis, Perna viridis) known to be successful invaders and identified as potential pests, or high-risk harmful native or non-native species. We conclude that a rapid assessment of their abundance and distribution within a port, and its vicinity, is efficient and can provide sufficient information for decision making by port managers where IMO port exemptions may be sought. Copyright © 2016. Published by Elsevier Ltd.
Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N.; Meng, Fande
2015-01-01
Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, evaluating the benefits of BMP implementation on receiving water quality at multiple spatial scales is an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., Phosphorus (P) index), model simulation techniques (Hydrological Simulation Program–FORTRAN), and a BMP placement tool at various scales to identify the optimal location for implementing multiple BMPs and estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed to evaluate the effectiveness of BMPs, strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a "best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in Chaohe River Watershed located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreased P loss from the watershed. Remedial strategies where BMPs were targeted to areas of high risk of P loss decreased P loads compared with strategies where BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement in particular watersheds is the most cost-effective scenario to decrease P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales from field to farm, to watersheds, to regions. Further, model estimates showed that targeting at multiple scales is necessary to optimize program efficiency. The integrated model approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds. PMID:26313561
Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N; Meng, Fande
2015-01-01
Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, evaluating the benefits of BMP implementation on receiving water quality at multiple spatial scales is an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., Phosphorus (P) index), model simulation techniques (Hydrological Simulation Program-FORTRAN), and a BMP placement tool at various scales to identify the optimal location for implementing multiple BMPs and estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed to evaluate the effectiveness of BMPs, strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a "best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in Chaohe River Watershed located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreased P loss from the watershed. Remedial strategies where BMPs were targeted to areas of high risk of P loss decreased P loads compared with strategies where BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement in particular watersheds is the most cost-effective scenario to decrease P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales from field to farm, to watersheds, to regions. Further, model estimates showed that targeting at multiple scales is necessary to optimize program efficiency. The integrated model approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds.
Statistical Application and Cost Saving in a Dental Survey.
Chyou, Po-Huang; Schroeder, Dixie; Schwei, Kelsey; Acharya, Amit
2017-06-01
To effectively achieve a robust survey response rate in a timely manner, an alternative approach to survey distribution, informed by statistical modeling, was applied to efficiently and cost-effectively achieve the targeted rate of return. A prospective environmental scan surveying adoption of health information technology utilization within their practices was undertaken in a national pool of dental professionals (N=8000) using an alternative method of sampling. The piloted approach to rate of cohort sampling targeted a response rate of 400 completed surveys from among randomly targeted eligible providers who were contacted using replicated subsampling leveraging mailed surveys. Two replicated subsample mailings (n=1000 surveys/mailings) were undertaken to project the true response rate and estimate the total number of surveys required to achieve the final target. Cost effectiveness and non-response bias analyses were performed. The final mailing required approximately 24% fewer mailings compared to targeting of the entire cohort, with a final survey capture exceeding the expected target. An estimated $5000 in cost savings was projected by applying the alternative approach. Non-response analyses found no evidence of bias relative to demographics, practice demographics, or topically-related survey questions. The outcome of this pilot study suggests that this approach to survey studies will accomplish targeted enrollment in a cost effective manner. Future studies are needed to validate this approach in the context of other survey studies. © 2017 Marshfield Clinic.
Statistical Application and Cost Saving in a Dental Survey
Chyou, Po-Huang; Schroeder, Dixie; Schwei, Kelsey; Acharya, Amit
2017-01-01
Objective: To effectively achieve a robust survey response rate in a timely manner, an alternative approach to survey distribution, informed by statistical modeling, was applied to efficiently and cost-effectively achieve the targeted rate of return. Design: A prospective environmental scan surveying adoption of health information technology within dental practices was undertaken in a national pool of dental professionals (N=8000) using an alternative method of sampling. The piloted cohort-sampling approach targeted 400 completed surveys from randomly selected eligible providers, who were contacted via replicated subsampling of mailed surveys. Methods: Two replicated subsample mailings (n=1000 surveys/mailing) were undertaken to project the true response rate and estimate the total number of surveys required to achieve the final target. Cost-effectiveness and non-response bias analyses were performed. Results: The final mailing required approximately 24% fewer mailings compared to targeting of the entire cohort, with a final survey capture exceeding the expected target. An estimated $5000 in cost savings was projected by applying the alternative approach. Non-response analyses found no evidence of bias relative to demographics, practice demographics, or topically-related survey questions. Conclusion: The outcome of this pilot study suggests that this approach to survey studies will accomplish targeted enrollment in a cost-effective manner. Future studies are needed to validate this approach in the context of other survey studies. PMID:28373286
Identification of transmissivity fields using a Bayesian strategy and perturbative approach
NASA Astrophysics Data System (ADS)
Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.
2017-10-01
The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion in order to estimate the hyperparameters (related to the covariance model chosen) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. In order to test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY² = 1.0 and σY² = 5.3). The estimated transmissivity fields were compared to the true ones. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Although the variance of the more heterogeneous transmissivity field can be considered high for the application of the perturbative approach, the results show the same order of approximation as the non-linear methods proposed in the literature. The procedure allows computation of the posterior probability distribution of the target quantities and quantification of the uncertainty in the model prediction. Bayesian updating combines advantages of both Monte Carlo (MC) and non-MC approaches: like MC methods, it yields the full posterior probability distribution of the target quantities, and like non-MC methods it has computational times on the order of seconds.
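A minimal sketch of the two-step idea (interpolation of Y = lnT from direct observations, followed by a linearized update with head data) is given below, using a generic linear-Gaussian conditional update rather than the authors' Akaike-based hyperparameter estimation. The covariance model, sensitivity matrix, and noise levels are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D grid of ln-transmissivity values to estimate.
x = np.linspace(0.0, 10.0, 50)

def exp_cov(xi, xj, sigma2=1.0, ell=2.0):
    """Exponential covariance model for Y = lnT (placeholder hyperparameters)."""
    return sigma2 * np.exp(-np.abs(xi[:, None] - xj[None, :]) / ell)

prior_mean = np.zeros_like(x)
prior_cov = exp_cov(x, x)

def gaussian_update(mean, cov, H, y, noise_var):
    """Condition a Gaussian field on linear observations y = H m + noise."""
    S = H @ cov @ H.T + noise_var * np.eye(len(y))
    K = cov @ H.T @ np.linalg.solve(S, np.eye(len(y)))
    return mean + K @ (y - H @ mean), cov - K @ H @ cov

# Step 1: interpolation from direct Y* observations at 5 wells.
obs_idx = np.array([5, 15, 25, 35, 45])
H_y = np.zeros((5, 50)); H_y[np.arange(5), obs_idx] = 1.0
y_star = rng.normal(0.0, 1.0, size=5)                    # synthetic Y* measurements
mean1, cov1 = gaussian_update(prior_mean, prior_cov, H_y, y_star, noise_var=0.05)

# Step 2: update with head data, linearized as h ≈ G Y + noise.
G = rng.normal(0.0, 0.1, size=(8, 50))                   # placeholder sensitivity matrix
heads = G @ rng.normal(0.0, 1.0, size=50)                # synthetic head observations
mean2, cov2 = gaussian_update(mean1, cov1, G, heads, noise_var=0.01)

print("mean posterior std before/after head data:",
      np.sqrt(np.diag(cov1)).mean().round(3),
      np.sqrt(np.diag(cov2)).mean().round(3))
```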
Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach
NASA Astrophysics Data System (ADS)
Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam
2018-03-01
We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.
Causal Methods for Observational Research: A Primer.
Almasi-Hashiani, Amir; Nedjat, Saharnaz; Mansournia, Mohammad Ali
2018-04-01
The goal of many observational studies is to estimate the causal effect of an exposure on an outcome after adjustment for confounders, but serious errors in adjusting for confounders persist in clinical journals. Standard regression modeling (e.g., ordinary logistic regression) fails to estimate the average effect of exposure in the total population in the presence of interaction between exposure and covariates, and also cannot appropriately adjust for time-varying confounding. Moreover, stepwise algorithms that select confounders based on P values may miss important confounders and lead to bias in effect estimates. Causal methods overcome these limitations. We illustrate three causal methods: inverse-probability-of-treatment weighting (IPTW), the parametric g-formula, and a combination of the two, targeted maximum likelihood estimation (TMLE), which enjoys a double-robustness property against bias. © 2018 The Author(s). This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
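A minimal IPTW sketch (one of the three methods named above) on simulated data; the confounder, treatment, and outcome below are synthetic, and a real analysis would add stabilized weights, diagnostics, and proper variance estimation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Simulated confounder L affects both treatment A and outcome Y.
L = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-0.8 * L)))
Y = 2.0 * A + 1.5 * L + rng.normal(size=n)      # true average treatment effect = 2.0

# Fit a propensity score model and form inverse-probability-of-treatment weights.
ps = LogisticRegression().fit(L.reshape(-1, 1), A).predict_proba(L.reshape(-1, 1))[:, 1]
w = np.where(A == 1, 1 / ps, 1 / (1 - ps))

# Weighted difference in means estimates the marginal (population-average) effect.
mu1 = np.sum(w * A * Y) / np.sum(w * A)
mu0 = np.sum(w * (1 - A) * Y) / np.sum(w * (1 - A))
print("naive difference in means:", Y[A == 1].mean() - Y[A == 0].mean())
print("IPTW estimate:            ", mu1 - mu0)
```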
Vehicle-based Methane Mapping Helps Find Natural Gas Leaks and Prioritize Leak Repairs
NASA Astrophysics Data System (ADS)
von Fischer, J. C.; Weller, Z.; Roscioli, J. R.; Lamb, B. K.; Ferrara, T.
2017-12-01
Recently, mobile methane sensing platforms have been developed to detect and locate natural gas (NG) leaks in urban distribution systems and to estimate their size. Although this technology has already been used in targeted deployment for prioritization of NG pipeline infrastructure repair and replacement, one open question regarding this technology is how effective the resulting data are for prioritizing infrastructure repair and replacement. To answer this question we explore the accuracy and precision of the natural gas leak location and emission estimates provided by methane sensors placed on Google Street View (GSV) vehicles. We find that the vast majority (75%) of methane emitting sources detected by these mobile platforms are NG leaks and that the location estimates are effective at identifying the general location of leaks. We also show that the emission rate estimates from mobile detection platforms are able to effectively rank NG leaks for prioritizing leak repair. Our findings establish that mobile sensing platforms are an efficient and effective tool for improving the safety and reducing the environmental impacts of low-pressure NG distribution systems by reducing atmospheric methane emissions.
Joint reconstruction of multiview compressed images.
Thirumalai, Vijayaraghavan; Frossard, Pascal
2013-05-01
Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem, where the distributively compressed images are decoded together in order to exploit the inter-image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among the different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.
Sargeant, Glen A.; Sovada, Marsha A.; Slivinski, Christiane C.; Johnson, Douglas H.
2005-01-01
Accurate maps of species distributions are essential tools for wildlife research and conservation. Unfortunately, biologists often are forced to rely on maps derived from observed occurrences recorded opportunistically during observation periods of variable length. Spurious inferences are likely to result because such maps are profoundly affected by the duration and intensity of observation and by methods used to delineate distributions, especially when detection is uncertain. We conducted a systematic survey of swift fox (Vulpes velox) distribution in western Kansas, USA, and used Markov chain Monte Carlo (MCMC) image restoration to rectify these problems. During 1997–1999, we searched 355 townships (ca. 93 km²) 1–3 times each for an average cost of $7,315 per year and achieved a detection rate (probability of detecting swift foxes, if present, during a single search) of 0.69 (95% Bayesian confidence interval [BCI] = [0.60, 0.77]). Our analysis produced an estimate of the underlying distribution, rather than a map of observed occurrences, that reflected the uncertainty associated with estimates of model parameters. To evaluate our results, we analyzed simulated data with similar properties. Results of our simulations suggest negligible bias and good precision when probabilities of detection on ≥1 survey occasions (cumulative probabilities of detection) exceed 0.65. Although the use of MCMC image restoration has been limited by theoretical and computational complexities, alternatives do not possess the same advantages. Image models accommodate uncertain detection, do not require spatially independent data or a census of map units, and can be used to estimate species distributions directly from observations without relying on habitat covariates or parameters that must be estimated subjectively. These features facilitate economical surveys of large regions, the detection of temporal trends in distribution, and assessments of landscape-level relations between species and habitats. Requirements for the use of MCMC image restoration include study areas that can be partitioned into regular grids of mapping units, spatially contagious species distributions, reliable methods for identifying target species, and cumulative probabilities of detection ≥0.65.
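The role of the cumulative detection probability can be illustrated with a simple Bayes calculation: the posterior probability that a township is occupied given that several searches found no foxes. This is a simplified stand-in for the spatial image model; the prior occupancy probability below is hypothetical, while the per-search detection rate is the value reported in the abstract.

```python
def posterior_occupancy(prior, p_detect, n_searches):
    """P(occupied | no detections in n searches) for per-search detection p_detect."""
    miss = (1 - p_detect) ** n_searches
    return prior * miss / (prior * miss + (1 - prior))

prior = 0.5          # hypothetical prior probability that a township is occupied
p = 0.69             # per-search detection rate reported in the abstract

for n in (1, 2, 3):
    cumulative = 1 - (1 - p) ** n
    post = posterior_occupancy(prior, p, n)
    print(f"searches={n}  cumulative detection={cumulative:.2f}  "
          f"P(occupied | none found)={post:.2f}")
```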
Sargeant, G.A.; Sovada, M.A.; Slivinski, C.C.; Johnson, D.H.
2005-01-01
Accurate maps of species distributions are essential tools for wildlife research and conservation. Unfortunately, biologists often are forced to rely on maps derived from observed occurrences recorded opportunistically during observation periods of variable length. Spurious inferences are likely to result because such maps are profoundly affected by the duration and intensity of observation and by methods used to delineate distributions, especially when detection is uncertain. We conducted a systematic survey of swift fox (Vulpes velox) distribution in western Kansas, USA, and used Markov chain Monte Carlo (MCMC) image restoration to rectify these problems. During 1997-1999, we searched 355 townships (ca. 93 km²) 1-3 times each for an average cost of $7,315 per year and achieved a detection rate (probability of detecting swift foxes, if present, during a single search) of 0.69 (95% Bayesian confidence interval [BCI] = [0.60, 0.77]). Our analysis produced an estimate of the underlying distribution, rather than a map of observed occurrences, that reflected the uncertainty associated with estimates of model parameters. To evaluate our results, we analyzed simulated data with similar properties. Results of our simulations suggest negligible bias and good precision when probabilities of detection on ≥1 survey occasions (cumulative probabilities of detection) exceed 0.65. Although the use of MCMC image restoration has been limited by theoretical and computational complexities, alternatives do not possess the same advantages. Image models accommodate uncertain detection, do not require spatially independent data or a census of map units, and can be used to estimate species distributions directly from observations without relying on habitat covariates or parameters that must be estimated subjectively. These features facilitate economical surveys of large regions, the detection of temporal trends in distribution, and assessments of landscape-level relations between species and habitats. Requirements for the use of MCMC image restoration include study areas that can be partitioned into regular grids of mapping units, spatially contagious species distributions, reliable methods for identifying target species, and cumulative probabilities of detection ≥0.65.
Attanasi, E.D.; Charpentier, R.R.
2002-01-01
Undiscovered oil and gas assessments are commonly reported as aggregate estimates of hydrocarbon volumes. Potential commercial value and discovery costs are, however, determined by accumulation size, so engineers, economists, decision makers, and sometimes policy analysts are most interested in projected discovery sizes. The lognormal and Pareto distributions have been used to model exploration target sizes. This note contrasts the outcomes of applying these alternative distributions to the play level assessments of the U.S. Geological Survey's 1995 National Oil and Gas Assessment. Using the same numbers of undiscovered accumulations and the same minimum, medium, and maximum size estimates, substitution of the shifted truncated lognormal distribution for the shifted truncated Pareto distribution reduced assessed undiscovered oil by 16% and gas by 15%. Nearly all of the volume differences resulted because the lognormal had fewer larger fields relative to the Pareto. The lognormal also resulted in a smaller number of small fields relative to the Pareto. For the Permian Basin case study presented here, reserve addition costs were 20% higher with the lognormal size assumption. ?? 2002 International Association for Mathematical Geology.
Ellipsoids for anomaly detection in remote sensing imagery
NASA Astrophysics Data System (ADS)
Grosklos, Guenchik; Theiler, James
2015-05-01
For many target and anomaly detection algorithms, a key step is the estimation of a centroid (relatively easy) and a covariance matrix (somewhat harder) that characterize the background clutter. For a background that can be modeled as a multivariate Gaussian, the centroid and covariance lead to an explicit probability density function that can be used in likelihood ratio tests for optimal detection statistics. But ellipsoidal contours can characterize a much larger class of multivariate density functions, and the ellipsoids that characterize the outer periphery of the distribution are most appropriate for detection in the low false-alarm-rate regime. Traditionally the sample mean and sample covariance are used to estimate ellipsoid location and shape, but these quantities are confounded both by large lever-arm outliers and by non-Gaussian distributions within the ellipsoid of interest. This paper compares a variety of centroid and covariance estimation schemes with the aim of characterizing the periphery of the background distribution. In particular, we consider a robust variant of the Khachiyan algorithm for the minimum-volume enclosing ellipsoid. The performance of these different approaches is evaluated on multispectral and hyperspectral remote sensing imagery using coverage plots of ellipsoid volume versus false alarm rate.
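The sensitivity of the sample covariance to lever-arm outliers can be sketched with scikit-learn's minimum covariance determinant estimator (a different robust estimator than the robust Khachiyan variant studied in the paper); the data are synthetic.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(0)

# Synthetic "background" spectra (5 bands) plus a few large lever-arm outliers.
background = rng.multivariate_normal(np.zeros(5), np.eye(5), size=2000)
outliers = rng.multivariate_normal(np.full(5, 8.0), 4 * np.eye(5), size=20)
data = np.vstack([background, outliers])

sample_fit = EmpiricalCovariance().fit(data)
robust_fit = MinCovDet(random_state=0).fit(data)

# Squared Mahalanobis distances under each background model.
d_sample = sample_fit.mahalanobis(data)
d_robust = robust_fit.mahalanobis(data)

# Threshold at a fixed false-alarm rate estimated on the background pixels only.
thr_sample = np.quantile(d_sample[:2000], 0.999)
thr_robust = np.quantile(d_robust[:2000], 0.999)
print("outliers flagged (sample cov):", (d_sample[2000:] > thr_sample).sum(), "of 20")
print("outliers flagged (robust cov):", (d_robust[2000:] > thr_robust).sum(), "of 20")
```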
Harrison, J D; Muirhead, C R
2003-01-01
To compare quantitative estimates of lifetime cancer risk in humans for exposures to internally deposited radionuclides and external radiation. To assess the possibility that risks from radionuclide exposures may be underestimated. Risk estimates following internal exposures can be made for a small number of alpha-particle-emitting nuclides. (1) Lung cancer in underground miners exposed by inhalation to radon-222 gas and its short-lived progeny. Studies of residential (222)Rn exposure are generally consistent with predictions from the miner studies. (2) Liver cancer and leukaemia in patients given intravascular injections of Thorotrast, a thorium-232 oxide preparation that concentrates in liver, spleen and bone marrow. (3) Bone cancer in patients given injections of radium-224, and in workers exposed occupationally to (226)Ra and (228)Ra, mainly by ingestion. (4) Lung cancer in Mayak workers exposed to plutonium-239, mainly by inhalation. Liver and bone cancers were also seen, but the dosimetry is not yet good enough to provide quantitative estimates of risks. Comparisons can be made between risk estimates for radiation-induced cancer derived for radionuclide exposure and those derived for the A-bomb survivors, exposed mainly to low-LET (linear energy transfer) external radiation. Data from animal studies, using dogs and rodents, allow comparisons of cancer induction by a range of alpha- and beta-/gamma-emitting radionuclides. They provide information on relative biological effectiveness (RBE), dose-response relationships, dose-rate effects and the location of target cells for different malignancies. For lung and liver cancer, the estimated values of risk per Sv for internal exposure, assuming an RBE for alpha-particles of 20, are reasonably consistent with estimates for external exposure to low-LET radiation. This also applies to bone cancer when risk is calculated on the basis of average bone dose, but consideration of dose to target cells on bone surfaces suggests a low RBE for alpha-particles. Similarly, for leukaemia, the comparison of risks from alpha-irradiation ((232)Th and progeny) and external radiation suggests a low alpha RBE; this conclusion is supported by animal data. Risk estimates for internal exposure are dependent on the assumptions made in calculating dose. Account is taken of the distribution of radionuclides within tissues and the distribution of target cells for cancer induction. For the lungs and liver, the available human and animal data provide support for current assumptions. However, for bone cancer and leukaemia, it may be that changes are required. Bone cancer risk may be best assessed by calculating dose to a 50 µm layer of marrow adjacent to endosteal (inner) bone surfaces rather than to a single 10 µm cell layer as currently assumed. Target cells for leukaemia may be concentrated towards the centre of marrow cavities, so that the risk of leukaemia from bone-seeking radionuclides, particularly alpha emitters, may be overestimated by the current assumption of a uniform distribution of target cells throughout red bone marrow. The lifetime risk estimates considered here for exposure to internally deposited radionuclides and to external radiation are subject to uncertainties arising from the dosimetric assumptions made, from the quality of cancer incidence and mortality data, and from aspects of risk modelling, including variations in baseline rates between populations for some cancer types.
Bearing in mind such uncertainties, comparisons of risk estimates for internal emitters and external radiation show good agreement for lung and liver cancers. For leukaemia, the available data suggest that the assumption of an alpha-particle RBE of 20 can result in overestimates of risk. For bone cancer, it also appears that current assumptions will overestimate risks from alpha-particle-emitting nuclides, particularly at low doses.
Historical emissions critical for mapping decarbonization pathways
NASA Astrophysics Data System (ADS)
Majkut, J.; Kopp, R. E.; Sarmiento, J. L.; Oppenheimer, M.
2016-12-01
Policymakers have set a goal of limiting temperature increase from human influence on the climate. This motivates the identification of decarbonization pathways to stabilize atmospheric concentrations of CO2. In this context, the future behavior of CO2 sources and sinks defines the CO2 emissions necessary to meet warming thresholds with specified probabilities. We adopt a simple model of the atmosphere-land-ocean carbon balance to reflect uncertainty in how natural CO2 sinks will respond to increasing atmospheric CO2 and temperature. Bayesian inversion is used to estimate the probability distributions of selected parameters of the carbon model. Prior probability distributions are chosen to reflect the behavior of CMIP5 models. We then update these prior distributions by running historical simulations of the global carbon cycle and inverting with observationally based inventories and fluxes of anthropogenic carbon in the ocean and atmosphere. The result is a best estimate of historical CO2 sources and sinks and a model of how CO2 sources and sinks will vary in the future under various emissions scenarios, with uncertainty. By linking the carbon model to a simple climate model, we calculate emissions pathways and carbon budgets consistent with meeting specific temperature thresholds and identify key factors that contribute to the remaining uncertainty. In particular, we show how the assumed history of CO2 emissions from land use change (LUC) critically impacts estimates of the strength of the land CO2 sink via CO2 fertilization. Different estimates of historical LUC emissions taken from the literature lead to significantly different parameterizations of the carbon system. High historical CO2 emissions from LUC lead to a more robust CO2 fertilization effect, significantly lower future atmospheric CO2 concentrations, and an increased amount of CO2 that can be emitted while satisfying temperature stabilization targets. Thus, in our model, historical LUC emissions have a significant impact on allowable carbon budgets under temperature targets.
Liu, Cheng; Li, Shiying; Gu, Yanjuan; Xiong, Huahua; Wong, Wing-Tak; Sun, Lei
2018-05-07
Tumor proteases have been recognized as significant regulators in the tumor microenvironment, but current strategies for in vivo protease imaging have tended to focus on probe design rather than on novel imaging strategies that leverage both the imaging technique and the probe. Herein, we present the first report investigating the ability of multispectral photoacoustic imaging (PAI) to estimate the distribution of protease cleavage sites inside living tumor tissue by using an activatable photoacoustic (PA) probe. The protease MMP-2 is selected as the target. In this probe, gold nanocages (GNCs) with an absorption peak at ~800 nm and fluorescent dye molecules with an absorption peak at ~680 nm are conjugated via a specific enzymatic peptide substrate. Upon enzymatic activation by MMP-2, the peptide substrate is cleaved and the chromophores are released. Due to the different retention speeds of the large GNCs and the small dye molecules, the probe alters its intrinsic absorption profile and produces a distinct change in the PA signal. A multispectral PAI technique that can distinguish different chromophores based on their intrinsic PA spectral signatures is applied to estimate the signal composition changes and indicate the cleavage interaction sites. Finally, the multispectral PAI technique with the activatable probe is tested in solution, in cultured cells, and in a subcutaneous tumor model in vivo. Our experiments in solution with enzyme ± inhibitor, in cell culture ± inhibitor, and in an in vivo tumor model with administration of the developed probe ± inhibitor demonstrated that the probe was cleaved by the targeted enzyme. In particular, the in vivo estimation of the cleavage site distribution was validated against the results of ex vivo immunohistochemistry analysis. This novel synergy of the multispectral PAI technique and the activatable probe is a potential strategy for estimating the distribution of tumor protease activity in vivo.
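The multispectral decomposition underlying this approach can be illustrated by non-negative least-squares unmixing of a measured PA spectrum into two chromophore contributions. The reference spectra below are simple synthetic Gaussians centred near 800 nm (GNC-like) and 680 nm (dye-like), not measured absorption data.

```python
import numpy as np
from scipy.optimize import nnls

wavelengths = np.arange(660, 921, 20)                    # nm, illustrative band set

def peak(center, width=60.0):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Hypothetical reference absorption spectra of the two chromophores.
spectrum_gnc = peak(800)      # gold-nanocage-like component
spectrum_dye = peak(680)      # dye-like component
A = np.column_stack([spectrum_gnc, spectrum_dye])

# Synthetic multispectral PA measurement from a 30/70 mixture plus noise.
true_weights = np.array([0.3, 0.7])
measured = A @ true_weights + np.random.default_rng(0).normal(0, 0.01, len(wavelengths))

# Non-negative least squares recovers the relative chromophore abundances.
weights, _ = nnls(A, measured)
print("estimated GNC/dye fractions:", np.round(weights / weights.sum(), 2))
```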
Estimating the effectiveness of further sampling in species inventories
Keating, K.A.; Quinn, J.F.; Ivie, M.A.; Ivie, L.L.
1998-01-01
Estimators of the number of additional species expected in the next Δn samples offer a potentially important tool for improving the cost-effectiveness of species inventories but are largely untested. We used Monte Carlo methods to compare 11 such estimators across a range of community structures and sampling regimes, and validated our results, where possible, using empirical data from vascular plant and beetle inventories from Glacier National Park, Montana, USA. We found that B. Efron and R. Thisted's 1976 negative binomial estimator was most robust to differences in community structure and that it was among the most accurate estimators when sampling was from model communities with structures resembling the large, heterogeneous communities that are the likely targets of major inventory efforts. Other estimators may be preferred under specific conditions, however. For example, when sampling was from model communities with highly even species-abundance distributions, estimates based on the Michaelis-Menten model were most accurate; when sampling was from moderately even model communities with S=10 species or communities with highly uneven species-abundance distributions, estimates based on Gleason's (1922) species-area model were most accurate. We suggest that use of such methods in species inventories can help improve cost-effectiveness by providing an objective basis for redirecting sampling to more-productive sites, methods, or time periods as the expectation of detecting additional species becomes unacceptably low.
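As a sketch of one of the compared estimators, the code below fits a Michaelis-Menten species-accumulation curve to a synthetic accumulation record and extrapolates the expected number of additional species; the data are simulated, not the Glacier National Park inventories.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Simulate sampling from a community with uneven species abundances.
n_species, n_samples = 200, 400
abundances = rng.dirichlet(np.full(n_species, 0.3))
draws = rng.choice(n_species, size=n_samples, p=abundances)
richness = np.array([len(set(draws[:k])) for k in range(1, n_samples + 1)])
effort = np.arange(1, n_samples + 1)

def michaelis_menten(n, s_max, b):
    """Expected cumulative species richness after n samples."""
    return s_max * n / (b + n)

(s_max, b), _ = curve_fit(michaelis_menten, effort, richness, p0=[n_species, 50.0])

extra = 200                                       # additional samples being considered
expected_new = michaelis_menten(n_samples + extra, s_max, b) - richness[-1]
print(f"observed so far: {richness[-1]}, asymptote estimate: {s_max:.0f}")
print(f"expected new species in next {extra} samples: {expected_new:.1f}")
```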
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flampouri, S; Li, Z; Hoppe, B
2015-06-15
Purpose: To develop a treatment planning method for passively-scattered involved-node proton therapy of mediastinal lymphoma robust to breathing and cardiac motions. Methods: Beam-specific planning treatment volumes (bsPTV) are calculated for each proton field to incorporate pertinent uncertainties. Geometric margins are added laterally to each beam while margins for range uncertainty due to setup errors, breathing, and calibration curve uncertainties are added along each beam. The calculation of breathing motion and deformation effects on proton range includes all 4DCT phases. The anisotropic water equivalent margins are translated to distances on average 4DCT. Treatment plans are designed so each beam adequately covers the corresponding bsPTV. For targets close to the heart, cardiac motion effects on dose maps are estimated by using a library of anonymous ECG-gated cardiac CTs (cCT). The cCT, originally contrast-enhanced, are partially overridden to allow meaningful proton dose calculations. Targets similar to the treatment targets are drawn on one or more cCT sets matching the anatomy of the patient. Plans based on the average cCT are calculated on individual phases, then deformed to the average and accumulated. When clinically significant dose discrepancies occur between planned and accumulated doses, the patient plan is modified to reduce the cardiac motion effects. Results: We found that bsPTVs as planning targets create dose distributions similar to the conventional proton planning distributions, while they are a valuable tool for visualization of the uncertainties. For large targets with variability in motion and depth, integral dose was reduced because of the anisotropic margins. In most cases, heart motion has a clinically insignificant effect on target coverage. Conclusion: A treatment planning method was developed and used for proton therapy of mediastinal lymphoma. The technique incorporates bsPTVs compensating for all common sources of uncertainties and estimation of the effects of cardiac motion not commonly performed.
Nuclear collective flow and charged-pion emission in Ne-nucleus collisions at E/A = 800 MeV
NASA Technical Reports Server (NTRS)
Gosset, J.; Valette, O.; Babinet, R.; Alard, J. P.; Augerat, J.
1989-01-01
Triple-differential cross sections of charged pions were measured for collisions of Ne projectiles at E/A = 800 MeV with NaF, Nb, and Pb targets. The reaction plane was estimated event by event from the light-baryon momentum distribution. For heavy targets, preferential emission of charged pions away from the interaction zone toward the projectile side was observed in the transverse direction. Such a preferential emission, which is not predicted by cascade calculations, may be attributed to a stronger pion absorption by the heavier spectator remnant.
Nuclear collective flow and charged-pion emission in Ne-nucleus collisions at E/A = 800 MeV
NASA Technical Reports Server (NTRS)
Gosset, J.; Valette, O.; Alard, J. P.; Augerat, J.; Babinet, R.; Bastid, N.; Brochard, F.; De Marco, N.; Dupieux, P.; Fodor, Z.;
1989-01-01
Triple-differential cross sections of charged pions were measured for collisions of Ne projectiles at E/A = 800 MeV with NaF, Nb, and Pb targets. The reaction plane was estimated event by event from the light-baryon momentum distribution. For heavy targets, preferential emission of charged pions away from the interaction zone towards the projectile side was observed in the transverse direction. Such a preferential emission, which is not predicted by cascade calculations, may be attributed to a stronger pion absorption by the heavier spectator remnant.
Simons, Emily; Ferrari, Matthew; Fricks, John; Wannemuehler, Kathleen; Anand, Abhijeet; Burton, Anthony; Strebel, Peter
2012-06-09
In 2008 all WHO member states endorsed a target of 90% reduction in measles mortality by 2010 over 2000 levels. We developed a model to estimate progress made towards this goal. We constructed a state-space model with population and immunisation coverage estimates and reported surveillance data to estimate annual national measles cases, distributed across age classes. We estimated deaths by applying age-specific and country-specific case-fatality ratios to estimated cases in each age-country class. Estimated global measles mortality decreased 74% from 535,300 deaths (95% CI 347,200-976,400) in 2000 to 139,300 (71,200-447,800) in 2010. Measles mortality was reduced by more than three-quarters in all WHO regions except the WHO southeast Asia region. India accounted for 47% of estimated measles mortality in 2010, and the WHO African region accounted for 36%. Despite rapid progress in measles control from 2000 to 2007, delayed implementation of accelerated disease control in India and continued outbreaks in Africa stalled momentum towards the 2010 global measles mortality reduction goal. Intensified control measures and renewed political and financial commitment are needed to achieve mortality reduction targets and lay the foundation for future global eradication of measles. US Centers for Disease Control and Prevention (PMS 5U66/IP000161). Copyright © 2012 Elsevier Ltd. All rights reserved.
Small area variation in diabetes prevalence in Puerto Rico.
Tierney, Edward F; Burrows, Nilka R; Barker, Lawrence E; Beckles, Gloria L; Boyle, James P; Cadwell, Betsy L; Kirtland, Karen A; Thompson, Theodore J
2013-06-01
To estimate the 2009 prevalence of diagnosed diabetes in Puerto Rico among adults ≥ 20 years of age in order to gain a better understanding of its geographic distribution so that policymakers can more efficiently target prevention and control programs. A Bayesian multilevel model was fitted to the combined 2008-2010 Behavioral Risk Factor Surveillance System and 2009 United States Census data to estimate diabetes prevalence for each of the 78 municipios (counties) in Puerto Rico. The mean unadjusted estimate for all counties was 14.3% (range by county, 9.9%-18.0%). The average width of the confidence intervals was 6.2%. Adjusted and unadjusted estimates differed little. These 78 county estimates are higher on average and showed less variability (i.e., had a smaller range) than the previously published estimates of the 2008 diabetes prevalence for all United States counties (mean, 9.9%; range, 3.0%-18.2%).
NASA Astrophysics Data System (ADS)
Suyama, Taku; Bae, Hansin; Setaka, Kenta; Ogawa, Hayato; Fukuoka, Yushi; Suzuki, Haruka; Toyoda, Hirotaka
2017-11-01
O⁻ ion flux from the indium tin oxide (ITO) sputter target under Ar ion bombardment is quantitatively evaluated using a calorimetry method. Using a mass spectrometer with an energy analyzer, the O⁻ energy distribution is measured with spatial dependence. Directional high-energy O⁻ ions ejected from the target surface are observed. Using the calorimetry method, the localized heat flux originating from high-energy O⁻ ions is measured. From absolute evaluation of the heat flux from O⁻ ions, an O⁻ particle flux on the order of 10¹⁸ m⁻² s⁻¹ is evaluated at a distance of 10 cm from the target. The production yield of O⁻ ions on the ITO target per Ar⁺ ion impingement at a kinetic energy of 244 eV is estimated to be 3.3 × 10⁻³ as a minimum value.
Zeng, Chuan; Giantsoudi, Drosoula; Grassberger, Clemens; Goldberg, Saveli; Niemierko, Andrzej; Paganetti, Harald; Efstathiou, Jason A.; Trofimov, Alexei
2013-01-01
Purpose: Biological effect of radiation can be enhanced with hypofractionation, localized dose escalation, and, in particle therapy, with optimized distribution of linear energy transfer (LET). The authors describe a method to construct inhomogeneous fractional dose (IFD) distributions, and evaluate the potential gain in the therapeutic effect from their delivery in proton therapy delivered by pencil beam scanning. Methods: For 13 cases of prostate cancer, the authors considered hypofractionated courses of 60 Gy delivered in 20 fractions. (All doses denoted in Gy include the proton's mean relative biological effectiveness (RBE) of 1.1.) Two types of plans were optimized using two opposed lateral beams to deliver a uniform dose of 3 Gy per fraction to the target by scanning: (1) in conventional full-target plans (FTP), each beam irradiated the entire gland, (2) in split-target plans (STP), beams irradiated only the respective proximal hemispheres (prostate split sagittally). Inverse planning yielded intensity maps, in which discrete position control points of the scanned beam (spots) were assigned optimized intensity values. FTP plans preferentially required a higher intensity of spots in the distal part of the target, while STP, by design, employed proximal spots. To evaluate the utility of IFD delivery, IFD plans were generated by rearranging the spot intensities from FTP or STP intensity maps, separately as well as combined using a variety of mixing weights. IFD courses were designed so that, in alternating fractions, one of the hemispheres of the prostate would receive a dose boost and the other receive a lower dose, while the total physical dose from the IFD course was roughly uniform across the prostate. IFD plans were normalized so that the equivalent uniform dose (EUD) of rectum and bladder did not increase, compared to the baseline FTP plan, which irradiated the prostate uniformly in every fraction. An EUD-based model was then applied to estimate tumor control probability (TCP) and normal tissue complication probability (NTCP). To assess potential local RBE variations, LET distributions were calculated with Monte Carlo, and compared for different plans. The results were assessed in terms of their sensitivity to uncertainties in model parameters and delivery. Results: IFD courses included equal number of fractions boosting either hemisphere, thus, the combined physical dose was close to uniform throughout the prostate. However, for the entire course, the prostate EUD in IFD was higher than in conventional FTP by up to 14%, corresponding to the estimated increase in TCP to 96% from 88%. The extent of gain depended on the mixing factor, i.e., relative weights used to combine FTP and STP spot weights. Increased weighting of STP typically yielded a higher target EUD, but also led to increased sensitivity of dose to variations in the proton's range. Rectal and bladder EUD were same or lower (per normalization), and the NTCP for both remained below 1%. The LET distributions in IFD also depended strongly on the mixing weights: plans using higher weight of STP spots yielded higher LET, indicating a potentially higher local RBE. Conclusions: In proton therapy delivered by pencil beam scanning, improved therapeutic outcome can potentially be expected with delivery of IFD distributions, while administering the prescribed quasi-uniform dose to the target over the entire course. The biological effectiveness of IFD may be further enhanced by optimizing the LET distributions. 
IFD distributions are characterized by a dose gradient located in proximity of the prostate's midplane, thus, the fidelity of delivery would depend crucially on the precision with which the proton range could be controlled. PMID:23635256
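The EUD-based comparison above rests on the generalized equivalent uniform dose, EUD = (mean of d_i^a)^(1/a). The sketch below simply evaluates that formula for toy voxel doses; the volume parameters a and the dose values are placeholders, not the study's model parameters.

```python
import numpy as np

def generalized_eud(doses, a):
    """gEUD = (mean of d_i^a)^(1/a)."""
    doses = np.asarray(doses, dtype=float)
    return np.mean(doses ** a) ** (1.0 / a)

# Toy voxel doses (Gy): a uniform distribution and one with equal-mean hot/cold halves.
uniform = np.full(1000, 3.0)
split = np.concatenate([np.full(500, 3.6), np.full(500, 2.4)])

a_target = -10.0       # hypothetical tumor parameter: negative 'a' penalizes cold spots
a_oar = 8.0            # hypothetical serial-organ parameter: large 'a' penalizes hot spots

for name, d in [("uniform", uniform), ("split", split)]:
    print(f"{name}: mean={d.mean():.2f}  "
          f"gEUD(target a={a_target:g})={generalized_eud(d, a_target):.2f}  "
          f"gEUD(OAR a={a_oar:g})={generalized_eud(d, a_oar):.2f}")
```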
Zeng, Chuan; Giantsoudi, Drosoula; Grassberger, Clemens; Goldberg, Saveli; Niemierko, Andrzej; Paganetti, Harald; Efstathiou, Jason A; Trofimov, Alexei
2013-05-01
Biological effect of radiation can be enhanced with hypofractionation, localized dose escalation, and, in particle therapy, with optimized distribution of linear energy transfer (LET). The authors describe a method to construct inhomogeneous fractional dose (IFD) distributions, and evaluate the potential gain in the therapeutic effect from their delivery in proton therapy delivered by pencil beam scanning. For 13 cases of prostate cancer, the authors considered hypofractionated courses of 60 Gy delivered in 20 fractions. (All doses denoted in Gy include the proton's mean relative biological effectiveness (RBE) of 1.1.) Two types of plans were optimized using two opposed lateral beams to deliver a uniform dose of 3 Gy per fraction to the target by scanning: (1) in conventional full-target plans (FTP), each beam irradiated the entire gland, (2) in split-target plans (STP), beams irradiated only the respective proximal hemispheres (prostate split sagittally). Inverse planning yielded intensity maps, in which discrete position control points of the scanned beam (spots) were assigned optimized intensity values. FTP plans preferentially required a higher intensity of spots in the distal part of the target, while STP, by design, employed proximal spots. To evaluate the utility of IFD delivery, IFD plans were generated by rearranging the spot intensities from FTP or STP intensity maps, separately as well as combined using a variety of mixing weights. IFD courses were designed so that, in alternating fractions, one of the hemispheres of the prostate would receive a dose boost and the other receive a lower dose, while the total physical dose from the IFD course was roughly uniform across the prostate. IFD plans were normalized so that the equivalent uniform dose (EUD) of rectum and bladder did not increase, compared to the baseline FTP plan, which irradiated the prostate uniformly in every fraction. An EUD-based model was then applied to estimate tumor control probability (TCP) and normal tissue complication probability (NTCP). To assess potential local RBE variations, LET distributions were calculated with Monte Carlo, and compared for different plans. The results were assessed in terms of their sensitivity to uncertainties in model parameters and delivery. IFD courses included equal number of fractions boosting either hemisphere, thus, the combined physical dose was close to uniform throughout the prostate. However, for the entire course, the prostate EUD in IFD was higher than in conventional FTP by up to 14%, corresponding to the estimated increase in TCP to 96% from 88%. The extent of gain depended on the mixing factor, i.e., relative weights used to combine FTP and STP spot weights. Increased weighting of STP typically yielded a higher target EUD, but also led to increased sensitivity of dose to variations in the proton's range. Rectal and bladder EUD were same or lower (per normalization), and the NTCP for both remained below 1%. The LET distributions in IFD also depended strongly on the mixing weights: plans using higher weight of STP spots yielded higher LET, indicating a potentially higher local RBE. In proton therapy delivered by pencil beam scanning, improved therapeutic outcome can potentially be expected with delivery of IFD distributions, while administering the prescribed quasi-uniform dose to the target over the entire course. The biological effectiveness of IFD may be further enhanced by optimizing the LET distributions. 
IFD distributions are characterized by a dose gradient located in proximity of the prostate's midplane, thus, the fidelity of delivery would depend crucially on the precision with which the proton range could be controlled.
Agent Collaborative Target Localization and Classification in Wireless Sensor Networks
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of heterogeneous agent architecture for WSN in this paper. The proposed agent architecture views WSN as multi-agent systems and mobile agents are employed to reduce in-network communication. According to the architecture, an energy based acoustic localization algorithm is proposed. In localization, estimate of target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show the proposed agent architecture remarkably facilitates WSN designs and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
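A minimal sketch of energy-based acoustic localization via steepest descent follows. The inverse-square energy-decay model, sensor layout, fixed step size, and gradient-norm termination rule are assumptions standing in for the paper's adaptive termination condition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sensor positions (m) and a hidden acoustic source with emitted energy S.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 12.0]])
target_true = np.array([6.0, 4.0])
S = 100.0

def model_energy(pos):
    d2 = np.sum((sensors - pos) ** 2, axis=1) + 1e-6
    return S / d2                                   # assumed inverse-square decay

measured = model_energy(target_true) * (1 + 0.05 * rng.normal(size=len(sensors)))

def loss_and_grad(pos):
    d2 = np.sum((sensors - pos) ** 2, axis=1) + 1e-6
    resid = S / d2 - measured
    # grad of sum(resid^2): 2*resid * d(S/d2)/dpos, with d(S/d2)/dpos = 2S(s_i - pos)/d2^2
    grad = np.sum((4 * S * resid / d2 ** 2)[:, None] * (sensors - pos), axis=0)
    return np.sum(resid ** 2), grad

pos = np.array([5.0, 5.0])                          # initial guess (sensor-field centroid)
step = 0.01
for _ in range(5000):
    _, grad = loss_and_grad(pos)
    if np.linalg.norm(grad) < 1e-6:                 # simple termination condition
        break
    pos = pos - step * grad

print("estimated source position:", np.round(pos, 2), " true:", target_true)
```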
Modelling the geographical distribution of soil-transmitted helminth infections in Bolivia.
Chammartin, Frédérique; Scholte, Ronaldo G C; Malone, John B; Bavia, Mara E; Nieto, Prixia; Utzinger, Jürg; Vounatsou, Penelope
2013-05-25
The prevalence of infection with the three common soil-transmitted helminths (i.e. Ascaris lumbricoides, Trichuris trichiura, and hookworm) in Bolivia is among the highest in Latin America. However, the spatial distribution and burden of soil-transmitted helminthiasis are poorly documented. We analysed historical survey data using Bayesian geostatistical models to identify determinants of the distribution of soil-transmitted helminth infections, predict the geographical distribution of infection risk, and assess treatment needs and costs in the frame of preventive chemotherapy. Rigorous geostatistical variable selection identified the most important predictors of A. lumbricoides, T. trichiura, and hookworm transmission. Results show that precipitation during the wettest quarter above 400 mm favours the distribution of A. lumbricoides. Altitude has a negative effect on T. trichiura. Hookworm is sensitive to temperature during the coldest month. We estimate that 38.0%, 19.3%, and 11.4% of the Bolivian population is infected with A. lumbricoides, T. trichiura, and hookworm, respectively. Assuming independence of the three infections, 48.4% of the population is infected with any soil-transmitted helminth. Empirical-based estimates, according to treatment recommendations by the World Health Organization, suggest a total of 2.9 million annualised treatments for the control of soil-transmitted helminthiasis in Bolivia. We provide estimates of soil-transmitted helminth infections in Bolivia based on high-resolution spatial prediction and an innovative variable selection approach. However, the scarcity of the data suggests that a national survey is required for more accurate mapping that will govern spatial targeting of soil-transmitted helminthiasis control.
Counting Raindrops and the Distribution of Intervals Between Them.
NASA Astrophysics Data System (ADS)
Van De Giesen, N.; Ten Veldhuis, M. C.; Hut, R.; Pape, J. J.
2017-12-01
Drop size distributions are often assumed to follow a generalized gamma function, characterized by one parameter, Λ [1]. In principle, this Λ can be estimated by measuring the arrival rate of raindrops. The arrival rate should follow a Poisson distribution. By measuring the distribution of the time intervals between drops arriving at a given surface area, one should be able to estimate not only the arrival rate but also the robustness of the underlying steady-state assumption. It is important to note that many rainfall radar systems also assume fixed drop size distributions, and associated arrival rates, to derive rainfall rates. By testing these relationships with a simple device, we will be able to improve both land-based and space-based radar rainfall estimates. Here, an open-hardware sensor design is presented, consisting of a 3D printed housing for a piezoelectric element, some simple electronics, and an Arduino. The target audience for this device is citizen scientists who want to contribute to collecting rainfall information beyond the standard rain gauge. The core of the sensor is a simple piezo-buzzer, as found in many devices such as watches and fire alarms. When a raindrop falls on a piezo-buzzer, a small voltage is generated, which can be used to register the drop's arrival time. By registering the intervals between raindrops, the associated Poisson distribution can be estimated. In addition to the hardware, we will present the first results of a measuring campaign in Myanmar that will have run from August to October 2017. All design files and descriptions are available through GitHub: https://github.com/nvandegiesen/Intervalometer. This research is partially supported through the TWIGA project, funded by the European Commission's H2020 program under call SC5-18-2017 'Novel in-situ observation systems'. Reference [1]: Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.
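A sketch of the interval analysis: if drop arrivals are Poisson with rate λ, the intervals between detections are exponential and λ can be estimated as the reciprocal of the mean interval. The detection timestamps below are simulated, not piezo data, and the goodness-of-fit check is one simple way to probe the steady-state assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_rate = 12.0                                           # hypothetical drops per second
intervals = rng.exponential(1.0 / true_rate, size=2000)    # simulated inter-drop times

# Maximum-likelihood estimate of the arrival rate is 1 / mean interval.
rate_hat = 1.0 / intervals.mean()

# A Kolmogorov-Smirnov test against the fitted exponential checks the
# steady-state Poisson assumption underlying the interval analysis.
ks_stat, p_value = stats.kstest(intervals, "expon", args=(0, 1.0 / rate_hat))

print(f"estimated arrival rate: {rate_hat:.2f} drops/s")
print(f"KS test vs exponential: D={ks_stat:.3f}, p={p_value:.2f}")
```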
Foreground effect on the J-factor estimation of ultra-faint dwarf spheroidal galaxies
NASA Astrophysics Data System (ADS)
Ichikawa, Koji; Horigome, Shun-ichi; Ishigaki, Miho N.; Matsumoto, Shigeki; Ibe, Masahiro; Sugai, Hajime; Hayashi, Kohei
2018-05-01
Dwarf spheroidal galaxies (dSphs) are promising targets for the gamma-ray dark matter (DM) search. In particular, DM annihilation signal is expected to be strong in some of the recently discovered nearby ultra-faint dSphs, which potentially give stringent constraints on the O(1) TeV WIMP DM. However, various non-negligible systematic uncertainties complicate the estimation of the astrophysical factors relevant for the DM search in these objects. Among them, the effects of foreground stars particularly attract attention because the contamination is unavoidable even for the future kinematical survey. In this article, we assess the effects of the foreground contamination on the astrophysical J-factor estimation by generating mock samples of stars in the four ultra-faint dSphs and using a model of future spectrographs. We investigate various data cuts to optimize the quality of the data and apply a likelihood analysis which takes member and foreground stellar distributions into account. We show that the foreground star contaminations in the signal region (the region of interest) and their statistical uncertainty can be estimated by interpolating the foreground star distribution in the control region where the foreground stars dominate the member stars. Such regions can be secured at future spectroscopic observations utilizing a multiple object spectrograph with a large field of view; e.g. the Prime Focus Spectrograph mounted on Subaru Telescope. The above estimation has several advantages: The data-driven estimation of the contamination makes the analysis of the astrophysical factor stable against the complicated foreground distribution. Besides, foreground contamination effect is considered in the likelihood analysis.
Dose-volume histogram prediction using density estimation.
Skarpman Munter, Johanna; Sjölund, Jens
2015-09-07
Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
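A minimal sketch of the density-estimation idea using the single predictive feature mentioned above (signed distance to the target boundary): estimate the joint density of (distance, dose) from training voxels, condition on distance, marginalize over the new patient's distance distribution, and integrate to a DVH. Everything below is synthetic toy data, not clinical plans.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Training data: (distance to target, dose) pairs pooled over previous plans (toy model).
dist_train = rng.uniform(0, 40, size=5000)                           # mm outside target
dose_train = 60 * np.exp(-dist_train / 12) + rng.normal(0, 2, 5000)  # toy dose fall-off
joint = gaussian_kde(np.vstack([dist_train, dose_train]))

dose_grid = np.linspace(0, 70, 141)

def conditional_dose_density(d):
    """p(dose | distance = d), by evaluating and renormalizing the joint KDE."""
    pts = np.vstack([np.full_like(dose_grid, d), dose_grid])
    p = joint(pts)
    return p / np.trapz(p, dose_grid)

# New patient: only the organ's distance distribution is known (toy voxels).
dist_new = rng.uniform(5, 30, size=2000)

# Marginalize the conditional density over the new patient's distance distribution.
quantile_points = np.quantile(dist_new, np.linspace(0.01, 0.99, 50))
mix = np.mean([conditional_dose_density(d) for d in quantile_points], axis=0)

# Integrate to a cumulative DVH: fraction of volume receiving at least each dose level.
dvh = 1 - np.cumsum(mix) * (dose_grid[1] - dose_grid[0])
for level in (10, 20, 30, 40):
    idx = np.searchsorted(dose_grid, level)
    print(f"V{level}Gy ≈ {100 * dvh[idx]:.1f}%")
```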
Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs
NASA Astrophysics Data System (ADS)
Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.
2016-07-01
Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images when estimating the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced, etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and the number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
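The images-versus-points trade-off can be explored with a small simulation: let per-image cover vary around the transect mean to mimic patchiness, score random points in each image, and compare the spread of the resulting percent-cover estimates. The patch model and parameter values are illustrative, not the paper's simulated biotas.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_transect(n_images, true_cover=0.15, patchiness=4.0):
    """Per-image cover varies around the transect mean (beta model for patchiness)."""
    a = true_cover * patchiness
    b = (1 - true_cover) * patchiness
    return rng.beta(a, b, size=n_images)

def estimate_cover(n_images, n_points):
    image_cover = simulate_transect(n_images)
    hits = rng.binomial(n_points, image_cover)     # random points scored per image
    return hits.sum() / (n_images * n_points)

for n_images, n_points in [(20, 25), (20, 100), (80, 25), (80, 100)]:
    est = [estimate_cover(n_images, n_points) for _ in range(2000)]
    print(f"images={n_images:3d} points={n_points:3d}  "
          f"mean={np.mean(est):.3f}  sd={np.std(est):.4f}")
```

With between-image variability dominating, the standard deviation shrinks more by adding images than by adding points per image, consistent with the finding summarized above.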
Boundary methods for mode estimation
NASA Astrophysics Data System (ADS)
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computational cost, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
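For comparison with the mixture-of-Gaussians baseline described above, the sketch below selects the number of modes by fitting GMMs of increasing order and choosing the order that minimizes AIC; the data are synthetic with three well-separated modes.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic one-dimensional data with three modes.
data = np.concatenate([
    rng.normal(-5.0, 1.0, 300),
    rng.normal(0.0, 0.8, 300),
    rng.normal(6.0, 1.2, 300),
]).reshape(-1, 1)

aics = []
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(data)
    aics.append(gmm.aic(data))
    print(f"k={k}  AIC={aics[-1]:.1f}")

print("estimated number of modes:", int(np.argmin(aics)) + 1)
```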
NASA Technical Reports Server (NTRS)
Edwards, David L.; Cooke, William; Scruggs, Rob; Moser, Danielle E.
2008-01-01
The National Aeronautics and Space Administration (NASA) is progressing toward long-term lunar habitation. Critical to the design of a lunar habitat is an understanding of the lunar surface environment; of specific importance is the primary meteoroid and subsequent ejecta environment. The document NASA SP-8013 was developed for the Apollo program and remains the latest definition of the ejecta environment. There is concern that NASA SP-8013 may overestimate the lunar ejecta environment. NASA's Meteoroid Environment Office (MEO) has initiated several tasks to improve the accuracy of our understanding of the lunar surface ejecta environment. This paper reports the results of experiments on projectile impact into powdered pumice and unconsolidated JSC-1A Lunar Mare Regolith simulant (JSC-1A) targets. The Ames Vertical Gun Range (AVGR) was used to accelerate projectiles to velocities in excess of 5 km/s and impact the targets at normal incidence. The ejected particles were detected by thin aluminum foil targets placed around the impact site, and angular distributions were determined for the ejecta. Comparison of the ejecta angular distributions with previous work will be presented. A simple technique to characterize the ejected particles was formulated, and improvements to this technique will be discussed for implementation in future tests.
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing that represents a radionuclide as a decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval-based discriminator on the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS either is identified as the target radionuclide or is not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
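The decision stage described above is a sequential likelihood ratio test. The sketch below is a generic Wald SPRT over a stream of per-event log-likelihood ratios; it is not the patented physics-based processing, and the thresholds, error rates, and input values are illustrative assumptions.

```python
import math

def sprt(log_lik_ratios, alpha=0.01, beta=0.01):
    """Generic sequential likelihood ratio test (Wald SPRT).

    log_lik_ratios : iterable of per-event log-likelihood ratios,
                     log p(event | target radionuclide) - log p(event | background).
    Returns ('target', n) or ('not target', n) once a threshold is crossed,
    or ('undecided', n) if the event stream ends first.
    """
    upper = math.log((1 - beta) / alpha)      # accept "target"
    lower = math.log(beta / (1 - alpha))      # accept "not target"
    s, n = 0.0, 0
    for llr in log_lik_ratios:
        s += llr
        n += 1
        if s >= upper:
            return "target", n
        if s <= lower:
            return "not target", n
    return "undecided", n

# Illustrative stream of per-photon log-likelihood ratios.
print(sprt([0.4, 0.3, 0.5, 0.6, 0.2, 0.7, 0.5, 0.8, 0.4, 0.6]))
```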
Estimation of treatment effect in a subpopulation: An empirical Bayes approach.
Shen, Changyu; Li, Xiaochun; Jeong, Jaesik
2016-01-01
It is well recognized that the benefit of a medical intervention may not be distributed evenly in the target population due to patient heterogeneity, and conclusions based on conventional randomized clinical trials may not apply to every person. Given the increasing cost of randomized trials and difficulties in recruiting patients, there is a strong need to develop analytical approaches to estimate treatment effect in subpopulations. In particular, due to limited sample size for subpopulations and the need for multiple comparisons, standard analysis tends to yield wide confidence intervals of the treatment effect that are often noninformative. We propose an empirical Bayes approach to combine both information embedded in a target subpopulation and information from other subjects to construct confidence intervals of the treatment effect. The method is appealing in its simplicity and tangibility in characterizing the uncertainty about the true treatment effect. Simulation studies and a real data analysis are presented.
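A minimal illustration of the empirical Bayes idea, combining the target subpopulation's own estimate with information from the other subgroups, is sketched below. The precision-weighted shrinkage form, the normal-normal assumptions, and the hypothetical effect values are my own stand-ins, not the authors' specific estimator.

```python
import numpy as np

def eb_shrinkage(sub_effect, sub_se, other_effects):
    """Precision-weighted empirical Bayes estimate of a subgroup treatment effect.

    sub_effect, sub_se : estimate and standard error from the target subpopulation.
    other_effects      : effect estimates from the remaining subgroups, used to
                         form an empirical prior (mean and between-group variance).
    Returns the shrunken estimate and an approximate posterior standard error.
    """
    prior_mean = np.mean(other_effects)
    tau2 = max(np.var(other_effects, ddof=1), 1e-12)   # between-subgroup variance
    w = tau2 / (tau2 + sub_se**2)                       # weight on the subgroup's own data
    post_mean = w * sub_effect + (1 - w) * prior_mean
    post_se = np.sqrt(w) * sub_se                       # normal-normal posterior SD
    return post_mean, post_se

# Hypothetical noisy subgroup estimate shrunk toward the other subgroups.
est, se = eb_shrinkage(sub_effect=1.8, sub_se=0.9,
                       other_effects=[0.4, 0.7, 0.5, 0.9])
print(round(est, 2), round(se, 2))
```

The shrinkage narrows the interval relative to the subgroup-only analysis, which is the practical motivation stated in the abstract.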
Stimulus-specific variability in color working memory with delayed estimation.
Bae, Gi-Yeul; Olkkonen, Maria; Allred, Sarah R; Wilson, Colin; Flombaum, Jonathan I
2014-04-08
Working memory for color has been the central focus in an ongoing debate concerning the structure and limits of visual working memory. Within this area, the delayed estimation task has played a key role. An implicit assumption in color working memory research generally, and delayed estimation in particular, is that the fidelity of memory does not depend on color value (and, relatedly, that experimental colors have been sampled homogeneously with respect to discriminability). This assumption is reflected in the common practice of collapsing across trials with different target colors when estimating memory precision and other model parameters. Here we investigated whether or not this assumption is secure. To do so, we conducted delayed estimation experiments following standard practice with a memory load of one. We discovered that different target colors evoked response distributions that differed widely in dispersion and that these stimulus-specific response properties were correlated across observers. Subsequent experiments demonstrated that stimulus-specific responses persist under higher memory loads and that at least part of the specificity arises in perception and is eventually propagated to working memory. Post hoc stimulus measurement revealed that rendered stimuli differed from nominal stimuli in both chromaticity and luminance. We discuss the implications of these deviations for both our results and those from other working memory studies.
Tajiri, Shinya; Tashiro, Mutsumi; Mizukami, Tomohiro; Tsukishima, Chihiro; Torikoshi, Masami; Kanai, Tatsuaki
2017-11-01
Carbon-ion therapy by layer-stacking irradiation for static targets has been practised in clinical treatments. In order to apply this technique to a moving target, disturbances of carbon-ion dose distributions due to respiratory motion have been studied based on measurements using a respiratory motion phantom, and the margin estimation given by √(internal margin² + setup margin²) has been assessed. We assessed the volume in which the variation in the ratio of the dose for a target moving due to respiration relative to the dose for a static target was within 5%. The margins were insufficient for use with layer-stacking irradiation of a moving target, and an additional margin was required. The lateral movement of a target converts to a range variation, as the thickness of the range compensator changes with the movement of the target. Although the additional margin changes according to the shape of the ridge filter, dose uniformity of 5% can be achieved for a spherical target 93 mm in diameter when the upward range variation is limited to 5 mm and an additional margin of 2.5 mm is applied in the case of our ridge filter. Dose uniformity in a clinical target largely depends on the shape of the mini-peak as well as on the bolus shape. We have shown the relationship between range variation and dose uniformity. In actual therapy, the upper limit of target movement should be considered by assessing the bolus shape. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena
2010-01-31
In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of the drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact the safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which a positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug-naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false positive rates) compared to normal or log-normal methods and more precise (smaller standard errors of cut point estimators) compared with the nonparametric percentile method. Under a gamma regime, normal-theory-based methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%), with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates, with negative biases as large as -2.3% and absolute bias decreasing with the shape parameter. These results were consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase. Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent (confirmatory) assay. On the other hand, deflated false positive rates in the case of screening immunogenicity assays will not meet the minimum 5% false positive target as proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
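The cut-point calculation described above can be sketched as follows: fit a 3-parameter gamma to drug-naïve screening responses and take the 95th percentile as the cut point targeting a 5% false-positive rate. The SciPy-based code and the simulated response values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import stats

# Hypothetical drug-naive screening responses: unimodal and positively skewed.
responses = stats.gamma.rvs(a=3.0, loc=0.05, scale=0.02, size=300, random_state=1)

# 3-parameter gamma fit (shape, location, scale) and the cut point that
# targets a 5% false-positive rate.
shape, loc, scale = stats.gamma.fit(responses)
gamma_cut = stats.gamma.ppf(0.95, shape, loc=loc, scale=scale)

# Normal-theory cut point for comparison (mean + 1.645 SD).
normal_cut = responses.mean() + 1.645 * responses.std(ddof=1)

print(f"gamma cut point:  {gamma_cut:.4f}")
print(f"normal cut point: {normal_cut:.4f}")
```

Under skewed data the two cut points differ, which is the source of the inflated or deflated false-positive rates the abstract quantifies.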
Severgnini, Mara; de Denaro, Mario; Bortul, Marina; Vidali, Cristiana; Beorchia, Aulo
2014-01-08
Intraoperative electron radiation therapy (IOERT) cannot usually benefit, as conventional external radiotherapy does, from computed tomography-based treatment planning software or from common dose verification procedures. For this reason, in vivo film dosimetry (IVFD) proves to be an effective methodology to evaluate the actual radiation dose delivered to the target. A practical method for IVFD during breast IOERT was carried out to improve information on the dose actually delivered to the tumor target and on the alignment of the shielding disk with respect to the electron beam. Two EBT3 GAFCHROMIC films were positioned on the two sides of the shielding disk in order to obtain the dose maps at the target and beyond the disk. Moreover, postprocessing analysis of the dose distribution measured on the films provides a quantitative estimate of the misalignment between the collimator and the disk. EBT3 radiochromic films have been demonstrated to be suitable dosimeters for IVFD due to their linear dose-optical density response in a narrow range around the prescribed dose, as well as their capability to be fixed to the shielding disk without giving any distortion in the dose distribution. Off-line analysis of the radiochromic films allowed absolute dose measurements, and this is indeed a very important verification of the correct exposure of the target organ, as well as an estimate of the dose to the healthy tissue underlying the shielding. These dose maps allow surgeons and radiation oncologists to take advantage of qualitative and quantitative feedback for setting more accurate treatment strategies and further optimized procedures. The proper alignment using elastic bands improved the absolute dose accuracy and the collimator-disk alignment by more than 50%.
de Denaro, Mario; Bortul, Marina; Vidali, Cristiana; Beorchia, Aulo
2014-01-01
Intraoperative electron radiation therapy (IOERT) cannot usually benefit, as conventional external radiotherapy does, from computed tomography-based treatment planning software or from common dose verification procedures. For this reason, in vivo film dosimetry (IVFD) proves to be an effective methodology to evaluate the actual radiation dose delivered to the target. A practical method for IVFD during breast IOERT was carried out to improve information on the dose actually delivered to the tumor target and on the alignment of the shielding disk with respect to the electron beam. Two EBT3 GAFCHROMIC films were positioned on the two sides of the shielding disk in order to obtain the dose maps at the target and beyond the disk. Moreover, postprocessing analysis of the dose distribution measured on the films provides a quantitative estimate of the misalignment between the collimator and the disk. EBT3 radiochromic films have been demonstrated to be suitable dosimeters for IVFD due to their linear dose-optical density response in a narrow range around the prescribed dose, as well as their capability to be fixed to the shielding disk without giving any distortion in the dose distribution. Off-line analysis of the radiochromic films allowed absolute dose measurements, and this is indeed a very important verification of the correct exposure of the target organ, as well as an estimate of the dose to the healthy tissue underlying the shielding. These dose maps allow surgeons and radiation oncologists to take advantage of qualitative and quantitative feedback for setting more accurate treatment strategies and further optimized procedures. The proper alignment using elastic bands improved the absolute dose accuracy and the collimator-disk alignment by more than 50%. PACS number: 87.55.kh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Fenton; Johnson, Gary E.; Weiland, Mark A.
2010-07-31
This report presents the results of an evaluation of overwintering summer steelhead (Oncorhynchus mykiss) fallback and early out-migrating steelhead kelt downstream passage at The Dalles Dam (TDA) sluiceway and turbines during fall/winter 2009 through early spring 2010. The study was conducted by the Pacific Northwest National Laboratory (PNNL) for the U.S. Army Corps of Engineers, Portland District (USACE). The goal of this study was to characterize adult steelhead spatial and temporal distributions and passage rates at the sluiceway and turbines for fisheries managers and engineers to use in decision-making relative to sluiceway operations. The study was from November 1, 2009 to April 10, 2010. The study was divided into three study periods: Period 1, November 1 - December 15, 2009 for a fall/winter sluiceway and turbine study; Period 2, December 16, 2009 - February 28, 2010 for a turbine-only study; Period 3, March 1 - April 10, 2010 for a spring sluiceway and turbine study. Sluiceway operations were scheduled to begin on March 1 for this study; however, because of an oil spill cleanup near the sluice outfall, sluiceway operations were delayed until March 8, 2010; therefore the spring study period did not commence until March 8. The study objectives were to (1) estimate the number and distribution of overwintering summer steelhead fallbacks and kelt-sized acoustic targets passing into the sluiceway and turbines at TDA between November 1 and December 15, 2009 and March 1 and April 10, 2010, and (2) estimate the numbers and distribution of adult steelhead and kelt-sized targets passing into turbine units between December 16, 2009 and February 28, 2010. We obtained fish passage data using fixed-location hydroacoustics. For Period 1, overwintering summer steelhead fallback occurred throughout the 45-day study period. A total of 879 ± 165 (95% CI) steelhead targets passed through the powerhouse and sluiceway during November 1 to December 15, 2009. Ninety-two percent of these fish passed through the sluiceway. Run timing peaked in early December, but fish continued to pass the dam until the end of the study. Horizontal distribution data indicated that Sluice 1 is the preferred route for these fish during fallback through the dam. Diel distribution for steelhead was variable with no apparent distinct patterns. For Period 2, adult steelhead passage occurred on January 14 and 31 and February 2, 22, and 24. A total of 62 ± 40 (95% CI) steelhead targets passed through the powerhouse intakes during December 16, 2009 to March 7, 2010. Horizontal distribution data indicated turbine unit 18 passed the majority of fish. Fish passage occurred during morning periods. Passage did not occur during afternoon or nighttime. For Period 3, the early spring study period, overwintering summer steelhead and early out-migrating steelhead kelt downstream passage occurred throughout the 34-day study period. A total of 1,985 ± 234 (95% CI) kelt-size targets were estimated to have passed through the powerhouse sluiceway. Ninety-nine percent of these fish passed through the sluiceway. Run timing peaked in late March and again in early April. Horizontal distribution indicated that Sluice 1 is the preferred route for these adult salmonids as they migrate downstream through the dam. Diel distribution for steelhead was variable with no apparent distinct patterns.
The results of this study strongly suggest that operating the TDA sluiceway for steelhead passage (fallbacks and kelts) during the late fall, winter, and early spring months will provide an optimal, non-turbine route for these fishes to pass the dam.
Estimating the Stoichiometry of HIV Neutralization
Magnus, Carsten; Regoes, Roland R.
2010-01-01
HIV-1 virions infect target cells by first establishing contact between envelope glycoprotein trimers on the virion's surface and CD4 receptors on a target cell, recruiting co-receptors, fusing with the cell membrane and finally releasing the genetic material into the target cell. Specific experimental setups allow the study of the number of trimer-receptor interactions needed for infection, i.e., the stoichiometry of entry, and also the number of antibodies needed to prevent one trimer from engaging successfully in the entry process, i.e., the stoichiometry of (trimer) neutralization. Mathematical models are required to infer the stoichiometric parameters from these experimental data. Recently, we developed mathematical models for the estimation of the stoichiometry of entry [1]. In this article, we show how our models can be extended to investigate the stoichiometry of trimer neutralization. We study how various biological parameters affect the estimate of the stoichiometry of neutralization. We find that the distribution of trimer numbers, which is also an important determinant of the stoichiometry of entry, influences the estimated value of the stoichiometry of neutralization. In contrast, other parameters, which characterize the experimental system, diminish the information we can extract from the data about the stoichiometry of neutralization, and thus reduce our confidence in the estimate. We illustrate the use of our models by re-analyzing previously published data on neutralization sensitivity [2], which contain measurements of the neutralization sensitivity of viruses with different envelope proteins to antibodies with various specificities. Our mathematical framework represents the formal basis for the estimation of the stoichiometry of neutralization. Together with the stoichiometry of entry, the stoichiometry of trimer neutralization will allow one to calculate how many antibodies are required to neutralize a virion or even an entire population of virions. PMID:20333245
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials; this is expected, because an overestimation of the variance, and thus of the sample size, is generally required for the re-estimation procedure to meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction in the model output mean and variance by operating on the variances of model inputs. Unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators; thus the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure to achieve a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
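To make the variance ratio function concrete, the sketch below estimates, by brute-force re-simulation, how the output variance of a toy model responds when the variance of one input is reduced. The paper's estimators reuse a single sample set; this re-simulation shortcut, the toy model, and the input distributions are illustrative assumptions rather than the article's method.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x1, x2, x3):
    """Hypothetical nonlinear model standing in for the engineering example."""
    return x1**2 + x1 * np.sin(x2) + 0.5 * x3

def variance_ratio(shrink, n=200_000):
    """Ratio of output variance after/before shrinking Var(x1) by 'shrink'.

    Brute-force Monte Carlo: re-simulate the model for each candidate
    input variance and compare output variances.
    """
    base = model(rng.normal(0, 1, n), rng.normal(0, 1, n), rng.normal(0, 1, n))
    reduced = model(rng.normal(0, np.sqrt(1 - shrink), n),
                    rng.normal(0, 1, n), rng.normal(0, 1, n))
    return reduced.var() / base.var()

# How much must Var(x1) be reduced to reach a targeted output-variance reduction?
for s in (0.2, 0.5, 0.8):
    print(f"shrink Var(x1) by {s:.0%}: output variance ratio = {variance_ratio(s):.2f}")
```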
Ponnambalam, L; Samavedham, L; Lee, H R; Ho, C S
2012-05-01
The recent outbreak of H1N1 has provided the scientific community with a sad but timely opportunity to understand the influence of socioeconomic determinants on H1N1 pandemic mortality. To this end, we used data collected from 341 US counties to model H1N1 deaths/1000 using 12 socioeconomic predictors, in order to discover why certain counties reported fewer H1N1 deaths than others. These predictors were used to build a decision tree, which was then used to predict H1N1 mortality for the whole of the USA. Our estimate of 7667 H1N1 deaths is in accord with the lower bound of the CDC estimate of 8870 deaths. In addition to the H1N1 death estimates, we list possible counties to be targeted for health-related interventions. The respective state/county authorities can use these results as the basis to target and optimize the distribution of public health resources.
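The modelling step can be sketched with a regression tree on county-level predictors. The code below uses scikit-learn and entirely synthetic data; the predictor matrix, population figures, and tree settings are placeholders, not the study's data or model.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)

# Hypothetical stand-ins for the 12 county-level socioeconomic predictors
# and the observed H1N1 deaths per 1000 in the training counties.
n_counties, n_predictors = 341, 12
X = rng.normal(size=(n_counties, n_predictors))
deaths_per_1000 = np.clip(0.02 + 0.01 * X[:, 0] - 0.008 * X[:, 3]
                          + rng.normal(0, 0.005, n_counties), 0, None)

tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20, random_state=0)
tree.fit(X, deaths_per_1000)

# Predict mortality for a larger set of (here: hypothetical) counties and
# scale by population to get an aggregate death estimate.
X_all = rng.normal(size=(3000, n_predictors))
county_pop = rng.integers(10_000, 1_000_000, size=3000)
estimated_deaths = (tree.predict(X_all) * county_pop / 1000).sum()
print(f"estimated total deaths: {estimated_deaths:,.0f}")
```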
Search strategy in a complex and dynamic environment (the Indian Ocean case)
NASA Astrophysics Data System (ADS)
Loire, Sophie; Arbabi, Hassan; Clary, Patrick; Ivic, Stefan; Crnjaric-Zic, Nelida; Macesic, Senka; Crnkovic, Bojan; Mezic, Igor; UCSB Team; Rijeka Team
2014-11-01
The disappearance of Malaysia Airlines Flight 370 (MH370) in the early morning hours of 8 March 2014 has exposed the disconcerting lack of efficient methods for identifying where to look, and how to look, for missing objects in a complex and dynamic environment. The search area for plane debris is a remote part of the Indian Ocean. Lawnmower-type searches have so far been unsuccessful. Lagrangian kinematics of mesoscale features are visible in hypergraph maps of the Indian Ocean surface currents. Without precise knowledge of the crash site, these maps give an estimate of the time evolution of any initial distribution of plane debris and permit the design of a search strategy. The Dynamic Spectral Multiscale Coverage (DSMC) search algorithm is modified to search for a spatial distribution of targets that evolves with time following the dynamics of ocean surface currents. Trajectories are generated for multiple search agents such that their spatial coverage converges to the target distribution. Central to the DSMC algorithm is a metric of ergodicity.
Ha, Min-Jae
2018-01-01
This study presents a regional oil spill risk assessment and capacities for marine oil spill response in Korea. The risk assessment of oil spills is carried out using both causal factors and environmental/economic factors. The weight of each parameter is calculated using the Analytic Hierarchy Process (AHP). Final regional risk degrees of oil spill are estimated by combining the degree and weight of each parameter. From these estimated risk levels, oil recovery capacities were determined with reference to the recovery target of 7500 kl specified in existing standards. The estimates were deemed feasible, and provided a more balanced distribution of resources than existing capacities set according to current standards. Copyright © 2017 Elsevier Ltd. All rights reserved.
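The AHP weighting step works from a pairwise comparison matrix whose principal eigenvector gives the parameter weights. The sketch below shows that calculation for an illustrative 3x3 matrix; the criteria and judgments are assumptions, not the study's actual comparison matrix.

```python
import numpy as np

# Illustrative pairwise comparison matrix for three hypothetical risk parameters
# (e.g. spill frequency, traffic volume, environmental sensitivity).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP weights: principal right eigenvector, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()

# Consistency ratio check (Saaty's random index is 0.58 for a 3x3 matrix).
lambda_max = eigvals.real.max()
ci = (lambda_max - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("weights:", np.round(weights, 3), " CR:", round(cr, 3))
```

A consistency ratio below about 0.1 is the usual acceptance threshold before the weights are combined with the parameter degrees to form regional risk scores.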
NASA Astrophysics Data System (ADS)
Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.
2011-02-01
Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.
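One of the target variables above, the time-integrated activity coefficient, can be approximated numerically from a handful of tracer measurements. The sketch below uses a trapezoid rule plus a mono-exponential tail; this generic approach, and the hypothetical measurement values, stand in for (and are much simpler than) the nonlinear mixed-effects model used in the paper.

```python
import numpy as np

def time_integrated_activity_coefficient(times_h, uptake_fraction):
    """Time-integrated activity coefficient (hours): trapezoid rule over the
    measured uptake-fraction curve plus an exponential tail beyond the last point.

    uptake_fraction : thyroid activity divided by administered tracer activity.
    """
    area = np.trapz(uptake_fraction, times_h)
    # Tail: assume mono-exponential clearance after the last measurement.
    lam = np.log(uptake_fraction[-2] / uptake_fraction[-1]) / (times_h[-1] - times_h[-2])
    tail = uptake_fraction[-1] / lam if lam > 0 else 0.0
    return area + tail

# Hypothetical tracer measurements: uptake peaks within a day, then clears slowly.
t = np.array([6.0, 24.0, 72.0, 168.0])    # hours after tracer administration
u = np.array([0.35, 0.55, 0.45, 0.30])    # fraction of administered activity
print(round(time_integrated_activity_coefficient(t, u), 1), "hours")
```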
Leslie, Jacqueline; Garba, Amadou; Oliva, Elisa Bosque; Barkire, Arouna; Tinni, Amadou Aboubacar; Djibo, Ali; Mounkaila, Idrissa; Fenwick, Alan
2011-10-01
In 2004 Niger established a large-scale schistosomiasis and soil-transmitted helminths control programme targeting children aged 5-14 years and adults. In two years 4.3 million treatments were delivered in 40 districts using school-based and community distribution. Four districts were surveyed in 2006 to estimate the economic cost per district, per treatment and per schistosomiasis infection averted. The study compares the costs of treatment at start-up and in a subsequent year, identifies the allocation of costs by activity, input and organisation, and assesses the cost of treatment. The cost of delivery provided by teachers is compared to the cost of delivery by community distributors (CDDs). The total economic cost of the programme, including programmatic, national and local government costs and international support in the four study districts over two years, was US$ 456,718, an economic cost per treatment of $0.58. The full economic delivery cost of school-based treatment in 2005/06 was $0.76, and for community distribution it was $0.46. Including only the programme costs, the figures are $0.47 and $0.41, respectively. Differences at sub-district level are more marked. This is partly explained by the fact that a CDD treats 5.8 people for every one treated in school. The range in cost-effectiveness for direct treatments, and for direct and indirect treatments combined, is quantified, and the need to develop and refine such estimates is emphasised. The relative cost-effectiveness of school and community delivery differs by country according to the composition of the population treated, the numbers targeted and treated at school and in the community, and the cost and frequency of training teachers and CDDs. Options analysis of technical and implementation alternatives, including a financial analysis, should form part of the programme design process.
Statistical inference on censored data for targeted clinical trials under enrichment design.
Chen, Chen-Fang; Lin, Jr-Rung; Liu, Jen-Pei
2013-01-01
For traditional clinical trials, inclusion and exclusion criteria are usually based on clinical endpoints; the genetic or genomic variability of the trial participants is not fully utilized in the criteria. After completion of the human genome project, disease targets at the molecular level can be identified and utilized for the treatment of diseases. However, the accuracy of diagnostic devices for identification of such molecular targets is usually not perfect. Some of the patients enrolled in targeted clinical trials with a positive result for the molecular target might not actually have the specific molecular target. As a result, the treatment effect may be underestimated in the patient population that truly has the molecular target. To resolve this issue, under the exponential distribution, we develop inferential procedures for the treatment effects of the targeted drug based on censored endpoints in patients who truly have the molecular targets. Under an enrichment design, we propose using the expectation-maximization algorithm in conjunction with the bootstrap technique to incorporate the inaccuracy of the diagnostic device for detection of the molecular targets into the inference on the treatment effects. A simulation study was conducted to empirically investigate the performance of the proposed methods. Simulation results demonstrate that, under the exponential distribution, the proposed estimator is nearly unbiased with adequate precision, and the confidence interval can provide adequate coverage probability. In addition, the proposed testing procedure can adequately control the size with sufficient power. On the other hand, when the proportional hazard assumption is violated, additional simulation studies show that the type I error rate is not controlled at the nominal level and is an increasing function of the positive predictive value. A numerical example illustrates the proposed procedures. Copyright © 2013 John Wiley & Sons, Ltd.
Statistical Inference for Data Adaptive Target Parameters.
Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J
2016-05-01
Consider observing n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample into V equal-sized subsamples, and use this partitioning to define V splits into an estimation sample (one of the V subsamples) and a corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V sample-specific target parameters. We present an estimator (and corresponding central limit theorem) for this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference in problems that are increasingly being addressed by clever, yet ad hoc, pattern finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules, are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
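The construction above can be illustrated directly: split the sample into V folds, run a parameter-generating algorithm on each complement, and estimate the resulting parameter on the held-out fold. In the sketch below the algorithm (pick the covariate most correlated with the outcome and target its univariate slope) and the toy data are my own illustrative choices, not one of the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(5)

def define_parameter(X, y):
    """Parameter-generating algorithm (a stand-in): pick the covariate most
    correlated with the outcome; the data adaptive parameter is that
    covariate's slope in a univariate regression."""
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return int(np.argmax(corrs))

def estimate_parameter(X, y, j):
    """Estimate the chosen covariate's univariate regression slope."""
    x = X[:, j]
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Toy data: the second covariate drives the outcome.
n, p, V = 500, 6, 5
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 1] + rng.normal(size=n)

folds = np.array_split(rng.permutation(n), V)
estimates = []
for v in range(V):
    est_idx = folds[v]                                                 # estimation sample
    gen_idx = np.concatenate([folds[u] for u in range(V) if u != v])   # parameter-generating sample
    j = define_parameter(X[gen_idx], y[gen_idx])          # define the target on one part ...
    estimates.append(estimate_parameter(X[est_idx], y[est_idx], j))    # ... estimate it on the other

# The sample-split data adaptive target parameter estimate is the average across folds.
print("data adaptive estimate:", round(float(np.mean(estimates)), 3))
```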
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Fenton; Johnson, Gary E.; Weiland, Mark A.
2009-09-01
This report presents the results of an evaluation of overwintering summer steelhead (Oncorhynchus mykiss) fallback and early out-migrating steelhead kelt downstream passage at The Dalles Dam (TDA) sluiceway and turbines during fall/winter 2008 and early spring 2009, respectively. The study was conducted by the Pacific Northwest National Laboratory (PNNL) for the U.S. Army Corps of Engineers, Portland District (USACE). Operating the sluiceway reduces the potential for hydropower production. However, this surface flow outlet may be the optimal non-turbine route for fallbacks in late fall after the sluiceway is typically closed for juvenile fish passage and for overwintering summer steelhead and kelt passage in the early spring before the start of the voluntary spill season. The goal of this study was to characterize adult steelhead spatial and temporal distributions and passage rates at the sluiceway and turbines, and their movements in front of the sluiceway at TDA, to inform fisheries managers' and engineers' decision-making relative to sluiceway operations. The study periods were from November 1 to December 15, 2008 (45 days) and from March 1 to April 9, 2009 (40 days). The study objectives were to 1) estimate the number and distribution of overwintering summer steelhead fallbacks and kelt-sized acoustic targets passing into the sluiceway and turbines at TDA during the two study periods, respectively, and 2) assess the behavior of these fish in front of sluice entrances. We obtained fish passage data using fixed-location hydroacoustics and fish behavior data using acoustic imaging. For the overwintering summer steelhead, fallback occurred throughout the 45-day study period. We estimated that a total of 1790 ± 250 (95% confidence interval) summer steelhead targets passed through the powerhouse intakes and operating sluices during November 1 to December 15, 2008. Ninety-five percent of these fish passed through the sluiceway. Therefore, without the sluiceway as a route through the dam, a number of steelhead may have fallen back through turbines. Run timing peaked in late November, but fish continued to pass the dam until the end of the study. Horizontal distribution data indicated that Sluice 1 is the preferred route for these fish during fallback through the dam. Diel distribution for overwintering steelhead fallbacks was variable with no apparent distinct patterns. Therefore, sluiceway operations should not be based on diel distribution. For the early spring study, overwintering summer steelhead and early out-migrating steelhead kelt downstream passage occurred throughout the 40-day study period. A total of 1766 ± 277 (95% confidence interval) kelt-size targets were estimated to have passed through the powerhouse intakes and operating sluices. Ninety-five percent of these fish passed through the sluiceway. Therefore, as with steelhead fallback, without the sluiceway as a route through the dam a number of overwintering steelhead and kelts may use the turbines for downstream passage before the start of the spill season. Run timing peaked in late March; however, relatively large numbers of kelt-sized targets passed the dam on March 2 and March 6 (162 and 188 fish, respectively). Horizontal distribution indicated that Sluice 1 is the preferred route for these adult salmonids as they migrate downstream through the dam. Again, no clear pattern was seen for diel distribution of overwintering steelhead and early out-migrating kelt passage.
On Algorithms for Generating Computationally Simple Piecewise Linear Classifiers
1989-05-01
suffers. - Waveform classification, e.g. speech recognition, seismic analysis (i.e. discrimination between earthquakes and nuclear explosions), target... assuming Gaussian distributions (B-G); d) Bayes classifier with probability densities estimated with the k-NN method (B-kNN); e) the nearest neighbour... range of classifiers is chosen, including a fast, easily computable and often used classifier (B-G), and reliable and complex classifiers (B-kNN and NNR
Transfer products from the reactions of heavy ions with heavy nuclei. [394 to 1156 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, K.E. III
1979-11-01
Production of nuclides heavier than the target from 86Kr- and 136Xe-induced reactions with 181Ta and 238U was investigated. Attempts were made to produce new neutron-excess Np and Pu isotopes by the deep inelastic mechanism. No evidence was found for 242Np or 247Pu. Estimates were made for the production of 242Np, 247Pu, and 248Am from heavy-ion reactions with uranium targets. Comparisons of reactions of 86Kr and 136Xe ions with thick 181Ta targets and 86Kr, 136Xe and 238U ions with thick 238U targets indicate that the most probable products are not dependent on the projectile. The most probable products can be predicted by the equation Z - Z_target = 0.43 (A - A_target) + 1.0. The major effect of the projectile is the magnitude of the production cross section of the heavy products. Based on these results, estimates are made of the most probable mass of element 114 produced from heavy-ion reactions with 248Cm and 254Es targets. These estimates give the mass number of element 114 as approx. 287 if produced in heavy-ion reactions with these very heavy targets. Excitation functions of gold and bismuth isotopes arising from 86Kr- and 136Xe-induced reactions with thin 181Ta targets were measured. These results indicate that the shape and location (in Z and A above the target) of the isotopic distributions are not strongly dependent on the projectile incident energy. Also, the nuclidic cross sections are found to increase with an increase in projectile energy to a maximum at approximately 1.4 to 1.5 times the Coulomb barrier. Above this maximum, the nuclidic cross sections are found to decrease with an increase in projectile energy. This decrease in cross section is believed to be due to fission of the heavy products caused by high excitation energy and angular momentum. 111 references, 39 figures, 34 tables.
External calibration of polarimetric radars using point and distributed targets
NASA Technical Reports Server (NTRS)
Yueh, S. H.; Kong, J. A.; Shin, R. T.
1991-01-01
Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. Then the problem of polarimetric calibration using two point targets and one distributed target reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed, with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.
External calibration of polarimetric radars using point and distributed targets
NASA Astrophysics Data System (ADS)
Yueh, S. H.; Kong, J. A.; Shin, R. T.
1991-08-01
Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. Then the problem of polarimetric calibration using two point targets and one distributed target reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed, with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.
Factors determining antibody distribution in tumors.
Thurber, Greg M; Schmidt, Michael M; Wittrup, K Dane
2008-02-01
The development of antibody therapies for cancer is increasing rapidly, primarily owing to their specificity. Antibody distribution in tumors is often extremely uneven, however, leading to some malignant cells being exposed to saturating concentrations of antibody, whereas others are completely untargeted. This is detrimental because large regions of cells escape therapy, whereas other regions might be exposed to suboptimal concentrations that promote a selection of resistant mutants. The distribution of antibody depends on a variety of factors, including dose, affinity, antigens per cell and molecular size. Because these parameters are often known or easily estimated, a quick calculation based on simple modeling considerations can predict the uniformity of targeting within a tumor. Such analyses should enable experimental researchers to identify in a straightforward way the limitations in achieving evenly distributed antibody, and design and test improved antibody therapeutics more rationally.
Factors determining antibody distribution in tumors
Thurber, Greg M.; Schmidt, Michael M.; Wittrup, K. Dane
2009-01-01
The development of antibody therapies for cancer is increasing rapidly, primarily owing to their specificity. Antibody distribution in tumors is often extremely uneven, however, leading to some malignant cells being exposed to saturating concentrations of antibody, whereas others are completely untargeted. This is detrimental because large regions of cells escape therapy, whereas other regions might be exposed to suboptimal concentrations that promote a selection of resistant mutants. The distribution of antibody depends on a variety of factors, including dose, affinity, antigens per cell and molecular size. Because these parameters are often known or easily estimated, a quick calculation based on simple modeling considerations can predict the uniformity of targeting within a tumor. Such analyses should enable experimental researchers to identify in a straightforward way the limitations in achieving evenly distributed antibody, and design and test improved antibody therapeutics more rationally. PMID:18179828
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
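As a reference for the dose-volume histogram summaries discussed above, the sketch below computes a cumulative DVH from a 3-D dose array and an organ mask. The spherical "organ", the synthetic dose distribution, and the added noise are illustrative assumptions, not the phantom or reconstruction pipeline used in the study.

```python
import numpy as np

def cumulative_dvh(dose, mask, bins=100):
    """Cumulative DVH: fraction of organ volume receiving at least each dose level.

    dose : 3-D array of voxel doses (or dose rates).
    mask : boolean array of the same shape selecting the organ's voxels.
    """
    organ_dose = dose[mask]
    levels = np.linspace(0.0, organ_dose.max(), bins)
    volume_fraction = np.array([(organ_dose >= d).mean() for d in levels])
    return levels, volume_fraction

# Synthetic example: a noisy, non-uniform dose inside a spherical "organ".
rng = np.random.default_rng(0)
zz, yy, xx = np.mgrid[:64, :64, :64]
r2 = (xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2
mask = r2 < 20 ** 2
dose = np.exp(-r2 / 800.0)
dose += rng.normal(0, 0.02, dose.shape)   # noise stands in for reconstruction effects

levels, vol = cumulative_dvh(dose, mask)
d50 = levels[np.argmin(np.abs(vol - 0.5))]   # dose received by 50% of the organ volume
print("D50 =", round(float(d50), 3))
```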
The Andromeda Optical and Infrared Disk Survey
NASA Astrophysics Data System (ADS)
Sick, Jonathan
The spectral energy distributions of galaxies inform us about a galaxy's stellar populations and interstellar medium, revealing stories of galaxy formation and evolution. How we interpret this light depends in part on our proximity to the galaxy. For nearby galaxies, detailed star formation histories can be extracted from the resolved stellar populations, while more distant galaxies feature the contributions of entire stellar populations within their integrated spectral energy distribution (SED). This thesis aims to resolve whether the techniques used to investigate stellar populations in distant galaxies are consistent with those available for nearby galaxies. As the nearest spiral galaxy, the Andromeda Galaxy (M31) is the ideal testbed for the joint study of resolved stellar populations and panchromatic SEDs. We present the Andromeda Optical and Infrared Disk Survey (ANDROIDS), which adds new near-UV to near-IR (u*g'r'i'JKs) imaging using the MegaCam and WIRCam cameras at the Canada-France-Hawaii telescope to the available M31 panchromatic dataset. To accurately subtract photometric background from our extremely wide-field (14 square degree) mosaics, we present observing and data reduction techniques with sky-target nodding, optimization of image-to-image surface brightness, and a novel hierarchical Bayesian model to trace the background signal while modelling the astrophysical SED. We model the spectral energy distributions of M31 pixels with MAGPHYS (da Cunha et al. 2008) and compare those results to resolved stellar population models of the same pixels from the Panchromatic Hubble Andromeda Treasury (PHAT) survey (Williams et al. 2017). We find substantial (0.3 dex) differences in stellar mass estimates despite a common use of the Chabrier (2003) initial mass function. Stellar mass estimated from the resolved stellar population is larger than any mass estimate from SED models or colour-M/L relations (CMLRs). There is also considerable diversity among CMLR estimators, largely driven by differences in the star formation history prior distribution. We find broad consistency between the star formation history estimated by integrated spectral energy distributions and resolved stars. Generally, spectral energy distribution models yield a stronger inside-out radial metallicity gradient and bias towards younger mean ages than resolved stellar population models.
Dynamical approach to heavy-ion induced fusion using actinide target
NASA Astrophysics Data System (ADS)
Aritomo, Y.; Hagino, K.; Chiba, S.; Nishio, K.
2012-10-01
To treat heavy-ion reactions with an actinide target nucleus, we propose a model which takes into account the coupling to the collective states of the interacting nuclei in the penetration of the Coulomb barrier and the dynamical evolution of the nuclear shape from the contact configuration. A fluctuation-dissipation model (Langevin equation) was applied in the dynamical calculation, where the effect of nuclear orientation at the initial impact on the prolately deformed target nucleus was considered. Using this model, we analyzed the experimental data for the mass distribution of fission fragments (MDFF) in the reaction 36S+238U at several incident energies. Fusion-fission, quasifission and deep quasifission are separated as different trajectories on the potential energy surface. We estimated the fusion cross section of the reaction.
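The fluctuation-dissipation (Langevin) ingredient of the model can be illustrated in one dimension. The sketch below integrates a Langevin equation on a schematic double-well potential; the potential, friction, temperature, and the interpretation of the wells are illustrative stand-ins for the multidimensional shape dynamics actually used in such calculations.

```python
import numpy as np

rng = np.random.default_rng(11)

def potential_grad(q):
    """Gradient of a schematic double-well potential standing in for the
    fusion/quasifission landscape along one shape coordinate."""
    return 4 * q ** 3 - 2 * q

def langevin_trajectory(q0, steps=20000, dt=1e-3, gamma=2.0, temperature=0.3, mass=1.0):
    """Euler-Maruyama integration of dq/dt = p/m,
    dp/dt = -dV/dq - (gamma/m) p + random force,
    the standard fluctuation-dissipation form."""
    q, p = q0, 0.0
    noise_amp = np.sqrt(2.0 * gamma * temperature / dt)
    traj = np.empty(steps)
    for i in range(steps):
        force = -potential_grad(q) - (gamma / mass) * p + noise_amp * rng.normal()
        p += force * dt
        q += (p / mass) * dt
        traj[i] = q
    return traj

# Fraction of time the trajectory spends in the other well (q < 0), as a crude
# stand-in for sorting trajectories into different reaction channels.
traj = langevin_trajectory(q0=1.0)
print("fraction of steps with q < 0:", round(float((traj < 0).mean()), 3))
```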
Probabilistic neural networks modeling of the 48-h LC50 acute toxicity endpoint to Daphnia magna.
Niculescu, S P; Lewis, M A; Tigner, J
2008-01-01
Two modeling experiments based on the maximum likelihood estimation paradigm and targeting prediction of the Daphnia magna 48-h LC50 acute toxicity endpoint for both organic and inorganic compounds are reported. The resulting models' computational algorithms are implemented as basic probabilistic neural networks with a Gaussian kernel (statistical corrections included). The first experiment uses strictly D. magna information for 971 structures as training/learning data, and the resulting model targets practical applications. The second experiment uses the same training/learning information plus additional data on another 29 compounds whose endpoint information originates from D. pulex and Ceriodaphnia dubia. It only targets investigation of the effect of mixing strictly D. magna 48-h LC50 modeling information with small amounts of similar information estimated from related species, and this is done as part of the validation process. A complementary 81-compound dataset (involving only strictly D. magna information) is used to perform external testing. On this external test set, the Gaussian character of the distribution of the residuals is confirmed for both models. This allows the use of traditional statistical methodology to compute confidence intervals for unknown measured values based on the models' predictions. Examples are provided for the model targeting practical applications. For the same model, a comparison with other existing models targeting the same endpoint is performed.
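As a rough illustration of a Gaussian-kernel network of this kind, the sketch below implements a Nadaraya-Watson / GRNN-style predictor in which each query is a kernel-weighted average of training endpoints. It is a simplified stand-in, not the reported models; the descriptors, endpoints, and bandwidth are synthetic assumptions.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Gaussian-kernel prediction: each query point receives a kernel-weighted
    average of the training endpoints (GRNN / Nadaraya-Watson style)."""
    preds = np.empty(len(X_query))
    for i, x in enumerate(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        preds[i] = np.dot(w, y_train) / w.sum()
    return preds

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 5))          # 5 hypothetical molecular descriptors
y_train = X_train[:, 0] - 0.5 * X_train[:, 2] + rng.normal(0, 0.1, 200)  # toy endpoint
X_test = rng.normal(size=(10, 5))

print(np.round(grnn_predict(X_train, y_train, X_test), 2))
```

The kernel width sigma plays the same smoothing role as the spread parameter in a basic probabilistic neural network and would normally be tuned on the training/learning data.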
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pursley, Jennifer; Risholm, Petter; Fedorov, Andriy
2012-11-15
Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate. Results: The authors observed variation in the shape and volume of the segmented prostate in diagnostic and intraprocedural images. The probabilistic method allowed us to convey registration results in terms of posterior distributions, with the dispersion providing a patient-specific estimate of the registration uncertainty. The median of the predictive distance distribution between the deformed prostate boundary and the segmented boundary was ≤3 mm (95th percentiles within ±4 mm) for all ten patients. The accuracy and precision of the internal deformation was evaluated by comparing the posterior predictive distance distribution for the CZ-PZ interface for each patient, with the median distance ranging from -0.6 to 2.4 mm. Posterior predictive distances between naturally occurring landmarks showed registration errors of ≤5 mm in any direction. The uncertainty was not a global measure, but instead was local and varied throughout the registration region. Registration uncertainties were largest in the apical region of the prostate. Conclusions: Using a Bayesian nonrigid registration method, the authors determined the posterior distribution on deformations between diagnostic and intraprocedural MR images and quantified the uncertainty in the registration results. The feasibility of this approach was tested and results were positive. The probabilistic framework allows us to evaluate both patient-specific and location-specific estimates of the uncertainty in the registration result. 
Although the framework was tested on MR-guided procedures, the preliminary results suggest that it may be applied to TRUS-guided procedures as well, where the addition of diagnostic MR information may have a larger impact on target definition and clinical guidance.
Pursley, Jennifer; Risholm, Petter; Fedorov, Andriy; Tuncali, Kemal; Fennessy, Fiona M.; Wells, William M.; Tempany, Clare M.; Cormack, Robert A.
2012-01-01
Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate. Results: The authors observed variation in the shape and volume of the segmented prostate in diagnostic and intraprocedural images. The probabilistic method allowed us to convey registration results in terms of posterior distributions, with the dispersion providing a patient-specific estimate of the registration uncertainty. The median of the predictive distance distribution between the deformed prostate boundary and the segmented boundary was ⩽3 mm (95th percentiles within ±4 mm) for all ten patients. The accuracy and precision of the internal deformation was evaluated by comparing the posterior predictive distance distribution for the CZ-PZ interface for each patient, with the median distance ranging from −0.6 to 2.4 mm. Posterior predictive distances between naturally occurring landmarks showed registration errors of ⩽5 mm in any direction. The uncertainty was not a global measure, but instead was local and varied throughout the registration region. Registration uncertainties were largest in the apical region of the prostate. Conclusions: Using a Bayesian nonrigid registration method, the authors determined the posterior distribution on deformations between diagnostic and intraprocedural MR images and quantified the uncertainty in the registration results. The feasibility of this approach was tested and results were positive. The probabilistic framework allows us to evaluate both patient-specific and location-specific estimates of the uncertainty in the registration result. 
Although the framework was tested on MR-guided procedures, the preliminary results suggest that it may be applied to TRUS-guided procedures as well, where the addition of diagnostic MR information may have a larger impact on target definition and clinical guidance. PMID:23127078
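As a rough illustration of the kind of posterior characterization described above, the sketch below runs a random-walk Metropolis sampler over a single hypothetical deformation parameter and summarizes its posterior spread. The prior width, likelihood, and data are invented stand-ins, not the authors' biomechanical model or their MCMC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stand-in for a deformation parameter d (mm): the posterior
# combines a smoothness prior (zero-mean Gaussian) with a boundary-alignment
# likelihood built from observed surface distances (synthetic data only).
observed_mismatch = rng.normal(2.0, 1.0, size=40)   # mm, illustrative

def log_posterior(d, sigma_prior=5.0, sigma_like=1.0):
    log_prior = -0.5 * (d / sigma_prior) ** 2
    log_like = -0.5 * np.sum(((observed_mismatch - d) / sigma_like) ** 2)
    return log_prior + log_like

# Random-walk Metropolis sampling of the posterior on the deformation.
samples, d = [], 0.0
for _ in range(20000):
    prop = d + rng.normal(0, 0.3)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(d):
        d = prop
    samples.append(d)
samples = np.array(samples[5000:])          # discard burn-in

# Point estimate and a case-specific uncertainty measure from the samples.
print("posterior mean deformation:", samples.mean())
print("95% credible interval:", np.percentile(samples, [2.5, 97.5]))
```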
NASA Technical Reports Server (NTRS)
Reschke, Millard F.; Somers, Jeffrey T.; Feiveson, Alan H.; Leigh, R. John; Wood, Scott J.; Paloski, William H.; Kornilova, Ludmila
2006-01-01
We studied the ability to hold the eyes in eccentric horizontal or vertical gaze angles in 68 normal humans, age range 19-56. Subjects attempted to sustain visual fixation of a briefly flashed target located 30° in the horizontal plane and 15° in the vertical plane in a dark environment. Conventionally, the ability to hold eccentric gaze is estimated by fitting centripetal eye drifts with exponential curves and calculating the time constant (t(sub c)) of these slow phases of gaze-evoked nystagmus. Although the distribution of time-constant measurements (t(sub c)) in our normal subjects was extremely skewed due to occasional test runs that exhibited near-perfect stability (large t(sub c) values), we found that log10(t(sub c)) was approximately normally distributed within classes of target direction. Therefore, statistical estimation and inference on the effect of target direction was performed on values of z ≡ log10(t(sub c)). Subjects showed considerable variation in their eye-drift performance over repeated trials; nonetheless, statistically significant differences emerged: values of t(sub c) were significantly higher for gaze elicited to targets in the horizontal plane than for the vertical plane (P less than 10(exp -5)), suggesting eccentric gaze holding is more stable in the horizontal than in the vertical plane. Furthermore, centrifugal eye drifts were observed in 13.3, 16.0 and 55.6% of cases for horizontal, upgaze and downgaze tests, respectively. Fifth percentile values of the time constant were estimated to be 10.2 sec, 3.3 sec and 3.8 sec for horizontal, upward and downward gaze, respectively. The difference between horizontal and vertical gaze holding may be ascribed to separate components of the velocity-to-position neural integrator for eye movements, and to differences in orbital mechanics. Our statistical method for representing the range of normal eccentric gaze stability can be readily applied in a clinical setting to patients who were exposed to environments that may have modified their central integrators and thus require monitoring. Patients with gaze-evoked nystagmus can be flagged by comparison with the normative criteria established above.
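A minimal sketch of the time-constant analysis described above, assuming a single-exponential centripetal drift and a log10-normal distribution of t(sub c) across subjects. The drift record and the group mean and SD of log10(t(sub c)) are synthetic values, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic slow-phase record: eye position drifting back toward centre from a
# 30-degree eccentric target, E(t) = E0 * exp(-t / tc) (values illustrative).
t = np.linspace(0.0, 4.0, 200)                     # seconds
true_tc = 12.0
eye_pos = 30.0 * np.exp(-t / true_tc) + rng.normal(0, 0.05, t.size)

# Fit the exponential drift by linear regression on log position.
slope, _ = np.polyfit(t, np.log(eye_pos), 1)
tc = -1.0 / slope                                   # estimated time constant (s)
z = np.log10(tc)                                    # approximately normal across subjects

# Normative 5th percentile under the log10-normal assumption, given a group
# mean and SD of z (hypothetical values, not the published normative data).
z_mean, z_sd = 1.1, 0.45
tc_5th = 10 ** (z_mean - 1.645 * z_sd)
print(f"estimated tc = {tc:.1f} s, 5th percentile of normal range = {tc_5th:.1f} s")
```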
Brady, Eoghan; Hill, Kenneth
2017-01-01
Under-five mortality estimates are increasingly used in low- and middle-income countries to target interventions and measure performance against global development goals. Two new methods to rapidly estimate under-5 mortality based on Summary Birth Histories (SBH) were described in a previous paper and tested with data available. This analysis tests the methods using data appropriate to each method from 5 countries that lack vital registration systems. SBH data are collected across many countries through censuses and surveys, and indirect methods often rely upon their quality to estimate mortality rates. The Birth History Imputation method imputes data from a recent Full Birth History (FBH) onto the birth, death and age distribution of the SBH to produce estimates based on the resulting distribution of child mortality. DHS FBHs and MICS SBHs are used for all five countries. In the implementation, 43 of 70 estimates are within 20% of validation estimates (61%). Mean Absolute Relative Error is 17.7%. 1 of 7 countries produces acceptable estimates. The Cohort Change method considers the differences in births and deaths between repeated Summary Birth Histories at 1- or 2-year intervals to estimate the mortality rate in that period. SBHs are taken from Brazil's PNAD Surveys 2004-2011 and validated against IGME estimates. 2 of 10 estimates are within 10% of validation estimates. Mean absolute relative error is greater than 100%. Appropriate testing of these new methods demonstrates that they do not produce sufficiently good estimates based on the data available. We conclude this is due to the poor quality of most SBH data included in the study. This has wider implications for the next round of censuses and future household surveys across many low- and middle-income countries.
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.
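The sketch below illustrates the general flavor of Haar-DWT feature extraction on a single-channel return. The coefficient-selection rule here is a simple largest-magnitude cut standing in for the maximum-entropy significance measure named above, and the signal is synthetic rather than field-trial data.

```python
import numpy as np

def haar_dwt(x, levels):
    """Multi-level Haar DWT; returns the final approximation plus detail bands."""
    x = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = x[0::2], x[1::2]
        approx = (even + odd) / np.sqrt(2.0)
        detail = (even - odd) / np.sqrt(2.0)
        details.append(detail)
        x = approx
    return x, details

def band_energy_features(signal, levels=3, keep_fraction=0.2):
    """Energy of the largest-magnitude coefficients in each band (a stand-in
    for the maximum-entropy significance rule described in the abstract)."""
    approx, details = haar_dwt(signal, levels)
    feats = []
    for band in details + [approx]:
        k = max(1, int(keep_fraction * band.size))
        significant = np.sort(np.abs(band))[-k:]
        feats.append(np.sum(significant ** 2))
    return np.array(feats)

# Example: a synthetic seismic-like target return sampled at 256 points.
rng = np.random.default_rng(2)
sig = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 256)) + 0.3 * rng.normal(size=256)
print(band_energy_features(sig))
```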
A research on snow distribution in mountainous area using airborne laser scanning
NASA Astrophysics Data System (ADS)
Nishihara, T.; Tanise, A.
2015-12-01
In snowy cold regions, the snowmelt water stored in dams in early spring meets the water demand for the summer season. Thus, snowmelt water serves as an important water resource. However, snowmelt water can also cause snowmelt floods. Therefore, it is necessary to estimate the snow water equivalent in a dam basin as accurately as possible. For this reason, the dam operation offices in Hokkaido, Japan conduct snow surveys every March to estimate the snow water equivalent in the dam basin. In estimating, we generally apply a relationship between elevation and snow water equivalent. However, above the forest line, snow surveys are generally conducted along ridges due to the risk of avalanches or other hazards. As a result, the snow water equivalent above the forest line is significantly underestimated. In this study, we conducted airborne laser scanning twice in the same target area (in 2012 and 2015) to measure snow depth in the high-elevation area, including above the forest line, and analyzed the relationships between snow depth above the forest line and several terrain indicators. Our target area was the Chubetsu dam basin, located in a high-elevation mountainous area of central Hokkaido. Hokkaido is the northernmost island of Japan and is therefore a cold and snowy region. The target range for airborne laser scanning was 10 km2. About 60% of the target range was above the forest line. First, we analyzed the relationship between elevation and snow depth. Below the forest line, snow depth increased linearly with elevation. Above the forest line, on the other hand, snow depth varied greatly. Second, we analyzed the relationship between overground-openness and snow depth above the forest line. Overground-openness is an indicator quantifying how far a target point is above or below the surrounding surface. A simple relationship emerged: snow depth decreased linearly as overground-openness increased. This means that areas with heavy snow cover are distributed in valleys while areas with light cover are on ridges. Lastly, we compared the 2012 and 2015 results. The same characteristic of snow depth mentioned above was found; however, the regression coefficients of the linear equations differed according to each year's weather conditions.
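The two linear relationships described above can be illustrated with ordinary least-squares fits; the sketch below uses synthetic elevation, openness, and snow-depth values, not the Chubetsu basin survey data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Below the forest line: snow depth increases roughly linearly with elevation.
elev = rng.uniform(400, 1200, 80)                      # m
depth_below = 0.002 * elev + 0.5 + rng.normal(0, 0.2, 80)
a_elev, b_elev = np.polyfit(elev, depth_below, 1)

# Above the forest line: snow depth decreases roughly linearly with
# overground-openness (ridges positive, valleys negative; units illustrative).
openness = rng.uniform(-20, 20, 80)
depth_above = 3.0 - 0.05 * openness + rng.normal(0, 0.3, 80)
a_open, b_open = np.polyfit(openness, depth_above, 1)

print(f"depth = {a_elev:.4f} * elevation + {b_elev:.2f}   (below forest line)")
print(f"depth = {a_open:.3f} * openness + {b_open:.2f}    (above forest line)")
```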
Assessing the Clinical Impact of Approximations in Analytical Dose Calculations for Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuemann, Jan, E-mail: jschuemann@mgh.harvard.edu; Giantsoudi, Drosoula; Grassberger, Clemens
2015-08-01
Purpose: To assess the impact of approximations in current analytical dose calculation methods (ADCs) on tumor control probability (TCP) in proton therapy. Methods: Dose distributions planned with ADC were compared with delivered dose distributions as determined by Monte Carlo simulations. A total of 50 patients were investigated in this analysis with 10 patients per site for 5 treatment sites (head and neck, lung, breast, prostate, liver). Differences were evaluated using dosimetric indices based on a dose-volume histogram analysis, a γ-index analysis, and estimates of TCP. Results: We found that ADC overestimated the target doses on average by 1% to 2% for all patients considered. The mean dose, D95, D50, and D02 (the dose values covering 95%, 50%, and 2% of the target volume, respectively) were predicted within 5% of the delivered dose. The γ-index passing rate for target volumes was above 96% for a 3%/3 mm criterion. Differences in TCP were up to 2%, 2.5%, 6%, 6.5%, and 11% for liver and breast, prostate, head and neck, and lung patients, respectively. Differences in normal tissue complication probabilities for bladder and anterior rectum of prostate patients were less than 3%. Conclusion: Our results indicate that current dose calculation algorithms lead to underdosage of the target by as much as 5%, resulting in differences in TCP of up to 11%. To ensure full target coverage, advanced dose calculation methods like Monte Carlo simulations may be necessary in proton therapy. Monte Carlo simulations may also be required to avoid biases resulting from systematic discrepancies in calculated dose distributions for clinical trials comparing proton therapy with conventional radiation therapy.
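For reference, the dose-volume indices used above (mean dose, D95, D50, D02) can be read off a set of per-voxel target doses as percentiles of the dose distribution. The sketch below compares two synthetic dose arrays and is not tied to the study's treatment plans.

```python
import numpy as np

def dvh_indices(target_dose):
    """Dose-volume indices from per-voxel target doses: Dx is the minimum dose
    received by the best-covered x% of the target volume."""
    d = np.asarray(target_dose, dtype=float)
    return {
        "Dmean": d.mean(),
        "D95": np.percentile(d, 100 - 95),
        "D50": np.percentile(d, 100 - 50),
        "D02": np.percentile(d, 100 - 2),
    }

# Compare an analytical plan against a Monte Carlo recalculation (synthetic voxels).
rng = np.random.default_rng(4)
adc_dose = rng.normal(70.0, 1.0, 5000)                  # Gy, analytical plan
mc_dose = adc_dose * rng.normal(0.985, 0.01, 5000)      # ~1.5% lower delivered dose

for name, plan in [("ADC", adc_dose), ("MC", mc_dose)]:
    idx = dvh_indices(plan)
    print(name, {k: round(v, 2) for k, v in idx.items()})
```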
Statistics based sampling for controller and estimator design
NASA Astrophysics Data System (ADS)
Tenne, Dirk
The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation addresses three topics: nonlinear estimation, target tracking, and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher-order accuracy. The so-called unscented transformation has been extended to capture higher-order moments. Furthermore, higher-order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three-dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant-velocity filter by utilizing Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. The third part of this dissertation addresses the design of controllers that include knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points which are calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance measure of the plant over the domain of uncertainty, consisting of a combination of the mean and variance. The proposed technique is illustrated on three benchmark problems. The first relates to the design of prefilters for a linear and nonlinear spring-mass-dashpot system and the second applies a feedback controller to a hovering helicopter. Lastly, the statistical robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.
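A minimal sketch of the standard (second-order) unscented transformation on which the higher-order extensions described above build; the polar-to-Cartesian example and its noise levels are illustrative only.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate mean and covariance through a nonlinear map f using the
    standard (second-order) unscented transformation."""
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])      # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Example: polar-to-Cartesian conversion, a common target-tracking nonlinearity.
mean = np.array([10.0, np.pi / 4])                      # range (m), bearing (rad)
cov = np.diag([0.5 ** 2, np.deg2rad(3.0) ** 2])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
print(unscented_transform(mean, cov, f, kappa=1.0))
```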
SU-F-T-24: Impact of Source Position and Dose Distribution Due to Curvature of HDR Transfer Tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, A; Yue, N
2016-06-15
Purpose: Brachytherapy is a highly targeted form of radiotherapy. While this may lead to ideal dose distributions on the treatment planning system, a small error in source location can lead to a change in the delivered dose distribution. The purpose of this study is to quantify the source position error due to curvature of the transfer tubes and the impact this may have on the dose distribution. Methods: Since the source travels along the midline of the tube, an estimate of the positioning error for various angles of curvature was determined using geometric properties of the tube. Based on the range of values, a specific shift was chosen to alter the treatment plans for a number of cervical cancer patients who had undergone HDR brachytherapy boost using tandem and ovoids. The impact of dose to the target and organs at risk was determined and checked against guidelines outlined by the radiation oncologist. Results: The estimated positioning error was 2 mm short of the expected position (a curved tube can only cause the source to fall short of where it would reach with a straight tube). The quantitative impact on the dose distribution is still in the process of being analyzed. Conclusion: The accepted positioning tolerance for the source position of a HDR brachytherapy unit is plus or minus 1 mm. If there is an additional 2 mm discrepancy due to tube curvature, this can result in a source being 1 mm to 3 mm short of the expected location. While we do always attempt to keep the tubes straight, in some cases, such as with tandem and ovoids, the tandem connector does not extend as far out from the patient, so the ovoid tubes always contain some degree of curvature. The dose impact of this may be significant.
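One plausible geometric reading of the shortfall described above treats the bent portion of the tube as a circular arc: the source cable travels the arc length while the planned position assumes a straight path, so the tip falls short by the arc length minus the chord length. The tube length and bend angles below are illustrative, and this is not necessarily the authors' geometric model.

```python
import numpy as np

def source_shortfall(tube_length_mm, bend_angle_deg):
    """Shortfall of the source tip when a transfer tube of fixed length is bent
    into a circular arc: shortfall = arc length - chord length."""
    theta = np.deg2rad(bend_angle_deg)
    if theta == 0:
        return 0.0
    radius = tube_length_mm / theta                 # radius of the circular arc
    chord = 2.0 * radius * np.sin(theta / 2.0)
    return tube_length_mm - chord

# Illustrative numbers for a 1000 mm tube bent through small angles.
for angle in (5, 10, 15, 20):
    print(f"bend {angle:3d} deg -> shortfall {source_shortfall(1000.0, angle):.2f} mm")
```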
Kepler False Positive Rate & Occurrence of Earth-size and Larger Planets
NASA Astrophysics Data System (ADS)
Fressin, Francois; Torres, G.; Charbonneau, D.; Kepler Team
2013-01-01
We model the Kepler exoplanet survey targets and their background stars to estimate the occurrence of astrophysical configurations which could mimic an exoplanetary transit. Using real noise level estimates, we compute the number and the characteristics of detectable eclipsing pairs involving stars or planets. We select the fraction of those that would pass the Kepler candidate vetting procedure, including the modeling of the centroid shift of their position on the Kepler camera. By comparing their distribution with that of the Kepler Objects of Interest from the first 6 quarters of Kepler data, we quantify the false positive rate of Kepler, as a function of candidate planet size and period. Most importantly, this approach allows quantifying and characterizing the distribution of planets, with no assumption of any prior, as the remaining population of the Kepler candidate list minus the simulated population of alternate astrophysical causes. We study the actual detection recovery rate for Kepler that reproduces both the KOI size and period distributions as well as their SNR distribution. We estimate the occurrence of planets down to Earth-size, and study whether their frequency is correlated with their host star spectral type. This work is supported by the Spitzer General Observer Proposal #80117 - Validating the First Habitable-Zone Planet Candidates Identified by the NASA Kepler Mission, and by the Kepler Participating Scientist Contract led by David Charbonneau, to confirm the planetary nature of candidates identified by the Kepler mission.
Small area variation in diabetes prevalence in Puerto Rico
Tierney, Edward F.; Burrows, Nilka R.; Barker, Lawrence E.; Beckles, Gloria L.; Boyle, James P.; Cadwell, Betsy L.; Kirtland, Karen A.; Thompson, Theodore J.
2015-01-01
Objective To estimate the 2009 prevalence of diagnosed diabetes in Puerto Rico among adults ≥ 20 years of age in order to gain a better understanding of its geographic distribution so that policymakers can more efficiently target prevention and control programs. Methods A Bayesian multilevel model was fitted to the combined 2008–2010 Behavioral Risk Factor Surveillance System and 2009 United States Census data to estimate diabetes prevalence for each of the 78 municipios (counties) in Puerto Rico. Results The mean unadjusted estimate for all counties was 14.3% (range by county, 9.9%–18.0%). The average width of the confidence intervals was 6.2%. Adjusted and unadjusted estimates differed little. Conclusions These 78 county estimates are higher on average and showed less variability (i.e., had a smaller range) than the previously published estimates of the 2008 diabetes prevalence for all United States counties (mean, 9.9%; range, 3.0%–18.2%). PMID:23939364
NASA Astrophysics Data System (ADS)
Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.
2017-12-01
In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all data available. However, classic inverse modeling frameworks typically only make use of information contained in in-situ field measurements to provide estimates of hydrogeological parameters. Such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, the transfer of ex-situ information coming from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework to represent such ex-situ information by virtue of the prior distribution and to combine it with in-situ information from the target site. In this study, we present an innovative data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists of two steps, both relying on statistical and machine learning methods. The first step is data selection; it consists of selecting sites similar to the target site. We use clustering methods for selecting similar sites based on observable hydrogeological features. The second step is data assimilation; it consists of assimilating data from the selected similar sites into the informative prior. We use a Bayesian hierarchical model to account for inter-site variability and to allow for the assimilation of multiple types of site-specific data. We present the application and validation of these methods on an established database of hydrogeological parameters. Data and methods are implemented in the form of an open-source R-package and therefore facilitate easy use by other practitioners.
Systematics of capture and fusion dynamics in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Wang, Bing; Wen, Kai; Zhao, Wei-Juan; Zhao, En-Guang; Zhou, Shan-Gui
2017-03-01
We perform a systematic study of capture excitation functions by using an empirical coupled-channel (ECC) model. In this model, a barrier distribution is used to effectively take into account the effects of couplings between the relative motion and intrinsic degrees of freedom. The shape of the barrier distribution is of an asymmetric Gaussian form. The effect of neutron transfer channels is also included in the barrier distribution. Based on the interaction potential between the projectile and the target, empirical formulas are proposed to determine the parameters of the barrier distribution. Theoretical estimates for barrier distributions and calculated capture cross sections together with experimental cross sections of 220 reaction systems with 182 ⩽ Z_P Z_T ⩽ 1640 are tabulated. The results show that the ECC model together with the empirical formulas for the parameters of the barrier distribution works quite well in the energy region around the Coulomb barrier. This ECC model can provide predictions of capture cross sections for the synthesis of superheavy nuclei as well as valuable information on capture and fusion dynamics.
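A generic sketch of the barrier-distribution idea: average a single-barrier (Wong-type) cross section over an asymmetric Gaussian barrier distribution. The parametrization and all numerical values below are illustrative and are not the ECC model's empirical formulas.

```python
import numpy as np

def barrier_distribution(B, B0, w_left, w_right):
    """Asymmetric Gaussian barrier distribution, normalized on a uniform grid:
    different widths below and above the central barrier B0."""
    w = np.where(B < B0, w_left, w_right)
    d = np.exp(-((B - B0) / w) ** 2)
    return d / (np.sum(d) * (B[1] - B[0]))

def capture_cross_section(E, B_grid, D_B, R_b=11.5, hbar_omega=4.0):
    """Capture cross section (mb) as the average of a Wong-type single-barrier
    cross section over the barrier distribution. R_b in fm, energies in MeV;
    parameter values are illustrative only (1 fm^2 = 10 mb)."""
    sigma_single = (hbar_omega * R_b ** 2 / (2.0 * E)) * np.log1p(
        np.exp(2.0 * np.pi * (E - B_grid) / hbar_omega)
    )
    return 10.0 * np.sum(D_B * sigma_single) * (B_grid[1] - B_grid[0])

B_grid = np.linspace(60.0, 80.0, 400)
D_B = barrier_distribution(B_grid, B0=70.0, w_left=2.0, w_right=3.0)
for E in (65.0, 70.0, 75.0, 80.0):
    print(f"E = {E:5.1f} MeV  sigma_cap ~ {capture_cross_section(E, B_grid, D_B):8.1f} mb")
```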
Bhat, Somanath; McLaughlin, Jacob L H; Emslie, Kerry R
2011-02-21
Digital polymerase chain reaction (dPCR) has the potential to enable accurate quantification of target DNA copy number provided that all target DNA molecules are successfully amplified. Following duplex dPCR analysis from a linear DNA target sequence that contains single copies of two independent template sequences, we have observed that amplification of both templates in a single partition does not always occur. To investigate this finding, we heated the target DNA solution to 95 °C for increasing time intervals and then immediately chilled on ice prior to preparing the dPCR mix. We observed an exponential decline in estimated copy number (R(2) ≥ 0.98) of the two template sequences when amplified from either a linearized plasmid or a 388 base pair (bp) amplicon containing the same two template sequences. The distribution of amplifiable templates and the final concentration (copies per µL) were both affected by heat treatment of the samples at 95 °C from 0 s to 30 min. The proportion of target sequences from which only one of the two templates was amplified in a single partition (either 1507 or hmg only) increased over time, while the proportion of target sequences where both templates were amplified (1507 and hmg) in each individual partition declined rapidly from 94% to 52% (plasmid) and 88% to 31% (388 bp amplicon), suggesting an increase in the number of targets from which both templates no longer amplify. A 10 min incubation at 95 °C reduced the initial amplifiable template concentration of the plasmid and the 388 bp amplicon by 59% and 91%, respectively. To determine if a similar decrease in amplifiable target occurs during the default pre-activation step of a typical PCR amplification protocol, we used mastermixes with a 20 s or 10 min hot-start. The choice of mastermix and consequent pre-activation time did not affect the estimated plasmid concentration. Therefore, we conclude that prolonged exposure of this DNA template to elevated temperatures could lead to significant bias in dPCR measurements. However, care must be taken when designing PCR and non-PCR based experiments by reducing exposure of the DNA template to sustained elevated temperatures in order to improve accuracy in copy number estimation and concentration determination.
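For context, dPCR copy-number estimates of the kind discussed above follow from Poisson partition statistics; the sketch below uses a hypothetical partition volume and invented positive-partition counts.

```python
import numpy as np

def dpcr_copies_per_ul(n_positive, n_total, partition_volume_nl=0.85):
    """Standard Poisson estimate of template concentration from digital PCR:
    lambda = -ln(fraction of negative partitions); concentration in copies per
    microlitre = lambda / partition volume. The 0.85 nL partition volume is an
    example value and is platform-dependent."""
    frac_negative = (n_total - n_positive) / n_total
    lam = -np.log(frac_negative)                  # mean copies per partition
    return lam / (partition_volume_nl * 1e-3)     # copies per uL

# Example: the same sample assayed for the two templates after heat treatment;
# a drop in double-positive partitions shows up as diverging single-template counts.
print(dpcr_copies_per_ul(n_positive=12000, n_total=20000))
print(dpcr_copies_per_ul(n_positive=7500, n_total=20000))
```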
Alternatives to an extended Kalman Filter for target image tracking
NASA Astrophysics Data System (ADS)
Leuthauser, P. R.
1981-12-01
Four alternative filters are compared to an extended Kalman filter (EKF) algorithm for tracking a distributed (elliptical) source target in a closed-loop tracking problem, using outputs from a forward-looking infrared (FLIR) sensor as measurements. These were (1) an EKF with a (second-order) bias correction term, (2) a constant-gain EKF, (3) a constant-gain EKF with a bias correction term, and (4) a statistically linearized filter. Estimates are made of both actual target motion and of apparent motion due to atmospheric jitter. These alternative designs are considered specifically to address some of the significant biases exhibited by an EKF due to initial acquisition difficulties, unmodelled maneuvering by the target, low signal-to-noise ratio, and real-world conditions varying significantly from those assumed in the filter design (robustness). Filter performance was determined with a Monte Carlo study under both ideal and non-ideal conditions for tracking targets on a constant-velocity cross-range path, and during constant-acceleration turns of 5G, 10G, and 20G.
Experimental simulation of impact cratering on icy satellites
NASA Technical Reports Server (NTRS)
Greeley, R.; Fink, J. H.; Gault, D. E.; Guest, J. E.
1982-01-01
Cratering processes on icy satellites were simulated in a series of 102 laboratory impact experiments involving a wide range of target materials. For impacts into homogeneous clay slurries with impact energies ranging from five million to ten billion ergs, target yield strengths ranged from 100 to 38 Pa, and apparent viscosities ranged from 8 to 200 Pa s. Bowl-shaped craters, flat-floored craters, central peak craters with high or little relief, and craters with no relief were observed. Crater diameters increased steadily as energies were raised. A similar sequence was seen for experiments in which impact energy was held constant but target viscosity and strength progressively decreased. The experiments suggest that the physical properties of the target media relative to the gravitationally induced stresses determined the final crater morphology. Crater palimpsests could form by prompt collapse of large central peak craters formed in low-strength target materials. Ages estimated from crater size-frequency distributions that include these large craters may give values that are too high.
Calculation of the Frequency Distribution of the Energy Deposition in DNA Volumes by Heavy Ions
NASA Technical Reports Server (NTRS)
Plante, Ianik; Cucinotta, Francis A.
2012-01-01
Radiation quality effects are largely determined by energy deposition in small volumes of characteristic sizes less than 10 nm, representative of short segments of DNA, the DNA nucleosome, or molecules initiating oxidative stress in the nucleus, mitochondria, or extra-cellular matrix. On this scale, qualitatively distinct types of molecular damage are possible for high linear energy transfer (LET) radiation such as heavy ions compared to low LET radiation. Unique types of DNA lesions or oxidative damages are the likely outcome of the energy deposition. The frequency distribution for energy imparted to 1-20 nm targets per unit dose or particle fluence is a useful descriptor and can be evaluated as a function of impact parameter from an ion's track. In this work, the simulation of 1-Gy irradiation of a 5-micron cubic volume by 1) 450 (1)H(+) ions at 300 MeV; 2) 10 (12)C(6+) ions at 290 MeV/amu; and 3) (56)Fe(26+) ions at 1000 MeV/amu was performed with the Monte-Carlo simulation code RITRACKS. Cylindrical targets are generated in the irradiated volume, with random orientation. The frequency distribution curves of the energy deposited in the targets are obtained. For small targets (i.e. <25 nm size), the probability of an ion hitting a target is very small; therefore a large number of tracks and targets as well as a large number of histories are necessary to obtain statistically significant results. This simulation is very time-consuming and is difficult to perform by using the original version of RITRACKS. Consequently, the code RITRACKS was adapted to use multiple CPUs on a workstation or on a computer cluster. To validate the simulation results, similar calculations were performed using targets with fixed position and orientation, for which experimental data are available [5]. Since the probability of single- and double-strand breaks in DNA as a function of energy deposited is well known, the results obtained can be used to estimate the yield of DSBs, and can be extended to include other targeted or non-targeted effects.
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuemann, J; Grassberger, C; Paganetti, H
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated in this analysis for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal position of the 80% and 20% dose levels (R80 - R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend treatment plan verification using Monte Carlo simulations for patients with complex geometries.
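The distal-range metrics used above (R90, R50, and the R80 - R20 falloff) can be extracted from a depth-dose curve by interpolating on the distal edge; the sketch below applies this to a synthetic sigmoidal falloff, not to the study's fields.

```python
import numpy as np

def distal_range(depth_mm, dose, level):
    """Distal depth at which the dose falls to `level` (fraction of maximum),
    found by linear interpolation on the distal falloff."""
    dose = np.asarray(dose, dtype=float) / np.max(dose)
    i_max = int(np.argmax(dose))
    d, z = dose[i_max:], depth_mm[i_max:]
    idx = np.nonzero(d < level)[0][0]          # first point below the level
    z0, z1, d0, d1 = z[idx - 1], z[idx], d[idx - 1], d[idx]
    return z0 + (d0 - level) * (z1 - z0) / (d0 - d1)

# Synthetic SOBP-like depth-dose: flat plateau with a sigmoidal distal falloff.
depth = np.linspace(0.0, 200.0, 2001)
dose = 1.0 / (1.0 + np.exp((depth - 150.0) / 2.5))

r90 = distal_range(depth, dose, 0.90)
r50 = distal_range(depth, dose, 0.50)
falloff = distal_range(depth, dose, 0.20) - distal_range(depth, dose, 0.80)
print(f"R90 = {r90:.1f} mm, R50 = {r50:.1f} mm, distal falloff R80-R20 = {falloff:.1f} mm")
```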
Wu, Zhijin; Liu, Dongmei; Sui, Yunxia
2008-02-01
The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate or edge effects and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normal distribution of the noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations so true positives can be found through a dose-response curve. Using the activity status defined by the dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and to be useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated under the normality assumption do not agree with actual error rates, because the tails of the noise distribution deviate from a normal distribution. However, the false discovery rate based on an empirically estimated null distribution is very close to the observed false discovery proportion.
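As an illustration of the positional-effect adjustment and robust scoring discussed above, the sketch below combines a median-polish (B-score-style) plate correction with a MAD-based robust z-score. It is not the summary score proposed in the article, and the plate data are synthetic.

```python
import numpy as np

def median_polish(plate, n_iter=10):
    """Remove row and column (positional) effects from a plate of raw readouts
    by iterative median polish; returns the residuals used for hit scoring."""
    resid = np.asarray(plate, dtype=float).copy()
    for _ in range(n_iter):
        resid -= np.median(resid, axis=1, keepdims=True)   # row effects
        resid -= np.median(resid, axis=0, keepdims=True)   # column effects
    return resid

def robust_z(values):
    """Robust z-score: deviations from the median scaled by 1.4826 * MAD,
    less sensitive to heavy-tailed noise than a normal-theory z-score."""
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    return (values - med) / (1.4826 * mad)

# Synthetic 16x24 plate with an edge effect and three true actives.
rng = np.random.default_rng(5)
plate = rng.normal(100.0, 5.0, (16, 24))
plate[0, :] += 15.0                        # edge effect on the first row
plate[3, 5] += 60.0; plate[8, 12] += 55.0; plate[14, 20] += 70.0

scores = robust_z(median_polish(plate).ravel())
print("wells called as hits at |z| > 6:", np.flatnonzero(np.abs(scores) > 6))
```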
Analysis of Regolith Simulant Ejecta Distributions from Normal Incident Hypervelocity Impact
NASA Technical Reports Server (NTRS)
Edwards, David L.; Cooke, William; Suggs, Rob; Moser, Danielle E.
2008-01-01
The National Aeronautics and Space Administration (NASA) has established the Constellation Program. The Constellation Program has defined one of its many goals as long-term lunar habitation. Critical to the design of a lunar habitat is an understanding of the lunar surface environment; of specific importance is the primary meteoroid and subsequent ejecta environment. The document, NASA SP-8013 'Meteoroid Environment Model Near Earth to Lunar Surface', was developed for the Apollo program in 1969 and contains the latest definition of the lunar ejecta environment. There is concern that NASA SP-8013 may over-estimate the lunar ejecta environment. NASA's Meteoroid Environment Office (MEO) has initiated several tasks to improve the accuracy of our understanding of the lunar surface ejecta environment. This paper reports the results of experiments on projectile impact into powdered pumice and unconsolidated JSC-1A Lunar Mare Regolith simulant targets. Projectiles were accelerated to velocities between 2.45 and 5.18 km/s at normal incidence using the Ames Vertical Gun Range (AVGR). The ejected particles were detected by thin aluminum foil targets strategically placed around the impact site, and angular ejecta distributions were determined. Assumptions were made to support the analysis, including spherical symmetry of the ejecta resulting from normal impact and ejecta particles of mean target particle size. This analysis produces a hemispherical flux density distribution of ejecta with sufficient velocity to penetrate the aluminum foil detectors.
Walker, Robin; Benson, Valerie
2015-02-04
We (Walker & Benson, 2013) reported studies in which the spatial effects of distractors on the remote distractor effect (RDE) and saccadic inhibition (SI) were examined. Distractors remote from the target increased mean latency and the skew of the distractor-related distributions, without the presence of dips that are regarded as the hallmark of SI. We further showed that early onset distractors had similar effects although these would not be consistent with existing estimates of the duration of SI (of around 60-70 ms). McIntosh and Buonocore (2014) report a simulation showing that skewed latency distributions can arise from the putative SI mechanism and they also highlighted a number of methodological considerations regarding the RDE and SI as measures of saccadic distractor effects (SDEs). Here we evaluate these claims and note that the measures of SI obtained by subtracting latency distributions (specifically the decrease in saccade frequency--or dip duration) are no more diagnostic of a single inhibitory process, or more sensitive indicators of it, than is median latency. Furthermore the evidence of inhibitory influences of small distractors presented close to the target is incompatible with the explanations of both the RDE and SI. We conclude that saccadic distractor effects may be a more inclusive term to encompass the different characteristics of behavioral effects of underlying saccade target selection. © 2015 ARVO.
2015-09-30
analysis of trends and shifts in characteristics of specific sources contributing to the soundscape over time. The primary sources of interest are baleen... soundscape. Many of the target acoustic signal categories have been well characterized allowing for development of automated spectrogram correlation... to determine the extent and range over which each class of sources contributes to the regional soundscape. Estimates of signal detection range will
Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada
NASA Astrophysics Data System (ADS)
Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.
2015-08-01
Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK), and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experimental results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error and can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ more from the target fluxes than the prior does, and the posterior uncertainty estimates can be unrealistically small and fail to cover the target. The systematic evaluation of the different components of the inversion model can help in the understanding of the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates can be obtained for the western region of AB/SK, but not for the eastern region of ON. This indicates that a real observation-based inversion for annual provincial emissions is likely to work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.
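A minimal sketch of the CFM step described above: with Gaussian observation and prior errors, the quadratic cost has an analytic minimizer whose result depends on the assumed error variances, as noted in the abstract. The footprint matrix, error variances, and "target" scaling factors below are synthetic.

```python
import numpy as np

def cfm_scaling_factors(H, y, lam_prior, sigma_obs, sigma_prior):
    """Cost-function-minimization estimate of sub-regional flux scaling factors:
    minimize the standard Bayesian quadratic cost with diagonal observation and
    prior error covariances; returns the minimizer and posterior covariance."""
    R_inv = np.diag(1.0 / sigma_obs ** 2)
    B_inv = np.diag(1.0 / sigma_prior ** 2)
    post_cov = np.linalg.inv(H.T @ R_inv @ H + B_inv)
    lam = lam_prior + post_cov @ H.T @ R_inv @ (y - H @ lam_prior)
    return lam, post_cov

# Toy setup: 3 sub-regions, 50 synthetic concentration observations.
rng = np.random.default_rng(6)
H = rng.uniform(0.0, 1.0, (50, 3))              # transport (footprint) matrix
lam_true = np.array([1.2, 0.8, 1.5])            # "target" scaling factors
y = H @ lam_true + rng.normal(0, 0.05, 50)      # observations with noise

lam_hat, P = cfm_scaling_factors(
    H, y, lam_prior=np.ones(3), sigma_obs=np.full(50, 0.05), sigma_prior=np.full(3, 0.5)
)
print("estimated scaling factors:", np.round(lam_hat, 3))
```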
NASA Astrophysics Data System (ADS)
Govindarajan, A.; Pineda, J.; Purcell, M.; Tradd, K.; Packard, G.; Girard, A.; Dennett, M.; Breier, J. A., Jr.
2016-02-01
We present a new method to estimate the distribution of invertebrate larvae relative to environmental variables such as temperature, salinity, and circulation. A large volume in situ filtering system developed for discrete biogeochemical sampling in the deep-sea (the Suspended Particulate Rosette "SUPR" multisampler) was mounted to the autonomous underwater vehicle REMUS 600 for coastal larval and environmental sampling. We describe the results of SUPR-REMUS deployments conducted in Buzzards Bay, Massachusetts (2014) and west of Martha's Vineyard, Massachusetts (2015). We collected discrete samples cross-shore and from surface, middle, and bottom layers of the water column. Samples were preserved for DNA analysis. Our Buzzards Bay deployment targeted barnacle larvae, which are abundant in late winter and early spring. For these samples, we used morphological analysis and DNA barcodes generated by Sanger sequencing to obtain stage and species-specific cross-shore and vertical distributions. We targeted bivalve larvae in our 2015 deployments, and genetic analysis of larvae from these samples is underway. For these samples, we are comparing species barcode data derived from traditional Sanger sequencing of individuals to those obtained from next generation sequencing (NGS) of bulk plankton samples. Our results demonstrate the utility of autonomous sampling combined with DNA barcoding for studying larval distributions and transport dynamics.
Distributed resource allocation under communication constraints
NASA Astrophysics Data System (ADS)
Dodin, Pierre; Nimier, Vincent
2001-03-01
This paper deals with a study of the multi-sensor management problem for multi-target tracking. The collaboration between many sensors observing the same target means that they are able to fuse their data during the information process. This possibility must then be taken into account to compute the optimal sensor-target association at each time step. In order to solve this problem for a real large-scale system, one must consider both the information aspect and the control aspect of the problem. To unify these problems, one possibility is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is the filtering algorithm of Grime, which relaxes the usual fully connected hypothesis. By fully connected, one means that the information in a fully connected system is totally distributed everywhere at the same moment, which is unrealistic for a real large-scale system. We model the distributed assignment decision with a greedy algorithm. Each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the fully connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information asymmetry in the system. The assignment algorithm uses local knowledge of this asymmetry. By testing the reactions and coherence of the system's local assignment decisions against maneuvering targets, we show that decentralized assignment control remains feasible even though the system is not fully connected.
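A bare-bones greedy assignment sketch in the spirit of the approach above; the utility matrix is random, and the real algorithm's criterion (built from each sensor's estimate of the other sensors' information sets) is not modeled here.

```python
import numpy as np

def greedy_assignment(utility):
    """Greedy sensor-to-target assignment: repeatedly pick the remaining
    sensor-target pair with the largest utility (e.g., expected information
    gain from the local tracker), one target per sensor."""
    utility = np.asarray(utility, dtype=float).copy()
    n_sensors, n_targets = utility.shape
    assignment = {}
    for _ in range(min(n_sensors, n_targets)):
        s, t = np.unravel_index(np.nanargmax(utility), utility.shape)
        assignment[s] = t
        utility[s, :] = np.nan          # sensor s is now busy
        utility[:, t] = np.nan          # target t is now covered
    return assignment

# Example: 4 sensors, 3 targets; each sensor scores each target locally.
rng = np.random.default_rng(7)
print(greedy_assignment(rng.uniform(0.0, 1.0, (4, 3))))
```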
RF tomography of metallic objects in free space: preliminary results
NASA Astrophysics Data System (ADS)
Li, Jia; Ewing, Robert L.; Berdanier, Charles; Baker, Christopher
2015-05-01
RF tomography has great potential in defense and homeland security applications. A distributed sensing research facility is under development at the Air Force Research Laboratory. To develop an RF tomographic imaging system for the facility, preliminary experiments have been performed in an indoor range with 12 radar sensors distributed on a circle of 3 m radius. Ultra-wideband pulses are used to illuminate single and multiple metallic targets. The echoes received by the distributed sensors were processed and combined for tomography reconstruction. The traditional matched-filter algorithm and the truncated singular value decomposition (SVD) algorithm are compared in terms of their complexity, accuracy, and suitability for distributed processing. A new algorithm is proposed for shape reconstruction, which jointly estimates the object boundary and scatter points on the waveform's propagation path. The results show that the new algorithm allows accurate reconstruction of object shape, which is not achievable with the matched-filter and truncated SVD algorithms.
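For reference, a truncated-SVD reconstruction of a linearized imaging problem y = A x + n simply discards the small singular values that amplify noise; the forward matrix and scene below are synthetic stand-ins, not the 12-sensor geometry used in the experiments.

```python
import numpy as np

def truncated_svd_reconstruction(A, y, rank):
    """Regularized image estimate x from measurements y = A x + n using a
    truncated SVD pseudo-inverse: small singular values are discarded."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]
    return Vt.T @ (s_inv * (U.T @ y))

# Toy example: 120 measurements of a 100-pixel scene with two point scatterers.
rng = np.random.default_rng(8)
A = rng.normal(size=(120, 100))
x_true = np.zeros(100); x_true[[30, 71]] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=120)

x_hat = truncated_svd_reconstruction(A, y, rank=90)
print("strongest reconstructed pixels:", np.argsort(np.abs(x_hat))[-2:])
```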
Reconciling Top-Down and Bottom-Up Estimates of Oil and Gas Methane Emissions in the Barnett Shale
NASA Astrophysics Data System (ADS)
Hamburg, S.
2015-12-01
Top-down approaches that use aircraft, tower, or satellite-based measurements of well-mixed air to quantify regional methane emissions have typically estimated higher emissions from the natural gas supply chain when compared to bottom-up inventories. A coordinated research campaign in October 2013 used simultaneous top-down and bottom-up approaches to quantify total and fossil methane emissions in the Barnett Shale region of Texas. Research teams have published individual results including aircraft mass-balance estimates of regional emissions and a bottom-up, 25-county region spatially-resolved inventory. This work synthesizes data from the campaign to directly compare top-down and bottom-up estimates. A new analytical approach uses statistical estimators to integrate facility emission rate distributions from unbiased and targeted high emission site datasets, which more rigorously incorporates the fat-tail of skewed distributions to estimate regional emissions of well pads, compressor stations, and processing plants. The updated spatially-resolved inventory was used to estimate total and fossil methane emissions from spatial domains that match seven individual aircraft mass balance flights. Source apportionment of top-down emissions between fossil and biogenic methane was corroborated with two independent analyses of methane and ethane ratios. Reconciling top-down and bottom-up estimates of fossil methane emissions leads to more accurate assessment of natural gas supply chain emission rates and the relative contribution of high emission sites. These results increase our confidence in our understanding of the climate impacts of natural gas relative to more carbon-intensive fossil fuels and the potential effectiveness of mitigation strategies.
NASA Astrophysics Data System (ADS)
Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.
1995-06-01
A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
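A univariate sketch of the Box-Cox step described above (the paper treats the joint multivariate case): Gaussianize skewed background data, then score new samples with a standardized-distance statistic in the transformed domain. The lognormal background and the example pixel value are invented.

```python
import numpy as np
from scipy import stats

# Gaussianize a positively skewed band (e.g., simulated clutter radiance) with a
# Box-Cox power-law transform, then score pixels with a Gaussian anomaly statistic
# in the transformed domain.
rng = np.random.default_rng(9)
clutter = rng.lognormal(mean=1.0, sigma=0.6, size=5000)     # non-Gaussian background

transformed, lam = stats.boxcox(clutter)                     # lambda estimated by MLE
mu, sigma = transformed.mean(), transformed.std(ddof=1)

def anomaly_score(pixel_value):
    """Squared standardized distance of a new pixel after the same transform."""
    t = stats.boxcox(np.atleast_1d(pixel_value), lmbda=lam)
    return ((t - mu) / sigma) ** 2

print(f"estimated Box-Cox lambda = {lam:.2f}")
print("score of a bright target-like pixel:", anomaly_score(25.0))
```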
Momentum Flux Determination Using the Multi-beam Poker Flat Incoherent Scatter Radar
NASA Technical Reports Server (NTRS)
Nicolls, M. J.; Fritts, D. C.; Janches, Diego; Heinselman, C. J.
2012-01-01
In this paper, we develop an estimator for the vertical flux of horizontal momentum with arbitrary beam pointing, applicable to the case of arbitrary but fixed beam pointing with systems such as the Poker Flat Incoherent Scatter Radar (PFISR). This method uses information from all available beams to resolve the variances of the wind field in addition to the vertical flux of both meridional and zonal momentum, targeted for high-frequency wave motions. The estimator utilises the full covariance of the distributed measurements, which provides a significant reduction in errors over the direct extension of previously developed techniques and allows for the calculation of an error covariance matrix of the estimated quantities. We find that for the PFISR experiment, we can construct an unbiased and robust estimator of the momentum flux if sufficient and proper beam orientations are chosen, which can in the future be optimized for the expected frequency distribution of momentum-containing scales. However, there is a potential trade-off between biases and standard errors introduced with the new approach, which must be taken into account when assessing the momentum fluxes. We apply the estimator to PFISR measurements on 23 April 2008 and 21 December 2007, from 60-85 km altitude, and show expected results as compared to mean winds and in relation to the measured vertical velocity variances.
NASA Astrophysics Data System (ADS)
Fuente, David; Gakii Gatua, Josephine; Ikiara, Moses; Kabubo-Mariara, Jane; Mwaura, Mbutu; Whittington, Dale
2016-06-01
The increasing block tariff (IBT) is among the most widely used tariffs by water utilities, particularly in developing countries. This is due in part to the perception that the IBT can effectively target subsidies to low-income households. Combining data on households' socioeconomic status and metered water use, this paper examines the distributional incidence of subsidies delivered through the IBT in Nairobi, Kenya. Contrary to conventional wisdom, we find that high-income residential and nonresidential customers receive a disproportionate share of subsidies and that subsidy targeting is poor even among households with a private metered connection. We also find that stated expenditure on water, a commonly used means of estimating water use, is a poor proxy for metered use and that previous studies on subsidy incidence underestimate the magnitude of the subsidy delivered through water tariffs. These findings have implications for both the design and evaluation of water tariffs in developing countries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karabeshkin, K. V., E-mail: yanikolaus@yandex.ru; Karaseov, P. A.; Titov, A. I.
2016-08-15
The depth distributions of structural damage induced in Si at room temperature by the implantation of P and PF{sub 4} with energies from 0.6 to 3.2 keV/amu are experimentally studied in a wide range of doses. It is found that, in all cases, the implantation of molecular PF{sub 4} ions forms practically single-mode defect distributions, with a maximum at the target surface. This effect is caused by an increase in the generation of primary defects at the surface of the target. Individual cascades formed by the atoms comprising the molecule effectively overlap in the surface vicinity; this overlap gives rise to nonlinear processes in combined cascades due to a high density of displacements in such cascades. A quantitative estimate of the increase in the effectiveness of point-defect generation by PF{sub 4} ions with respect to P ions is made on the basis of experimental data.
Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.
2015-01-01
While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multi-variable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. Our analysis demonstrates that sophisticated, optimal semiparametric estimation of longitudinal treatment-specific means via ltmle provides an incredibly powerful, yet easy-to-use tool, removing impediments for putting theory into practice. PMID:26046009
A Comparison of Agent-Based Models and the Parametric G-Formula for Causal Inference.
Murray, Eleanor J; Robins, James M; Seage, George R; Freedberg, Kenneth A; Hernán, Miguel A
2017-07-15
Decision-making requires choosing among treatments on the basis of correctly estimated outcome distributions under each treatment. In the absence of randomized trials, 2 possible approaches are the parametric g-formula and agent-based models (ABMs). The g-formula has been used exclusively to estimate effects in the population from which data were collected, whereas ABMs are commonly used to estimate effects in multiple populations, necessitating stronger assumptions. Here, we describe potential biases that arise when ABM assumptions do not hold. To do so, we estimated 12-month mortality risk in simulated populations differing in prevalence of an unknown common cause of mortality and a time-varying confounder. The ABM and g-formula correctly estimated mortality and causal effects when all inputs were from the target population. However, whenever any inputs came from another population, the ABM gave biased estimates of mortality, and often of causal effects, even when the true effect was null. In the absence of unmeasured confounding and model misspecification, both methods produce valid causal inferences for a given population when all inputs are from that population. However, ABMs may result in bias when extrapolated to populations that differ on the distribution of unmeasured outcome determinants, even when the causal network linking variables is identical. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
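A minimal sketch of the simulation step of a parametric g-formula, of the kind compared above: 12-month mortality risk is simulated forward under a sustained treatment strategy versus the natural course, with a baseline covariate and a time-varying confounder. All model coefficients are invented for illustration; in practice these models are estimated from the study data.

```python
import numpy as np

rng = np.random.default_rng(10)

def simulate_risk(treat_all, n=200_000):
    """Monte Carlo step of a parametric g-formula under a hypothetical
    data-generating model (coefficients invented for illustration)."""
    L = rng.binomial(1, 0.3, n)                               # baseline covariate
    alive = np.ones(n, dtype=bool)
    dead = np.zeros(n, dtype=bool)
    Z = rng.binomial(1, 0.2 + 0.3 * L)                        # time-varying confounder
    for _ in range(12):                                       # 12 monthly intervals
        A = np.ones(n, dtype=int) if treat_all else rng.binomial(1, 0.2 + 0.5 * Z)
        p_death = 1.0 / (1.0 + np.exp(-(-4.0 + 1.0 * Z + 0.5 * L - 0.8 * A)))
        new_deaths = alive & (rng.uniform(size=n) < p_death)
        dead |= new_deaths
        alive &= ~new_deaths
        Z = rng.binomial(1, np.clip(0.2 + 0.3 * L + 0.2 * Z - 0.1 * A, 0.0, 1.0))
    return dead.mean()

risk_treated, risk_natural = simulate_risk(True), simulate_risk(False)
print(f"12-month risk: treat-all {risk_treated:.3f} vs natural course {risk_natural:.3f}")
```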
An analytical framework for estimating aquatic species density from environmental DNA
Chambert, Thierry; Pilliod, David S.; Goldberg, Caren S.; Doi, Hideyuki; Takahara, Teruhiko
2018-01-01
Environmental DNA (eDNA) analysis of water samples is on the brink of becoming a standard monitoring method for aquatic species. This method has improved detection rates over conventional survey methods and thus has demonstrated effectiveness for estimation of site occupancy and species distribution. The frontier of eDNA applications, however, is to infer species density. Building upon previous studies, we present and assess a modeling approach that aims at inferring animal density from eDNA. The modeling combines eDNA and animal count data from a subset of sites to estimate species density (and associated uncertainties) at other sites where only eDNA data are available. As a proof of concept, we first perform a cross-validation study using experimental data on carp in mesocosms. In these data, fish densities are known without error, which allows us to test the performance of the method with known data. We then evaluate the model using field data from a study on a stream salamander species to assess the potential of this method to work in natural settings, where density can never be known with absolute certainty. Two alternative distributions (Normal and Negative Binomial) to model variability in eDNA concentration data are assessed. Assessment based on the proof of concept data (carp) revealed that the Negative Binomial model provided much more accurate estimates than the model based on a Normal distribution, likely because eDNA data tend to be overdispersed. Greater imprecision was found when we applied the method to the field data, but the Negative Binomial model still provided useful density estimates. We call for further model development in this direction, as well as further research targeted at sampling design optimization. It will be important to assess these approaches on a broad range of study systems.
Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne
2012-01-01
AIMS To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration–time curve (AUC) targeted dosage and individualize therapy. METHODS The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation–estimation method. RESULTS The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h⁻¹ (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h⁻¹ (RSE 16.9%) and absorption rate constant 0.758 h⁻¹ (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake, allowed prediction of individual AUC(0–t). CONCLUSIONS The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC(0–t) was developed from the final model and can be used routinely to optimize individual dosing. PMID:21988586
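The covariate model quoted in the abstract is a simple allometric power function, which can be evaluated directly. A minimal sketch follows; the example body weights are hypothetical, not from the study.

```python
# Sketch of the reported covariate model for apparent oral clearance:
# CL (l/h) = 13.4 * (weight / 12)**1.14; the example weights are illustrative.
def abacavir_clearance(weight_kg: float) -> float:
    """Typical apparent clearance (l/h) for a child of the given body weight."""
    return 13.4 * (weight_kg / 12.0) ** 1.14

for w in (6.0, 12.0, 18.0):   # hypothetical body weights in kg
    print(f"{w:>4.0f} kg -> CL ≈ {abacavir_clearance(w):.1f} l/h")
```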
Mori, Ryosuke; Matsuya, Yusuke; Yoshii, Yuji; Date, Hiroyuki
2018-01-01
Abstract DNA double-strand breaks (DSBs) are thought to be the main cause of cell death after irradiation. In this study, we estimated the probability distribution of the number of DSBs per cell nucleus by considering the DNA amount in a cell nucleus (which depends on the cell cycle) and the statistical variation in the energy imparted to the cell nucleus by X-ray irradiation. The probability estimation of DSB induction was made following these procedures: (i) making use of the Chinese Hamster Ovary (CHO)-K1 cell line as the target example, the amounts of DNA per nucleus in the logarithmic and the plateau phases of the growth curve were measured by flow cytometry with propidium iodide (PI) dyeing; (ii) the probability distribution of the DSB number per cell nucleus for each phase after irradiation with 1.0 Gy of 200 kVp X-rays was measured by means of γ-H2AX immunofluorescent staining; (iii) the distribution of the cell-specific energy deposition via secondary electrons produced by the incident X-rays was calculated by WLTrack (in-house Monte Carlo code); (iv) according to a mathematical model for estimating the DSB number per nucleus, we deduced the induction probability density of DSBs based on the measured DNA amount (depending on the cell cycle) and the calculated dose per nucleus. The model exhibited DSB induction probabilities in good agreement with the experimental results for the two phases, suggesting that the DNA amount (depending on the cell cycle) and the statistical variation in the local energy deposition are essential for estimating the DSB induction probability after X-ray exposure. PMID:29800455
Mori, Ryosuke; Matsuya, Yusuke; Yoshii, Yuji; Date, Hiroyuki
2018-05-01
DNA double-strand breaks (DSBs) are thought to be the main cause of cell death after irradiation. In this study, we estimated the probability distribution of the number of DSBs per cell nucleus by considering the DNA amount in a cell nucleus (which depends on the cell cycle) and the statistical variation in the energy imparted to the cell nucleus by X-ray irradiation. The probability estimation of DSB induction was made following these procedures: (i) making use of the Chinese Hamster Ovary (CHO)-K1 cell line as the target example, the amounts of DNA per nucleus in the logarithmic and the plateau phases of the growth curve were measured by flow cytometry with propidium iodide (PI) dyeing; (ii) the probability distribution of the DSB number per cell nucleus for each phase after irradiation with 1.0 Gy of 200 kVp X-rays was measured by means of γ-H2AX immunofluorescent staining; (iii) the distribution of the cell-specific energy deposition via secondary electrons produced by the incident X-rays was calculated by WLTrack (in-house Monte Carlo code); (iv) according to a mathematical model for estimating the DSB number per nucleus, we deduced the induction probability density of DSBs based on the measured DNA amount (depending on the cell cycle) and the calculated dose per nucleus. The model exhibited DSB induction probabilities in good agreement with the experimental results for the two phases, suggesting that the DNA amount (depending on the cell cycle) and the statistical variation in the local energy deposition are essential for estimating the DSB induction probability after X-ray exposure.
NASA Astrophysics Data System (ADS)
Salthammer, Tunga; Schripp, Tobias
2015-04-01
In the indoor environment, the distribution and dynamics of an organic compound between the gas phase, particle phase and settled dust must be known for estimating human exposure. This, however, requires a detailed understanding of the environmentally important compound parameters, their interrelation and of the algorithms for calculating partitioning coefficients. The parameters of major concern are: (I) saturation vapor pressure (PS) (of the subcooled liquid); (II) Henry's law constant (H); (III) octanol/water partition coefficient (KOW); (IV) octanol/air partition coefficient (KOA); (V) air/water partition coefficient (KAW) and (VI) settled dust properties such as density and organic content. For most of the relevant compounds, reliable experimental data are not available, and calculated gas/particle distributions can differ widely owing to the uncertainty in predicted PS and KOA values. This is not a major problem if the target compound is of low (<10⁻⁶ Pa) or high (>10⁻² Pa) volatility, but in the intermediate region even small changes in PS or KOA will have a strong impact on the result. Moreover, the related physical processes might bear large uncertainties. The KOA value can only be used for particle absorption from the gas phase if the organic portion of the particle or dust is high. The Junge and Pankow equations for calculating the gas/particle distribution coefficient KP do not consider the physical and chemical properties of the particle surface area. It is demonstrated by error propagation theory and Monte Carlo simulations that parameter uncertainties from estimation methods for molecular properties and variations in indoor conditions can strongly influence the calculated distribution behavior of compounds in the indoor environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng Chuan; Giantsoudi, Drosoula; Grassberger, Clemens
2013-05-15
Purpose: Biological effect of radiation can be enhanced with hypofractionation, localized dose escalation, and, in particle therapy, with optimized distribution of linear energy transfer (LET). The authors describe a method to construct inhomogeneous fractional dose (IFD) distributions, and evaluate the potential gain in the therapeutic effect from their delivery in proton therapy delivered by pencil beam scanning. Methods: For 13 cases of prostate cancer, the authors considered hypofractionated courses of 60 Gy delivered in 20 fractions. (All doses denoted in Gy include the proton's mean relative biological effectiveness (RBE) of 1.1.) Two types of plans were optimized using two opposed lateral beams to deliver a uniform dose of 3 Gy per fraction to the target by scanning: (1) in conventional full-target plans (FTP), each beam irradiated the entire gland, (2) in split-target plans (STP), beams irradiated only the respective proximal hemispheres (prostate split sagittally). Inverse planning yielded intensity maps, in which discrete position control points of the scanned beam (spots) were assigned optimized intensity values. FTP plans preferentially required a higher intensity of spots in the distal part of the target, while STP, by design, employed proximal spots. To evaluate the utility of IFD delivery, IFD plans were generated by rearranging the spot intensities from FTP or STP intensity maps, separately as well as combined using a variety of mixing weights. IFD courses were designed so that, in alternating fractions, one of the hemispheres of the prostate would receive a dose boost and the other receive a lower dose, while the total physical dose from the IFD course was roughly uniform across the prostate. IFD plans were normalized so that the equivalent uniform dose (EUD) of rectum and bladder did not increase, compared to the baseline FTP plan, which irradiated the prostate uniformly in every fraction. An EUD-based model was then applied to estimate tumor control probability (TCP) and normal tissue complication probability (NTCP). To assess potential local RBE variations, LET distributions were calculated with Monte Carlo, and compared for different plans. The results were assessed in terms of their sensitivity to uncertainties in model parameters and delivery. Results: IFD courses included an equal number of fractions boosting either hemisphere; thus, the combined physical dose was close to uniform throughout the prostate. However, for the entire course, the prostate EUD in IFD was higher than in conventional FTP by up to 14%, corresponding to the estimated increase in TCP to 96% from 88%. The extent of gain depended on the mixing factor, i.e., the relative weights used to combine FTP and STP spot weights. Increased weighting of STP typically yielded a higher target EUD, but also led to increased sensitivity of dose to variations in the proton's range. Rectal and bladder EUD were the same or lower (per normalization), and the NTCP for both remained below 1%. The LET distributions in IFD also depended strongly on the mixing weights: plans using a higher weight of STP spots yielded higher LET, indicating a potentially higher local RBE. Conclusions: In proton therapy delivered by pencil beam scanning, improved therapeutic outcome can potentially be expected with delivery of IFD distributions, while administering the prescribed quasi-uniform dose to the target over the entire course. The biological effectiveness of IFD may be further enhanced by optimizing the LET distributions. IFD distributions are characterized by a dose gradient located in the proximity of the prostate's midplane; thus, the fidelity of delivery would depend crucially on the precision with which the proton range could be controlled.
Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne
2012-04-01
Abacavir is used to treat HIV infection in both adults and children. The recommended paediatric dose is 8 mg kg⁻¹ twice daily up to a maximum of 300 mg twice daily. Weight was identified as the central covariate influencing pharmacokinetics of abacavir in children. A population pharmacokinetic model was developed to describe both once and twice daily pharmacokinetic profiles of abacavir in infants and toddlers. The standard dosage regimen is associated with large interindividual variability in abacavir concentrations. A maximum a posteriori probability Bayesian estimator of AUC(0–t) based on three time points (0, 1 or 2, and 3 h) is proposed to support area under the concentration-time curve (AUC) targeted individualized therapy in infants and toddlers. To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration-time curve (AUC) targeted dosage and individualize therapy. The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation-estimation method. The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h⁻¹ (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h⁻¹ (RSE 16.9%) and absorption rate constant 0.758 h⁻¹ (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake, allowed prediction of individual AUC(0–t). The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC(0–t) was developed from the final model and can be used routinely to optimize individual dosing. © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.
Improved Range Estimation Model for Three-Dimensional (3D) Range Gated Reconstruction
Chua, Sing Yee; Guo, Ningqun; Tan, Ching Seong; Wang, Xin
2017-01-01
Accuracy is an important measure of system performance and remains a challenge in 3D range gated reconstruction despite advances in laser and sensor technology. The weighted average model that is commonly used for range estimation is heavily influenced by intensity variation due to various factors. Accuracy improvement in terms of range estimation is therefore important to fully optimise the system performance. In this paper, a 3D range gated reconstruction model is derived based on the operating principles of range gated imaging and time slicing reconstruction, the fundamentals of radiant energy, Laser Detection And Ranging (LADAR), and the Bidirectional Reflection Distribution Function (BRDF). Accordingly, a new range estimation model is proposed to alleviate the effects induced by distance, target reflection, and range distortion. From the experimental results, the proposed model outperforms the conventional weighted average model to improve the range estimation for better 3D reconstruction. The outcome demonstrated is of interest to various laser ranging applications and can serve as a reference for future work. PMID:28872589
Niemi, R M; Heikkilä, M P; Lahti, K; Kalso, S; Niemelä, S I
2001-06-01
Enumeration of coliform bacteria and Escherichia coli is the most widely used method for estimating the hygienic quality of drinking water. The yield of target bacteria and the species composition of different populations of coliform bacteria may depend on the method. Three membrane filtration methods were compared for the enumeration of coliform bacteria in shallow well waters. The yield of confirmed coliform bacteria was highest on Differential Coliform agar, followed by LES Endo agar. Differential Coliform agar had the highest proportion of typical colonies, of which 74% were confirmed as belonging to the Enterobacteriaceae. Of the typical colonies on Lactose Tergitol 7 TTC agar, 75% were confirmed as Enterobacteriaceae, whereas 92% of typical colonies on LES Endo agar belonged to the Enterobacteriaceae. LES Endo agar yielded many Serratia strains, Lactose Tergitol 7 TTC agar yielded numerous strains of Rahnella aquatilis and Enterobacter, whereas Differential Coliform agar yielded the widest range of species. The yield of coliform bacteria varied between methods. Each method compared had a characteristic species distribution of target bacteria and a typical level of interference from non-target bacteria. Identification to distinct species with routine physiological tests was hampered by the slight differences between species. High yield and sufficient selectivity are difficult to achieve simultaneously, especially if the target group is diverse. The results showed that several aspects of method performance should be considered, and that the target group must be distinctly defined to enable method comparisons.
Relevance of cosmic gamma rays to the mass of gas in the galaxy
NASA Technical Reports Server (NTRS)
Bhat, C. L.; Mayer, C. J.; Wolfendale, A. W.
1985-01-01
The bulk of the diffuse gamma-ray flux comes from cosmic ray interactions in the interstellar medium. A knowledge of the large scale spatial distribution of the Galactic gamma-rays and the cosmic rays enables the distribution of the target gas to be examined. An approach of this type is used here to estimate the total mass of the molecular gas in the Galaxy. It is shown to be much less than that previously derived, viz., approximately 6 × 10⁸ solar masses within the solar radius, as against approximately 3 × 10⁹ solar masses based on 2.6 mm CO measurements.
Whitmore, Roy W; Chen, Wenlin
2013-12-04
The ability to infer human exposure to substances from drinking water using monitoring data helps determine and/or refine potential risks associated with drinking water consumption. We describe a survey sampling approach and its application to an atrazine groundwater monitoring study to adequately characterize upper exposure centiles and associated confidence intervals with predetermined precision. Study design and data analysis included sampling frame definition, sample stratification, sample size determination, allocation to strata, analysis weights, and weighted population estimates. The sampling frame encompassed 15,840 groundwater community water systems (CWS) in 21 states throughout the U.S. The median and 95th percentile atrazine concentrations were 0.0022 and 0.024 ppb, respectively, for all CWS. Statistical estimates agreed with historical monitoring results, suggesting that the study design was adequate and robust. This methodology makes no assumptions regarding the occurrence distribution (e.g., lognormality); thus analyses based on the design-induced distribution provide the most robust basis for making inferences from the sample to the target population.
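A minimal sketch of the design-weighted percentile estimates this kind of survey analysis produces, assuming stratified sampling weights are available per system; the concentrations and weights below are mock values, not the study data.

```python
# Hedged sketch: design-weighted median and 95th percentile of concentrations,
# computed by accumulating normalized survey weights over the sorted values.
import numpy as np

conc = np.array([0.001, 0.002, 0.003, 0.0005, 0.05, 0.01, 0.004, 0.002])  # ppb (mock)
w = np.array([120.0, 80.0, 200.0, 150.0, 40.0, 60.0, 90.0, 110.0])        # survey weights (mock)

def weighted_percentile(x, weights, q):
    order = np.argsort(x)
    cum = np.cumsum(weights[order]) / weights.sum()
    return x[order][np.searchsorted(cum, q / 100.0)]

print("weighted median :", weighted_percentile(conc, w, 50))
print("weighted P95    :", weighted_percentile(conc, w, 95))
```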
Recovering Galaxy Properties Using Gaussian Process SED Fitting
NASA Astrophysics Data System (ADS)
Iyer, Kartheik; Awan, Humna
2018-01-01
Information about physical quantities such as stellar masses, star formation rates, and ages of distant galaxies is contained in their spectral energy distributions (SEDs), obtained through photometric surveys such as SDSS, CANDELS, and LSST. However, noise in the photometric observations is often a problem, and using naive machine learning methods to estimate physical quantities can result in overfitting the noise, or converging on solutions that lie outside the physical regime of parameter space. We use Gaussian Process regression trained on a sample of SEDs corresponding to galaxies from a Semi-Analytic model (Somerville+15a) to estimate their stellar masses, and compare its performance to a variety of different methods, including simple linear regression, Random Forests, and k-Nearest Neighbours. We find that the Gaussian Process method is robust to noise and predicts not only stellar masses but also their uncertainties. The method is also robust in cases where the distribution of the training data is not identical to the target data, which can be extremely useful when generalized to more subtle galaxy properties.
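A minimal sketch of the core idea, Gaussian Process regression from photometry to log stellar mass with per-object uncertainties; the five-band "photometry" and the mass relation used to build the training set are purely illustrative, not the semi-analytic catalog used in the work.

```python
# Hedged sketch: GP regression on mock photometry, returning prediction +/- sigma.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
n_train = 500
mags = rng.uniform(18, 25, size=(n_train, 5))                 # mock 5-band magnitudes
log_mass = 11.5 - 0.3 * mags.mean(axis=1) + rng.normal(0, 0.1, n_train)  # toy relation

kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.05)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(mags, log_mass)

# "Survey" objects with extra photometric noise, mimicking a shifted target sample.
test = rng.uniform(18, 25, size=(3, 5)) + rng.normal(0, 0.2, (3, 5))
pred, sigma = gpr.predict(test, return_std=True)
for m, s in zip(pred, sigma):
    print(f"log10(M*) ≈ {m:.2f} ± {s:.2f}")
```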
Gillebert, Celine R; Petersen, Anders; Van Meel, Chayenne; Müller, Tanja; McIntyre, Alexandra; Wagemans, Johan; Humphreys, Glyn W
2016-06-01
Previous studies have shown that the perceptual organization of the visual scene constrains the deployment of attention. Here we investigated how the organization of multiple elements into larger configurations alters their attentional weight, depending on the "pertinence" or behavioral importance of the elements' features. We assessed object-based effects on distinct aspects of the attentional priority map: top-down control, reflecting the tendency to encode targets rather than distracters, and the spatial distribution of attention weights across the visual scene, reflecting the tendency to report elements belonging to the same rather than different objects. In 2 experiments participants had to report the letters in briefly presented displays containing 8 letters and digits, in which pairs of characters could be connected with a line. Quantitative estimates of top-down control were obtained using Bundesen's Theory of Visual Attention (1990). The spatial distribution of attention weights was assessed using the "paired response index" (PRI), indicating responses for within-object pairs of letters. In Experiment 1, grouping along the task-relevant dimension (targets with targets and distracters with distracters) increased top-down control and enhanced the PRI; in contrast, task-irrelevant grouping (targets with distracters) did not affect performance. In Experiment 2, we disentangled the effect of target-target and distracter-distracter grouping: Pairwise grouping of distracters enhanced top-down control whereas pairwise grouping of targets changed the PRI. We conclude that object-based perceptual representations interact with pertinence values (of the elements' features and location) in the computation of attention weights, thereby creating a widespread pattern of attentional facilitation across the visual scene. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
TU-AB-202-03: Prediction of PET Transfer Uncertainty by DIR Error Estimating Software, AUTODIRECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J; Phillips, J
2016-06-15
Purpose: Deformable image registration (DIR) is a powerful tool, but DIR errors can adversely affect its clinical applications. To estimate voxel-specific DIR uncertainty, a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), has been developed and validated. This work tests the ability of this software to predict uncertainty for the transfer of standard uptake values (SUV) from positron-emission tomography (PET) with DIR. Methods: Virtual phantoms are used for this study. Each phantom has a planning computed tomography (CT) image and a diagnostic PET-CT image set. A deformation was digitally applied to the diagnostic CT to create the planning CT image and establish a known deformation between the images. One lung and three rectum patient datasets were employed to create the virtual phantoms. Both of these sites have difficult deformation scenarios associated with them, which can affect DIR accuracy (lung tissue sliding and changes in rectal filling). The virtual phantoms were created to simulate these scenarios by introducing discontinuities in the deformation field at the lung and rectal borders. The DIR algorithm from the Plastimatch software was applied to these phantoms. The SUV mapping errors from the DIR were then compared to those predicted by AUTODIRECT. Results: The SUV error distributions closely followed the AUTODIRECT-predicted error distribution for the 4 test cases. The minimum and maximum PET SUVs were produced from AUTODIRECT at the 95% confidence interval before applying gradient-based SUV segmentation for each of these volumes. Notably, 93.5% of the target volume warped by the true deformation was included within the AUTODIRECT-predicted maximum SUV volume after the segmentation, while 78.9% of the target volume was within the target volume warped by Plastimatch. Conclusion: The AUTODIRECT framework is able to predict PET transfer uncertainty caused by DIR, which enables an understanding of the associated target volume uncertainty.
Starting Conditions for Hydrothermal Systems Underneath Martian Craters: Hydrocode Modeling
NASA Technical Reports Server (NTRS)
Pierazzo, E.; Artemieva, N. A.; Ivanov, B. A.
2004-01-01
Mars is the most Earth-like of the Solar System's planets, and the first place to look for any sign of present or past extraterrestrial life. Its surface shows many features indicative of the presence of surface and sub-surface water, while impact cratering and volcanism have provided temporary and local surface heat sources throughout Mars' geologic history. Impact craters are widely used indicators of the presence of sub-surface water or ice on Mars. In particular, the presence of significant amounts of ground ice or water would cause impact-induced hydrothermal alteration at Martian impact sites. The realization that hydrothermal systems are possible sites for the origin and early evolution of life on Earth has given rise to the hypothesis that hydrothermal systems may have had the same role on Mars. Rough estimates of the heat generated in impact events have been based on scaling relations, or on thermal data from terrestrial impacts on crystalline basements. Preliminary studies also suggest that melt sheets and target uplift are equally important heat sources for the development of a hydrothermal system, while its lifetime depends on the volume and cooling rate of the heat source, as well as the permeability of the host rocks. We present initial results of two-dimensional (2D) and three-dimensional (3D) simulations of impacts on Mars aimed at constraining the initial conditions for modeling the onset and evolution of a hydrothermal system on the red planet. Simulations of the early stages of impact cratering provide an estimate of the amount of shock melting and the pressure-temperature distribution in the target caused by various impacts on the Martian surface. Modeling of the late stage of crater collapse is necessary to characterize the final thermal state of the target, including crater uplift, and the distribution of the heated target material (including the melt pool) and hot ejecta around the crater.
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
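A minimal numerical illustration of the square-root rule stated above, under a simple discretized model of "getting very close": the searcher finds the target only by sampling the same bin. In that case the expected number of trials is the sum of f_i/g_i, which is minimized by g proportional to the square root of f. The grid and target distribution are illustrative choices, not taken from the paper.

```python
# Hedged sketch: expected trials to find a hidden target under different search
# distributions, on a discretized Gaussian target distribution.
import numpy as np

x = np.linspace(-5, 5, 201)
f = np.exp(-x**2 / 2)                 # Gaussian target distribution (unnormalised)
f /= f.sum()

def expected_trials(g):
    g = g / g.sum()                   # normalise the search distribution
    return np.sum(f / g)              # sum_i f_i / g_i (geometric waiting times)

print("search with f itself:", round(expected_trials(f), 1))
print("search with sqrt(f) :", round(expected_trials(np.sqrt(f)), 1))
print("search with uniform :", round(expected_trials(np.ones_like(f)), 1))
# sqrt(f) gives the smallest value; note that, under this bin-matching model,
# sampling from f itself is no better than sampling uniformly.
```

For the Gaussian case, sqrt(f) is itself a Gaussian wider than f by a factor of sqrt(2), consistent with the broadened optimum described in the abstract.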
Zhu, Yue-Shan; Yang, Wan-Dong; Li, Xiu-Wen; Ni, Hong-Gang; Zeng, Hui
2018-02-01
The quality of indoor environments has a significant impact on public health. Usually, an indoor environment is treated as a static box, in which physicochemical reactions of indoor air contaminants are negligible. This results in conservative estimates for primary indoor air pollutant concentrations, while also ignoring secondary pollutants. Thus, understanding the relationship between indoor and outdoor particles and particle-bound pollutants is of great significance. For this reason, we collected simultaneous indoor and outdoor measurements of the size distribution of airborne brominated flame retardant (BFR) congeners. The time-dependent concentrations of indoor particles and particle-bound BFRs were then estimated with the mass balance model, accounting for the outdoor concentration, indoor source strength, infiltration, penetration, deposition and indoor resuspension. Based on qualitative observation, the size distributions of ΣPBDE and ΣHBCD were characterized by bimodal peaks. According to our results, particle-bound BDE209 and γ-HBCD underwent degradation. Regardless of the surface adsorption capability of particles and the physicochemical properties of the target compounds, the concentration of BFRs in particles of different size fractions seemed to be governed by the particle distribution. Based on our estimations, for airborne particles and particle-bound BFRs, a window-open ventilated room only takes a quarter of the time to reach an equilibrium between the concentration of pollutants inside and outside compared to a closed room. Unfortunately, indoor pollutants and outdoor pollutants always exist simultaneously, which poses a window-open-or-closed dilemma to achieve proper ventilation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Costs and effects of the Tanzanian national voucher scheme for insecticide-treated nets
Mulligan, Jo-Ann; Yukich, Joshua; Hanson, Kara
2008-01-01
Background The cost-effectiveness of insecticide-treated nets (ITNs) in reducing morbidity and mortality is well established. International focus has now moved on to how best to scale up coverage and what financing mechanisms might be used to achieve this. The approach in Tanzania has been to deliver a targeted subsidy for those most vulnerable to the effects of malaria while at the same time providing support to the development of the commercial ITN distribution system. In October 2004, with funds from the Global Fund to Fight AIDS, Tuberculosis and Malaria, the government launched the Tanzania National Voucher Scheme (TNVS), a nationwide discounted voucher scheme for ITNs for pregnant women and their infants. This paper analyses the costs and effects of the scheme and compares it with other approaches to distribution. Methods Economic costs were estimated using the ingredients approach, whereby all resources required in the delivery of the intervention (including the user contribution) are quantified and valued. Effects were measured in terms of the number of vouchers used (and therefore nets delivered) and treated net-years. Estimates were also made of the cost per malaria case and death averted. Results and Conclusion The total financial cost of the programme represents around 5% of the Ministry of Health's total budget. The average economic cost of delivering an ITN using the voucher scheme, including the user contribution, was $7.57. The cost-effectiveness results are within the benchmarks set by other malaria prevention studies. The Government of Tanzania's approach to scaling up ITNs uses both the public and private sectors in order to achieve and sustain the level of coverage required to meet the Abuja targets. The results presented here suggest that the TNVS is a cost-effective strategy for delivering subsidized ITNs to targeted vulnerable groups. PMID:18279509
"Geo-statistics methods and neural networks in geophysical applications: A case study"
NASA Astrophysics Data System (ADS)
Rodriguez Sandoval, R.; Urrutia Fucugauchi, J.; Ramirez Cruz, L. C.
2008-12-01
The study focuses on the Ebano-Panuco basin of northeastern Mexico, which is being explored for hydrocarbon reservoirs. These reservoirs are in limestones, and there is interest in determining porosity and permeability in the carbonate sequences. The porosity maps presented in this study are estimated from the application of multiattribute and neural network techniques, which combine geophysical logs and 3-D seismic data by means of statistical relationships. The multiattribute analysis is a process to predict a volume of any underground petrophysical measurement from well-log and seismic data. The data consist of a series of target logs from wells which tie a 3-D seismic volume. The target logs are neutron porosity logs. From the 3-D seismic volume a series of sample attributes is calculated. The objective of this study is to derive a relationship between a set of attributes and the target log values. The selected set is determined by a process of forward stepwise regression. The analysis can be linear or nonlinear. In the linear mode the method consists of a series of weights derived by least-squares minimization. In the nonlinear mode, a neural network is trained using the selected attributes as inputs. In this case we used a probabilistic neural network (PNN). The method is applied to a real data set from PEMEX. For better reservoir characterization the porosity distribution was estimated using both techniques. The case showed a continuous improvement in the prediction of porosity from the multiattribute to the neural network analysis. The improvement is in the training and the validation, which are important indicators of the reliability of the results. The neural network showed an improvement in resolution over the multiattribute analysis. The final maps provide a more realistic picture of the porosity distribution.
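A minimal sketch of the forward stepwise attribute selection step described above (the linear mode); the attribute names and data are synthetic stand-ins, not the PEMEX dataset, and the nonlinear PNN stage is omitted.

```python
# Hedged sketch: greedy forward stepwise selection of seismic attributes for
# predicting a target log, scored by cross-validated R^2 of a linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, attrs = 200, ["amplitude", "envelope", "frequency", "phase", "impedance"]
X = rng.normal(size=(n, len(attrs)))
porosity = 0.2 + 0.05 * X[:, 0] - 0.03 * X[:, 4] + rng.normal(0, 0.01, n)  # toy "truth"

selected, remaining, best_score = [], list(range(len(attrs))), -np.inf
while remaining:
    scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], porosity,
                                 cv=5, scoring="r2").mean() for j in remaining}
    j_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best_score:            # stop when validation score no longer improves
        break
    selected.append(j_best); remaining.remove(j_best); best_score = s_best

print("selected attributes:", [attrs[j] for j in selected], f"(CV R² = {best_score:.2f})")
```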
Oblinsky, Daniel G; Vanschouwen, Bryan M B; Gordon, Heather L; Rothstein, Stuart M
2009-12-14
Given the principal component analysis (PCA) of a molecular dynamics (MD) conformational trajectory for a model protein, we perform orthogonal Procrustean rotation to "best fit" the PCA squared-loading matrix to that of a target matrix computed for a related but different molecular system. The sum of squared deviations of the elements of the rotated matrix from those of the target, known as the error of fit (EOF), provides a quantitative measure of the dissimilarity between the two conformational samples. To estimate precision of the EOF, we perform bootstrap resampling of the molecular conformations within the trajectories, generating a distribution of EOF values for the system and target. The average EOF per variable is determined and visualized to ascertain where, locally, system and target sample properties differ. We illustrate this approach by analyzing MD trajectories for the wild-type and four selected mutants of the beta1 domain of protein G.
NASA Astrophysics Data System (ADS)
Oblinsky, Daniel G.; VanSchouwen, Bryan M. B.; Gordon, Heather L.; Rothstein, Stuart M.
2009-12-01
Given the principal component analysis (PCA) of a molecular dynamics (MD) conformational trajectory for a model protein, we perform orthogonal Procrustean rotation to "best fit" the PCA squared-loading matrix to that of a target matrix computed for a related but different molecular system. The sum of squared deviations of the elements of the rotated matrix from those of the target, known as the error of fit (EOF), provides a quantitative measure of the dissimilarity between the two conformational samples. To estimate precision of the EOF, we perform bootstrap resampling of the molecular conformations within the trajectories, generating a distribution of EOF values for the system and target. The average EOF per variable is determined and visualized to ascertain where, locally, system and target sample properties differ. We illustrate this approach by analyzing MD trajectories for the wild-type and four selected mutants of the β1 domain of protein G.
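A minimal sketch of the error-of-fit (EOF) computation and the bootstrap described in this abstract, assuming scipy's orthogonal Procrustes routine; the random matrices stand in for the wild-type and mutant MD trajectories, so the numbers are purely illustrative.

```python
# Hedged sketch: Procrustes rotation of one PCA squared-loading matrix onto a
# target, EOF as the sum of squared deviations, and a bootstrap over conformations.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.decomposition import PCA

def squared_loadings(traj, n_components=5):
    """PCA squared-loading matrix (variables x components) of a trajectory."""
    pca = PCA(n_components=n_components).fit(traj)
    return (pca.components_.T) ** 2

def eof(system_traj, target_traj):
    A, B = squared_loadings(system_traj), squared_loadings(target_traj)
    R, _ = orthogonal_procrustes(A, B)        # rotation that best maps A onto B
    return np.sum((A @ R - B) ** 2)

rng = np.random.default_rng(3)
wild_type = rng.normal(size=(1000, 30))       # mock MD trajectories: frames x coordinates
mutant = 1.2 * rng.normal(size=(1000, 30))

boot = [eof(wild_type[rng.integers(0, 1000, 1000)],
            mutant[rng.integers(0, 1000, 1000)]) for _ in range(200)]
print(f"EOF = {np.mean(boot):.2f} ± {np.std(boot):.2f} (bootstrap over conformations)")
```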
NASA Astrophysics Data System (ADS)
Yu, Q. Z.; Liang, T. J.
2018-06-01
China Spallation Neutron Source (CSNS) is intended to begin operation in 2018. CSNS is an accelerator-based multidisciplinary user facility. The pulsed neutrons are produced by a 1.6 GeV short-pulsed proton beam impinging on a W-Ta spallation target, at a beam power of 100 kW and a repetition rate of 25 Hz. Twenty neutron beam lines are extracted for neutron scattering and neutron irradiation research. During commissioning and maintenance scenarios, the gamma rays induced from the W-Ta target can pose a dose threat to personnel and the environment. In this paper, the gamma dose rate distributions for the W-Ta spallation target are calculated, based on the engineering model of the target-moderator-reflector system. The shipping cask is analyzed to ensure that the dose rate at its surface satisfies the limit of less than 2 mSv/h. All calculations are performed with the Monte Carlo code MCNPX 2.5 and the activation code CINDER'90.
Mitochondrial Targets for Pharmacological Intervention in Human Disease
2015-01-01
Over the past several years, mitochondrial dysfunction has been linked to an increasing number of human illnesses, making mitochondrial proteins (MPs) an ever more appealing target for therapeutic intervention. With 20% of the mitochondrial proteome (312 of an estimated 1500 MPs) having known interactions with small molecules, MPs appear to be highly targetable. Yet, despite these targeted proteins functioning in a range of biological processes (including induction of apoptosis, calcium homeostasis, and metabolism), very few of the compounds targeting MPs find clinical use. Recent work has greatly expanded the number of proteins known to localize to the mitochondria and has generated a considerable increase in MP 3D structures available in public databases, allowing experimental screening and in silico prediction of mitochondrial drug targets on an unprecedented scale. Here, we summarize the current literature on clinically active drugs that target MPs, with a focus on how existing drug targets are distributed across biochemical pathways and organelle substructures. Also, we examine current strategies for mitochondrial drug discovery, focusing on genetic, proteomic, and chemogenomic assays, and relevant model systems. As cell models and screening techniques improve, MPs appear poised to emerge as relevant targets for a wide range of complex human diseases, an eventuality that can be expedited through systematic analysis of MP function. PMID:25367773
Reaction time in ankle movements: a diffusion model analysis
Michmizos, Konstantinos P.; Krebs, Hermano Igo
2015-01-01
Reaction time (RT) is one of the most commonly used measures of neurological function and dysfunction. Despite extensive study of RT, it has never been examined in the ankle. Twenty-two subjects were recruited to perform simple, 2- and 4-choice RT tasks by visually guiding a cursor inside a rectangular target with their ankle. RT did not change with spatial accuracy constraints imposed by different target widths in the direction of the movement. RT increased as a linear function of the number of potential target stimuli, as would be predicted by the Hick–Hyman law. Although the slopes of the regressions were similar, the intercept in the dorsal–plantar (DP) direction was significantly smaller than the intercept in the inversion–eversion (IE) direction. To explain this difference, we used a hierarchical Bayesian estimation of Ratcliff's (Psychol Rev 85:59, 1978) diffusion model parameters and divided processing time into cognitive components. The model gave a good account of RTs, their distribution and accuracy values, and hence provided evidence that the non-decision processing time (overlap of posterior distributions between DP and IE < 0.045), the boundary separation (overlap of the posterior distributions < 0.1) and the evidence accumulation rate (overlap of the posterior distributions < 0.01) components of the RT accounted for the intercept difference between DP and IE. The model also proposed that there was no systematic change in non-decision processing time or drift rate when spatial accuracy constraints were altered. The results were in agreement with the memory drum hypothesis and could be further justified neurophysiologically by the larger innervation of the muscles controlling DP movements. This study might contribute to assessing deficits in sensorimotor control of the ankle and highlight a possible target for correction in the framework of our on-going effort to develop robotic therapeutic interventions for the ankle of children with cerebral palsy. PMID:25030966
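A minimal sketch of the Hick–Hyman fit implied above, RT as a linear function of log2 of the number of alternatives; the RT values are hypothetical, and the full diffusion-model decomposition would require dedicated hierarchical Bayesian software not shown here.

```python
# Hedged sketch: fit RT = intercept + slope * log2(N) separately for DP and IE.
import numpy as np

n_choices = np.array([1, 2, 4])
rt_dp = np.array([0.32, 0.39, 0.46])      # hypothetical mean RTs (s), dorsal-plantar
rt_ie = np.array([0.37, 0.44, 0.51])      # hypothetical mean RTs (s), inversion-eversion

for label, rt in (("DP", rt_dp), ("IE", rt_ie)):
    slope, intercept = np.polyfit(np.log2(n_choices), rt, 1)
    print(f"{label}: RT ≈ {intercept:.3f} + {slope:.3f} * log2(N)  (s)")
# Similar slopes with a smaller DP intercept would mirror the pattern reported above.
```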
Cost-effectiveness of targeted screening for abdominal aortic aneurysm. Monte Carlo-based estimates.
Pentikäinen, T J; Sipilä, T; Rissanen, P; Soisalon-Soininen, S; Salo, J
2000-01-01
This article reports a cost-effectiveness analysis of targeted screening for abdominal aortic aneurysm (AAA). A major emphasis was on the estimation of distributions of costs and effectiveness. We performed a Monte Carlo simulation using the C programming language in a PC environment. Data on survival and costs, and a majority of screening probabilities, were from our own empirical studies. Natural history data were based on the literature. Each screened male gained 0.07 life-years at an incremental cost of FIM 3,300. The expected values differed from zero highly significantly. For females, the expected gain was 0.02 life-years at an incremental cost of FIM 1,100, which was not statistically significant. Cost-effectiveness ratios and their 95% confidence intervals were FIM 48,000 (27,000-121,000) and 54,000 (22,000-infinity) for males and females, respectively. Sensitivity analysis revealed that the results for males were stable. Individual variation in life-year gains was high. Males seemed to benefit from targeted AAA screening, and the results were stable. To the extent that the cost-effectiveness ratio is considered acceptable, screening of males seemed to be justified. However, our assumptions about the growth and rupture behavior of AAAs might be improved with further clinical and epidemiological studies. As a point estimate, females benefited in a similar manner, but the results were not statistically significant. The evidence of this study did not justify screening of females.
Probabilistic cost estimates for climate change mitigation.
Rogelj, Joeri; McCollum, David L; Reisinger, Andy; Meinshausen, Malte; Riahi, Keywan
2013-01-03
For more than a decade, the target of keeping global warming below 2 °C has been a key focus of the international climate debate. In response, the scientific community has published a number of scenario studies that estimate the costs of achieving such a target. Producing these estimates remains a challenge, particularly because of relatively well known, but poorly quantified, uncertainties, and owing to limited integration of scientific knowledge across disciplines. The integrated assessment community, on the one hand, has extensively assessed the influence of technological and socio-economic uncertainties on low-carbon scenarios and associated costs. The climate modelling community, on the other hand, has spent years improving its understanding of the geophysical response of the Earth system to emissions of greenhouse gases. This geophysical response remains a key uncertainty in the cost of mitigation scenarios but has been integrated with assessments of other uncertainties in only a rudimentary manner, that is, for equilibrium conditions. Here we bridge this gap between the two research communities by generating distributions of the costs associated with limiting transient global temperature increase to below specific values, taking into account uncertainties in four factors: geophysical, technological, social and political. We find that political choices that delay mitigation have the largest effect on the cost-risk distribution, followed by geophysical uncertainties, social factors influencing future energy demand and, lastly, technological uncertainties surrounding the availability of greenhouse gas mitigation options. Our information on temperature risk and mitigation costs provides crucial information for policy-making, because it clarifies the relative importance of mitigation costs, energy demand and the timing of global action in reducing the risk of exceeding a global temperature increase of 2 °C, or other limits such as 3 °C or 1.5 °C, across a wide range of scenarios.
Bhattacharya, Indranil; Manukyan, Zorayr; Chan, Phylinda; Heatherington, Anne; Harnisch, Lutz
2017-10-12
Domagrozumab, a monoclonal antibody that binds to myostatin, is being developed for Duchenne muscular dystrophy (DMD) boys following a first-in-human study in healthy adults. Literature reporting pharmacokinetic parameters of monoclonal antibodies suggested that body-weight- and body-surface-area-adjusted clearance and volume of distribution estimates between adults and children are similar for subjects older than 6 years. Population modeling identified a Michaelis-Menten binding kinetics model to optimally characterize the target-mediated drug disposition profile of domagrozumab and identified body mass index on the volume of distribution as the only significant covariate. Model parameters were estimated with high precision for pharmacokinetics (clearance 1.01 × 10⁻⁴ L/[h·kg]; central volume of distribution 457 × 10⁻⁴ L/kg; maximum elimination rate 17.5 × 10⁻⁴ nmol/[h·kg]; Km 10.6 nmol/L) and pharmacodynamics (myostatin turnover rate 457 × 10⁻⁴ h⁻¹; complex removal rate 90 × 10⁻⁴ h⁻¹; half-saturation constant 4.32 nmol/L) and were used to predict target coverage for dosage selection in the DMD population. Additionally, allometric approaches (estimated scaling exponents (standard error) for clearance and volume were 0.81 [0.01] and 0.98 [0.02], respectively), in conjunction with a separate analysis to obtain the population mean weight and standard deviation, suggested that if dosed per body weight, only an 11% difference in clearance is expected between the heaviest and lightest patients, thus preventing the need for dose adjustment. In summary, quantitative approaches were instrumental in bridging and derisking the fast-track development of domagrozumab in DMD. © 2017, The American College of Clinical Pharmacology.
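A minimal worked sketch of the allometric argument: with clearance scaling as weight to the 0.81 power, per-kilogram dosing leaves only a small exposure difference across body sizes. The two body weights below are illustrative, not taken from the study population.

```python
# Hedged sketch: weight-normalised clearance under CL ∝ weight**0.81.
# With per-kg dosing, dose ∝ weight, so exposure ∝ weight / CL ∝ weight**(1 - 0.81).
def relative_clearance_per_kg(weight_kg: float, exponent: float = 0.81) -> float:
    """Clearance divided by body weight, up to a constant (== weight**(exponent - 1))."""
    return weight_kg ** exponent / weight_kg

light, heavy = 20.0, 35.0      # hypothetical lightest and heaviest patients (kg)
ratio = relative_clearance_per_kg(light) / relative_clearance_per_kg(heavy)
print(f"per-kg clearance differs by ≈ {(ratio - 1) * 100:.0f}% between the extremes")
```

For this hypothetical weight range the difference works out to roughly 10-11%, of the same order as the figure quoted in the abstract.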
Constraining ejecta particle size distributions with light scattering
NASA Astrophysics Data System (ADS)
Schauer, Martin; Buttler, William; Frayer, Daniel; Grover, Michael; Lalone, Brandon; Monfared, Shabnam; Sorenson, Daniel; Stevens, Gerald; Turley, William
2017-06-01
The angular distribution of the intensity of light scattered from a particle is strongly dependent on the particle size and can be calculated using the Mie solution to Maxwell's equations. For a collection of particles with a range of sizes, the angular intensity distribution will be the sum of the contributions from each particle size weighted by the number of particles in that size bin. The set of equations describing this pattern is not uniquely invertible, i.e. a number of different distributions can lead to the same scattering pattern, but with reasonable assumptions about the distribution it is possible to constrain the problem and extract estimates of the particle sizes from a measured scattering pattern. We report here on experiments using particles ejected by shockwaves incident on strips of triangular perturbations machined into the surface of tin targets. These measurements indicate a bimodal distribution of ejected particle sizes with relatively large particles (median radius 2-4 μm) evolved from the edges of the perturbation strip and smaller particles (median radius 200-600 nm) from the perturbations. We will briefly discuss the implications of these results and outline future plans.
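A minimal sketch of the inversion step described above: the measured angular intensity pattern is modelled as a nonnegative mixture of per-size scattering kernels, I(theta) = K @ n, and the size counts n are recovered by constrained least squares. A real kernel K would come from a Mie calculation; a toy kernel stands in here so the inversion itself can be run end to end, and all sizes and counts are illustrative.

```python
# Hedged sketch: nonnegative least-squares recovery of a bimodal size distribution
# from a simulated angular scattering pattern built with a placeholder kernel.
import numpy as np
from scipy.optimize import nnls

theta = np.linspace(1, 30, 60)                        # scattering angles (degrees)
radii = np.array([0.2, 0.4, 0.8, 1.6, 3.2])           # size bins (micron)

# Toy kernel: larger particles scatter more strongly and into smaller angles.
K = np.array([np.exp(-(theta * r / 10.0) ** 2) * r**2 for r in radii]).T

true_n = np.array([5.0, 2.0, 0.0, 0.0, 1.0])          # bimodal "true" size distribution
measured = K @ true_n + np.random.default_rng(4).normal(0, 0.01, theta.size)

n_est, _ = nnls(K, measured)                          # nonnegativity constrains the fit
print("recovered counts per size bin:", np.round(n_est, 2))
```

The nonnegativity constraint is one of the "reasonable assumptions about the distribution" that makes the otherwise non-unique inversion tractable.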
Leslie, Jacqueline; Garba, Amadou; Oliva, Elisa Bosque; Barkire, Arouna; Tinni, Amadou Aboubacar; Djibo, Ali; Mounkaila, Idrissa; Fenwick, Alan
2011-01-01
Background In 2004 Niger established a large scale schistosomiasis and soil-transmitted helminths control programme targeting children aged 5–14 years and adults. In two years 4.3 million treatments were delivered in 40 districts using school based and community distribution. Method and Findings Four districts were surveyed in 2006 to estimate the economic cost per district, per treatment and per schistosomiasis infection averted. The study compares the costs of treatment at start up and in a subsequent year, identifies the allocation of costs by activity, input and organisation, and assesses the cost of treatment. The cost of delivery by teachers is compared to the cost of delivery by community distributors (CDDs). The total economic cost of the programme, including programmatic, national and local government costs and international support, in the four study districts over two years was US$ 456,718; an economic cost/treatment of $0.58. The full economic delivery cost of school based treatment in 2005/06 was $0.76, and for community distribution was $0.46. Including only the programme costs the figures are $0.47 and $0.41 respectively. Differences at sub-district level are more marked. This is partly explained by the fact that a CDD treats 5.8 people for every one treated in school. The range in cost-effectiveness for direct treatments alone and for direct and indirect treatments combined is quantified, and the need to develop and refine such estimates is emphasised. Conclusions The relative cost-effectiveness of school and community delivery differs by country according to the composition of the population treated, the numbers targeted and treated at school and in the community, and the cost and frequency of training teachers and CDDs. An options analysis of technical and implementation alternatives, including a financial analysis, should form part of the programme design process. PMID:22022622
Simulation and Real-Time Verification of Video Algorithms on the TI C6400 Using Simulink
2004-08-20
Approved for public release; distribution unlimited.
Astrochemical Properties of Planck Cold Clumps
NASA Astrophysics Data System (ADS)
Tatematsu, Ken'ichi; Liu, Tie; Ohashi, Satoshi; Sanhueza, Patricio; Nguyen Lu'o'ng, Quang; Hirota, Tomoya; Liu, Sheng-Yuan; Hirano, Naomi; Choi, Minho; Kang, Miju; Thompson, Mark A.; Fuller, Gary; Wu, Yuefang; Li, Di; Di Francesco, James; Kim, Kee-Tae; Wang, Ke; Ristorcelli, Isabelle; Juvela, Mika; Shinnaga, Hiroko; Cunningham, Maria; Saito, Masao; Lee, Jeong-Eun; Tóth, L. Viktor; He, Jinhua; Sakai, Takeshi; Kim, Jungha; JCMT Large Program "SCOPE" Collaboration; TRAO Key Science Program "TOP" Collaboration
2017-02-01
We observed 13 Planck cold clumps with the James Clerk Maxwell Telescope/SCUBA-2 and with the Nobeyama 45 m radio telescope. The N2H+ distribution obtained with the Nobeyama telescope is quite similar to the SCUBA-2 dust distribution. The 82 GHz HC3N, 82 GHz CCS, and 94 GHz CCS emission are often distributed differently with respect to the N2H+ emission. The CCS emission, which is known to be abundant in starless molecular cloud cores, is often very clumpy in the observed targets. We made deep single-pointing observations in DNC, HN13C, N2D+, and cyclic-C3H2 toward nine clumps. The detection rate of N2D+ is 50%. Furthermore, we observed the NH3 emission toward 15 Planck cold clumps to estimate the kinetic temperature, and confirmed that most targets are cold (≲20 K). In two of the starless clumps we observed, the CCS emission is distributed so that it surrounds the N2H+ core (chemically evolved gas), resembling the case of L1544, a prestellar core showing collapse. In addition, we detected both DNC and N2D+. These two clumps are most likely on the verge of star formation. We introduce the chemical evolution factor (CEF) for starless cores to describe the chemical evolutionary stage, and analyze the observed Planck cold clumps.
Kip, Anke E; Castro, María Del Mar; Gomez, Maria Adelaida; Cossio, Alexandra; Schellens, Jan H M; Beijnen, Jos H; Saravia, Nancy Gore; Dorlo, Thomas P C
2018-05-10
Leishmania parasites reside within macrophages and the direct target of antileishmanial drugs is therefore intracellular. We aimed to characterize the intracellular PBMC miltefosine kinetics by developing a population pharmacokinetic (PK) model simultaneously describing plasma and intracellular PBMC pharmacokinetics. Furthermore, we explored exposure-response relationships and simulated alternative dosing regimens. A population PK model was developed with NONMEM, based on 339 plasma and 194 PBMC miltefosine concentrations from Colombian cutaneous leishmaniasis patients [29 children (2-12 years old) and 22 adults] receiving 1.8-2.5 mg/kg/day miltefosine for 28 days. A three-compartment model with miltefosine distribution into an intracellular PBMC effect compartment best fitted the data. Intracellular PBMC distribution was described with an intracellular-to-plasma concentration ratio of 2.17 [relative standard error (RSE) 4.9%] and intracellular distribution rate constant of 1.23 day-1 (RSE 14%). In exploring exposure-response relationships, both plasma and intracellular model-based exposure estimates significantly influenced probability of cure. A proposed PK target for the area under the plasma concentration-time curve (day 0-28) of >535 mg·day/L corresponded to >95% probability of cure. In linear dosing simulations, 18.3% of children compared with 2.8% of adults failed to reach 535 mg·day/L. In children, this decreased to 1.8% after allometric dosing simulation. The developed population PK model described the rate and extent of miltefosine distribution from plasma into PBMCs. Miltefosine exposure was significantly related to probability of cure in this cutaneous leishmaniasis patient population. We propose an exploratory PK target, which should be validated in a larger cohort study.
Monte Carlo dose distribution calculation at nuclear level for Auger-emitting radionuclide energies.
Di Maria, S; Belchior, A; Romanets, Y; Paulo, A; Vaz, P
2018-05-01
The distribution of radiopharmaceuticals in tumor cells represents a fundamental aspect of successful molecular targeted radiotherapy. It has been largely demonstrated at the microscopic level that only a fraction of cells in tumoral tissues incorporate the radiolabel. In addition, the distribution of the radionuclides at the sub-cellular level, namely inside each nucleus, should also be investigated for accurate dosimetry estimation. The most widely used method for cellular dosimetry is the MIRD approach, in which S-values estimate cellular absorbed doses for several electron energies, nucleus diameters, and homogeneous source distributions. However, the radionuclide distribution inside nuclei can also be highly non-homogeneous. The aim of this study is to show to what extent inaccurate cellular dosimetry could lead to misinterpretations of the surviving cell fraction vs dose relationship; in this context, a dosimetric case study with 99m Tc is also presented. The state-of-the-art MCNP6 Monte Carlo code was used to model cell structures both in MIRD geometry (MG) and MIRD modified geometries (MMG), in which entire mitotic chromosome volumes were also considered (each structure was modeled as liquid water). In order to simulate a wide energy range of Auger-emitting radionuclides, four monoenergetic electron emissions were considered, namely 213 eV, 6 keV, 11 keV and 20 keV. A dosimetric calculation for 99m Tc undergoing inhomogeneous nuclear internalization was also performed. After a successful validation step between MIRD and our computed S-values for three Auger-emitting radionuclides (99m Tc, 125 I and 64 Cu), absorbed dose results showed that the standard MG could differ from the MMG by one to three orders of magnitude. These results were also confirmed by considering the 99m Tc emission spectrum (Auger and internal conversion electrons). Moreover, considering an inhomogeneous radionuclide distribution, the average electron energy that maximizes the absorbed dose was found to be different for MG and MMG. The modeling of realistic radionuclide localization inside cells, including an inhomogeneous nuclear distribution, revealed that (i) a strong bias in surviving cell fraction vs dose relationships (leading to different radiobiological models) can arise, and (ii) the alternative models might contribute to a more accurate prediction of the radiobiological effects inherent to more specific molecular targeted radiotherapy strategies. Copyright © 2018 Elsevier Ltd. All rights reserved.
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
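A minimal sketch of the kind of pooled-data adjustment discussed above, written as a generic empirical-Bayes shrinkage of each observer's estimate toward the across-observer mean (the James–Stein analogy drawn in the abstract); it is not the authors' procedure, and all numbers are made up.

```python
# Hedged sketch: shrink individual-observer estimates toward the pooled estimate,
# shrinking more when an individual estimate is noisy relative to the spread
# between observers.
import numpy as np

individual = np.array([0.62, 0.71, 0.55, 0.80, 0.67])   # per-observer parameter MLEs (mock)
se2 = np.array([0.002, 0.004, 0.003, 0.006, 0.002])     # their squared standard errors (mock)

pooled = individual.mean()
tau2 = max(individual.var(ddof=1) - se2.mean(), 1e-6)   # between-observer variance estimate

weight = tau2 / (tau2 + se2)                            # reliability of each individual MLE
adjusted = weight * individual + (1 - weight) * pooled
print("adjusted estimates:", np.round(adjusted, 3))
```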
2014-01-01
Background Tick-borne diseases (TBDs) present a major economic burden to communities across East Africa. Farmers in East Africa must use acaricides to target ticks and prevent transmission of tick-borne diseases such as anaplasmosis, babesiosis, cowdriosis and theileriosis, the major causes of cattle mortality and morbidity. The costs of controlling East Coast Fever (ECF), caused by Theileria parva, in Uganda are significant, and tick control measures, to be cost-effective, should take into account the burden of disease. The aim of the present work was to estimate the burden presented by T. parva and its spatial distribution in a crop-livestock production system in Eastern Uganda. Methods A cross-sectional study was carried out to determine the prevalence and spatial distribution of T. parva in Tororo District, Uganda. Blood samples were taken from all cattle (n = 2,658) in 22 randomly selected villages across Tororo District from September to December 2011. Samples were analysed by PCR, and T. parva prevalence and spatial distribution were determined. Results The overall prevalence of T. parva was found to be 5.3%. Herd-level prevalence ranged from 0% to 21%, with the majority of infections located in the northern, north-eastern and south-eastern parts of Tororo District. No statistically significant differences in risk of infection were found among age classes, sexes and cattle breeds. Conclusions T. parva infection is widely distributed in Tororo District, Uganda. The prevalence and distribution of T. parva are most likely determined by the spatial distribution of R. appendiculatus, restricted grazing of calves and preferential tick control targeting draft animals. PMID:24589227
Belitz, Kenneth; Jurgens, Bryant C.; Landon, Matthew K.; Fram, Miranda S.; Johnson, Tyler D.
2010-01-01
The proportion of an aquifer with constituent concentrations above a specified threshold (high concentrations) is taken as a nondimensional measure of regional scale water quality. If computed on the basis of area, it can be referred to as the aquifer scale proportion. A spatially unbiased estimate of aquifer scale proportion and a confidence interval for that estimate are obtained through the use of equal area grids and the binomial distribution. Traditionally, the confidence interval for a binomial proportion is computed using either the standard interval or the exact interval. Research from the statistics literature has shown that the standard interval should not be used and that the exact interval is overly conservative. On the basis of coverage probability and interval width, the Jeffreys interval is preferred. If more than one sample per cell is available, cell declustering is used to estimate the aquifer scale proportion, and Kish's design effect may be useful for estimating an effective number of samples. The binomial distribution is also used to quantify the adequacy of a grid with a given number of cells for identifying a small target, defined as a constituent that is present at high concentrations in a small proportion of the aquifer. Case studies illustrate a consistency between approaches that use one well per grid cell and many wells per cell. The methods presented in this paper provide a quantitative basis for designing a sampling program and for utilizing existing data.
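For readers who want to reproduce the interval choice discussed above, the following minimal sketch computes a Jeffreys interval for an aquifer-scale proportion from a count of high-concentration cells; it follows the standard Beta(x + 0.5, n - x + 0.5) construction, and the example counts and confidence level are assumptions.

```python
# Minimal sketch: Jeffreys interval for the proportion of grid cells with
# "high" concentrations, using quantiles of a Beta(x + 0.5, n - x + 0.5)
# distribution. Cell counts and the confidence level are assumed values.
from scipy.stats import beta

def jeffreys_interval(x, n, conf=0.90):
    """Two-sided Jeffreys interval for a binomial proportion x/n."""
    a, b = x + 0.5, n - x + 0.5
    lo = beta.ppf((1 - conf) / 2, a, b) if x > 0 else 0.0
    hi = beta.ppf(1 - (1 - conf) / 2, a, b) if x < n else 1.0
    return lo, hi

# Example: 4 of 30 equal-area cells exceed the threshold.
print(jeffreys_interval(4, 30))
```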
Kolaczinski, Jan H; Kolaczinski, Kate; Kyabayinze, Daniel; Strachan, Daniel; Temperley, Matilda; Wijayanandana, Nayantara; Kilian, Albert
2010-04-20
In Uganda, long-lasting insecticidal nets (LLIN) have been predominantly delivered through two public sector channels: targeted campaigns or routine antenatal care (ANC) services. Their combination in a mixed-model strategy is being advocated to quickly increase LLIN coverage and maintain it over time, but there is little evidence on the efficiency of each system. This study evaluated the two delivery channels regarding LLIN retention and use, and estimated the associated costs, to contribute towards the evidence-base on LLIN delivery channels in Uganda. Household surveys were conducted 5-7 months after LLIN distribution, combining questionnaires with visual verification of LLIN presence. Focus groups and interviews were conducted to further investigate determinants of LLIN retention and use. Campaign distribution was evaluated in Jinja and Adjumani, while ANC distribution was evaluated only in the latter district. Costs were calculated from the provider perspective through retrospective analysis of expenditure data, and effects were estimated as cost per LLIN delivered and cost per treated-net-year (TNY). These effects were calculated for the total number of LLINs delivered and for those retained and used. After 5-7 months, over 90% of LLINs were still owned by recipients, and between 74% (Jinja) and 99% (ANC Adjumani) were being used. Costing results showed that delivery cost was lowest for the campaign in Jinja and highest for the ANC channel, with economic delivery costs per net retained and used of USD 1.10 and USD 2.31, respectively. Financial delivery costs for the two channels were similar in the same location, USD 1.04 for campaign or USD 1.07 for ANC delivery in Adjumani, but differed between locations (USD 0.67 for campaign delivery in Jinja). Economic costs for ANC distribution were considerably higher (USD 2.27) than campaign costs (USD 1.23) in Adjumani. Targeted campaigns and routine ANC services can both achieve high LLIN retention and use among the target population. The comparatively higher economic cost of delivery through ANC facilities was at least partially due to the relatively short time this system had been in existence. Further studies comparing the cost of well-established ANC delivery with LLIN campaigns and other delivery channels are thus encouraged. PMID:20406448
Pelekis, Michael; Nicolich, Mark J; Gauthier, Joseph S
2003-12-01
Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probabilistic phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to a population that is characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor UFH-TK was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue organ (i.e., the UFH-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x). This article deals exclusively with the toxicokinetic component of the UF. The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than the applied dose. It allows for the replacement of the default adult and child intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates of their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
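The core quantity of the framework, the ratio of the 95th to the 50th percentile of target-tissue dose, can be sketched with a toy Monte Carlo. The steady-state dose model and lognormal parameter distributions below are illustrative assumptions, not the paper's PBTK model; they only show how a UFH-TK-style factor falls out of propagated parameter variability.

```python
# Toy Monte Carlo sketch (not the paper's PBTK model): propagate assumed
# lognormal variability in clearance and partitioning through a steady-state
# dose expression, then take the 95th/50th percentile ratio of target-tissue
# concentration as a UFH-TK-style intraspecies toxicokinetic factor.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
clearance = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=n)   # L/h, assumed
partition = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n)   # tissue:blood ratio, assumed
intake = 1.0                                                      # mg/h, fixed exposure

tissue_conc = intake * partition / clearance                      # toy steady-state tissue dose
uf_tk = np.percentile(tissue_conc, 95) / np.percentile(tissue_conc, 50)
print(f"UFH-TK (95th/50th percentile) = {uf_tk:.2f}")
```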
Hontelez, Jan A. C.; Bakker, Roel; Blok, David J.; Cai, Rui; Houweling, Tanja A. J.; Kulik, Margarete C.; Lenk, Edeltraud J.; Luyendijk, Marianne; Matthijsse, Suzette M.; Redekop, William K.; Wagenaar, Inge; Jacobson, Julie; Nagelkerke, Nico J. D.; Richardus, Jan H.
2016-01-01
Background The London Declaration (2012) was formulated to support and focus the control and elimination of ten neglected tropical diseases (NTDs), with targets for 2020 as formulated by the WHO Roadmap. Five NTDs (lymphatic filariasis, onchocerciasis, schistosomiasis, soil-transmitted helminths and trachoma) are to be controlled by preventive chemotherapy (PCT), and four (Chagas’ disease, human African trypanosomiasis, leprosy and visceral leishmaniasis) by innovative and intensified disease management (IDM). Guinea worm, virtually eradicated, is not considered here. We aim to estimate the global health impact of meeting these targets in terms of averted morbidity, mortality, and disability adjusted life years (DALYs). Methods The Global Burden of Disease (GBD) 2010 study provides prevalence and burden estimates for all nine NTDs in 1990 and 2010, by country, age and sex, which were taken as the basis for our calculations. Estimates for other years were obtained by interpolating between 1990 (or the start-year of large-scale control efforts) and 2010, and further extrapolating until 2030, such that the 2020 targets were met. The NTD disease manifestations considered in the GBD study were analyzed as either reversible or irreversible. Health impacts were assessed by comparing the results of achieving the targets with the counterfactual, construed as the health burden had the 1990 (or 2010 if higher) situation continued unabated. Principal Findings/Conclusions Our calculations show that meeting the targets will lead to about 600 million averted DALYs in the period 2011–2030, nearly equally distributed between PCT and IDM-NTDs, with the health gain amongst PCT-NTDs mostly (96%) due to averted disability and amongst IDM-NTDs largely (95%) from averted mortality. These health gains include about 150 million averted irreversible disease manifestations (e.g. blindness) and 5 million averted deaths. Control of soil-transmitted helminths accounts for one third of all averted DALYs. We conclude that the projected health impact of the London Declaration justifies the required efforts. PMID:26890362
Ge, Xuezhen; He, Shanyong; Wang, Tao; Yan, Wei; Zong, Shixiang
2015-01-01
As the primary pest of palm trees, Rhynchophorus ferrugineus (Olivier) (Coleoptera: Curculionidae) has caused serious harm to palms since it first invaded China. The present study used CLIMEX 1.1 to predict the potential distribution of R. ferrugineus in China according to both current climate data (1981-2010) and future climate warming estimates based on simulated climate data for the 2020s (2011-2040) provided by the Tyndall Center for Climate Change Research (TYN SC 2.0). Additionally, the Ecoclimatic Index (EI) values calculated for different climatic conditions (current and future, as simulated by the B2 scenario) were compared. Areas with a suitable climate for R. ferrugineus distribution were located primarily in central China according to the current climate data, with the northern boundary of the distribution reaching to 40.1°N and including Tibet, north Sichuan, central Shaanxi, south Shanxi, and east Hebei. There was little difference in the potential distribution predicted by the four emission scenarios according to future climate warming estimates. The primary prediction under the future climate warming models was that, compared with the current climate model, the number of highly favorable habitats would increase significantly and expand into northern China, whereas the number of both favorable and marginally favorable habitats would decrease. Contrast analysis of EI values suggested that climate change and the density of site distribution were the main factors driving the changes in EI values. These results will help to improve control measures, prevent the spread of this pest, and revise the targeted quarantine areas. PMID:26496438
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-08-01
The paper considers gossip distributed estimation of a (static) distributed random field (i.e., a large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which observes only a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time-scale algorithms: one time scale is associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we consider the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator remains consistent as well.
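A minimal sketch of the consensus-plus-innovations idea is given below for a three-sensor network, each sensor observing one component of a static field. The graph, observation matrices, noise level, and gain schedules (the consensus gain decaying more slowly than the innovations gain, so consensus acts on the faster time scale) are assumptions chosen for illustration.

```python
# Toy consensus + innovations update for a static 3-component field, each
# sensor observing one component:
#   x_i <- x_i - beta_t * sum_{j in N(i)} (x_i - x_j) + alpha_t * H_i^T (y_i - H_i x_i)
# Network, observation matrices, noise level, and gain schedules are assumed.
import numpy as np

rng = np.random.default_rng(2)
theta = np.array([1.0, -2.0, 0.5])                        # unknown field
H = [np.array([[1.0, 0.0, 0.0]]),
     np.array([[0.0, 1.0, 0.0]]),
     np.array([[0.0, 0.0, 1.0]])]
neighbors = {0: [1], 1: [0, 2], 2: [1]}                    # connected path graph
x = [np.zeros(3) for _ in range(3)]

for t in range(1, 5001):
    alpha, beta = 1.0 / t, 0.5 / t ** 0.6                  # innovations gain decays faster
    y = [Hi @ theta + 0.1 * rng.standard_normal(1) for Hi in H]
    x_new = []
    for i in range(3):
        consensus = sum(x[i] - x[j] for j in neighbors[i])
        innovation = H[i].T @ (y[i] - H[i] @ x[i])
        x_new.append(x[i] - beta * consensus + alpha * innovation)
    x = x_new

print("per-sensor estimates of the field:", [np.round(xi, 2) for xi in x])
```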
Thompson, R.S.; Anderson, K.H.; Bartlein, P.J.
2008-01-01
The method of modern analogs is widely used to obtain estimates of past climatic conditions from paleobiological assemblages, and despite its frequent use, this method involves as-yet untested assumptions. We applied four analog approaches to a continental-scale set of bioclimatic and plant-distribution presence/absence data for North America to assess how well this method works under near-optimal modern conditions. For each point on the grid, we calculated the similarity between its vegetation assemblage and those of all other points on the grid (excluding nearby points). The climate of the points with the most similar vegetation was used to estimate the climate at the target grid point. Estimates based on the Jaccard similarity coefficient had smaller errors than those based on a new similarity coefficient, although the latter may be more robust because it does not assume that the "fossil" assemblage is complete. The results of these analyses indicate that presence/absence vegetation assemblages provide a valid basis for estimating bioclimates on the continental scale. However, the accuracy of the estimates is strongly tied to the number of species in the target assemblage, and the analog method is necessarily constrained to produce estimates that fall within the range of observed values. We applied the four modern analog approaches and the mutual overlap (or "mutual climatic range") method to estimate the bioclimatic conditions represented by the plant macrofossil assemblage from a packrat midden of Last Glacial Maximum age from southern Nevada. In general, the estimation approaches produced similar results in regard to moisture conditions, but there was a greater range of estimates for growing-degree days. Despite its limitations, the modern analog technique can provide paleoclimatic reconstructions that serve as the starting point for the interpretation of past climatic conditions.
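The analog step itself is simple to sketch. The toy code below scores candidate grid points by the Jaccard similarity of their presence/absence vectors with a target assemblage and averages the climate of the best analogs; the random data carry no real vegetation-climate relationship, so this only illustrates the mechanics.

```python
# Toy sketch of the analog step: rank candidate points by Jaccard similarity of
# presence/absence vectors and average the climate of the k best analogs.
# The random grid has no real vegetation-climate link; this shows mechanics only.
import numpy as np

def jaccard(a, b):
    """Jaccard similarity of two boolean presence/absence vectors."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

rng = np.random.default_rng(3)
n_points, n_taxa = 500, 60
presence = rng.random((n_points, n_taxa)) < 0.2           # toy flora on a grid
climate = rng.normal(size=(n_points, 2))                   # e.g. growing-degree days, moisture index

target = presence[0]                                        # treat one point as the "fossil" assemblage
scores = np.array([jaccard(target, presence[i]) for i in range(1, n_points)])
best = np.argsort(scores)[-5:] + 1                          # indices of the 5 best analogs
print("climate estimate from 5 best analogs:", climate[best].mean(axis=0))
```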
NASA Technical Reports Server (NTRS)
Redemann, J.; Livingston, J.; Shinozuka, Y.; Kacenelenbogen, M.; Russell, P.; LeBlanc, S.; Vaughan, M.; Ferrare, R.; Hostetler, C.; Rogers, R.;
2014-01-01
We have developed a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. We compare the spatio-temporal distribution of our multi-sensor aerosol retrievals and calculations of seasonal clear-sky aerosol radiative forcing based on the aerosol retrievals to values derived from four models that participated in the latest AeroCom model intercomparison initiative. We find significant inter-model differences, in particular for the aerosol single scattering albedo, which can be evaluated using the multi-sensor A-Train retrievals. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.
Effects of linking a soil-water-balance model with a groundwater-flow model
Stanton, Jennifer S.; Ryter, Derek W.; Peterson, Steven M.
2013-01-01
A previously published regional groundwater-flow model in north-central Nebraska was sequentially linked with the recently developed soil-water-balance (SWB) model to analyze effects to groundwater-flow model parameters and calibration results. The linked models provided a more detailed spatial and temporal distribution of simulated recharge based on hydrologic processes, improvement of simulated groundwater-level changes and base flows at specific sites in agricultural areas, and a physically based assessment of the relative magnitude of recharge for grassland, nonirrigated cropland, and irrigated cropland areas. Root-mean-squared (RMS) differences between the simulated and estimated or measured target values for the previously published model and linked models were relatively similar and did not improve for all types of calibration targets. However, without any adjustment to the SWB-generated recharge, the RMS difference between simulated and estimated base-flow target values for the groundwater-flow model was slightly smaller than for the previously published model, possibly indicating that the volume of recharge simulated by the SWB code was closer to actual hydrogeologic conditions than the previously published model provided. Groundwater-level and base-flow hydrographs showed that temporal patterns of simulated groundwater levels and base flows were more accurate for the linked models than for the previously published model at several sites, particularly in agricultural areas.
Unveiling the nucleon tensor charge at Jefferson Lab: A study of the SoLID case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Zhihong; Sato, Nobuo; Allada, Kalyan
© 2017 The Authors. Future experiments at the Jefferson Lab 12 GeV upgrade, in particular, the Solenoidal Large Intensity Device (SoLID), aim at a very precise data set in the region where the partonic structure of the nucleon is dominated by the valence quarks. One of the main goals is to constrain the quark transversity distributions. We apply recent theoretical advances of the global QCD extraction of the transversity distributions to study the impact of future experimental data from the SoLID experiments. Especially, we develop a simple strategy based on the Hessian matrix analysis that allows one to estimate the uncertainties of the transversity quark distributions and their tensor charges extracted from SoLID data simulation. We find that the SoLID measurements with the proton and the effective neutron targets can improve the precision of the u- and d-quark transversity distributions up to one order of magnitude in the range 0.05 < x < 0.6.
Unveiling the nucleon tensor charge at Jefferson Lab: A study of the SoLID case
NASA Astrophysics Data System (ADS)
Ye, Zhihong; Sato, Nobuo; Allada, Kalyan; Liu, Tianbo; Chen, Jian-Ping; Gao, Haiyan; Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng
2017-04-01
Future experiments at the Jefferson Lab 12 GeV upgrade, in particular, the Solenoidal Large Intensity Device (SoLID), aim at a very precise data set in the region where the partonic structure of the nucleon is dominated by the valence quarks. One of the main goals is to constrain the quark transversity distributions. We apply recent theoretical advances of the global QCD extraction of the transversity distributions to study the impact of future experimental data from the SoLID experiments. Especially, we develop a simple strategy based on the Hessian matrix analysis that allows one to estimate the uncertainties of the transversity quark distributions and their tensor charges extracted from SoLID data simulation. We find that the SoLID measurements with the proton and the effective neutron targets can improve the precision of the u- and d-quark transversity distributions up to one order of magnitude in the range 0.05 < x < 0.6.
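The Hessian-based uncertainty idea can be sketched on a toy fit: near the chi-square minimum the parameter covariance is approximately twice the inverse Hessian of chi-square. The parametrization, pseudo-data, and step size below are assumptions; the SoLID study applies the same logic to the transversity fit parameters.

```python
# Toy Hessian-based uncertainty estimate: fit a simple x*(1-x)^b shape to
# pseudo-data by minimizing chi-square, then approximate the parameter
# covariance as 2 * inverse(Hessian of chi-square) at the minimum.
import numpy as np
from scipy.optimize import minimize

def chi2(p, x, y, sigma):
    model = p[0] * x * (1 - x) ** p[1]                 # assumed toy parametrization
    return np.sum(((y - model) / sigma) ** 2)

def numerical_hessian(f, p0, eps=1e-3):
    n = len(p0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p0.copy(); pp[i] += eps; pp[j] += eps
            pm = p0.copy(); pm[i] += eps; pm[j] -= eps
            mp = p0.copy(); mp[i] -= eps; mp[j] += eps
            mm = p0.copy(); mm[i] -= eps; mm[j] -= eps
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * eps ** 2)
    return H

rng = np.random.default_rng(4)
x = np.linspace(0.05, 0.6, 30)
sigma = 0.02 * np.ones_like(x)
y = 0.8 * x * (1 - x) ** 3.0 + sigma * rng.standard_normal(x.size)    # pseudo-data

fit = minimize(lambda p: chi2(p, x, y, sigma), x0=np.array([1.0, 2.0]))
cov = 2.0 * np.linalg.inv(numerical_hessian(lambda p: chi2(p, x, y, sigma), fit.x))
print("fitted parameters:", fit.x, "uncertainties:", np.sqrt(np.diag(cov)))
```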
Unveiling the nucleon tensor charge at Jefferson Lab: A study of the SoLID case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Zhihong; Sato, Nobuo; Allada, Kalyan
2017-01-27
Here, future experiments at the Jefferson Lab 12 GeV upgrade, in particular, the Solenoidal Large Intensity Device (SoLID), aim at a very precise data set in the region where the partonic structure of the nucleon is dominated by the valence quarks. One of the main goals is to constrain the transversity quark distributions. We apply recent theoretical advances of the global QCD extraction of the transversity distributions to study the impact of future experimental data from SoLID. In particular, we develop a model-independent method based on the Hessian matrix analysis that allows one to estimate the uncertainties of the transversity quark distributions and their tensor charge contributions extracted from the pseudo-data for SoLID. Both the u- and d-quark transversity distributions are shown to be very well constrained in the kinematical region of the future experiments with the proton and effective neutron targets.
Flexible nonlinear estimates of the association between height and mental ability in early life.
Murasko, Jason E
2014-01-01
To estimate associations between early-life mental ability and height/height-growth in contemporary US children. Structured additive regression models are used to flexibly estimate the associations between height and mental ability at approximately 24 months of age. The sample is taken from the Early Childhood Longitudinal Study-Birth Cohort, a national study whose target population was children born in the US during 2001. A nonlinear association is indicated between height and mental ability at approximately 24 months of age. There is an increasing association between height and mental ability below the mean value of height, but a flat association thereafter. Annualized growth shows the same nonlinear association with ability when controlling for baseline length at 9 months. Restricted growth at lower values of the height distribution is associated with lower measured mental ability in contemporary US children during the first years of life. Copyright © 2013 Wiley Periodicals, Inc.
RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.
Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z
2017-04-01
We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and the single-hit multi-target model, are included in the software. RAD-ADAPT uses maximum likelihood estimation to obtain parameter estimates under the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R, and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on the human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
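The estimation step described above (Poisson colony counts with a linear-quadratic survival mean) can be sketched outside RAD-ADAPT as follows; the dose levels, cell numbers, and colony counts are made-up toy data, and the code simply maximizes the Poisson log-likelihood for plating efficiency, alpha, and beta.

```python
# Toy sketch (not RAD-ADAPT itself): colony counts modeled as Poisson with
# mean n_seeded * PE * exp(-(alpha*D + beta*D^2)); PE, alpha, beta found by
# maximizing the Poisson log-likelihood. All data values are made up.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

dose = np.array([0.0, 2.0, 4.0, 6.0, 8.0])            # Gy
seeded = np.array([200, 400, 1000, 4000, 10000])       # cells plated per dish
counts = np.array([120, 150, 180, 260, 210])           # colonies observed (toy data)

def neg_loglik(p):
    pe, alpha, beta = p
    mu = seeded * pe * np.exp(-(alpha * dose + beta * dose ** 2))
    return -np.sum(counts * np.log(mu) - mu - gammaln(counts + 1))

fit = minimize(neg_loglik, x0=[0.5, 0.2, 0.02],
               bounds=[(1e-3, 1.0), (0.0, 2.0), (0.0, 0.5)])
print("plating efficiency, alpha, beta:", fit.x)
```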
NASA Astrophysics Data System (ADS)
Iwasaki, Ryosuke; Takagi, Ryo; Tomiyasu, Kentaro; Yoshizawa, Shin; Umemura, Shin-ichiro
2017-07-01
Targeting the ultrasound beam and predicting thermal lesion formation in advance are requirements for monitoring high-intensity focused ultrasound (HIFU) treatment safely and reproducibly. To visualize the HIFU focal zone, we utilized an acoustic radiation force impulse (ARFI) imaging-based method. After displacements were induced inside tissues with pulsed HIFU, called the push pulse exposure, the distribution of axial displacements started expanding and moving. To improve prediction accuracy by acquiring RF data immediately after and during the HIFU push pulse exposure, we attempted methods using extrapolation estimation and HIFU noise elimination. The distributions extrapolated back in the time domain to the end of the push pulse exposure are in good agreement with tissue coagulation at the center. The results suggest that the proposed focal zone visualization, employing pulsed HIFU with the high-speed ARFI imaging method, is useful for predicting thermal coagulation in advance.
An approach to constrained aerodynamic design with application to airfoils
NASA Technical Reports Server (NTRS)
Campbell, Richard L.
1992-01-01
An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.
Retention assessment of magnetic nanoparticles in rat arteries with micro-computed tomography
NASA Astrophysics Data System (ADS)
Tu, Shu-Ju; Wu, Siao-Yun; Wang, Fu-Sheng; Ma, Yunn-Hwa
2014-03-01
Magnetic nanoparticles (MNPs) may serve as carriers for pharmacological agents to the target in a magnetic-force guiding system. It is essential to achieve effective retention of MNPs through the external magnet placement. However, it is difficult to estimate the retention efficiency of MNPs and validate the experimental strategies. Micro-CT was used to identify the spatial distribution of MNP retention, and image analysis was then extended to evaluate MNP delivery efficiency. Male Sprague Dawley rats were anesthetized to expose abdominal arteries with an NdFeB magnet of 4.9 kG placed by the left iliac artery. After a 20 min equilibrium period, arteries were ligated, removed and fixed in a paraformaldehyde solution. Experiments were performed with intravenous injection in our platform in two independent groups: MNPs were used in the first group, while recombinant tissue plasminogen activator (rtPA) was attached to MNPs as rtPA-MNPs in the second group. Image analysis of the micro-CT data shows that the average retention volumes of MNPs and rtPA-MNPs in the left iliac arteries are 9.3- and 6.3-fold those in the right. Large local aggregation of MNPs and rtPA-MNPs in the left iliac arteries is the consequence of external magnet placement, suggesting the feasibility of magnetic targeting through intravenous administration. We also determined that, on average, 0.57% of MNPs and 0.064% of rtPA-MNPs were retained in the left iliac artery. It was estimated that an average rtPA concentration of 60.16 µg mL⁻¹ may be achieved with rtPA-MNPs. With the micro-CT imaging approach, we visualized the aggregation of retained particles, reconstructed the 3D distribution of relative retention, estimated the average particle number of local retention, and determined the efficiency of targeted delivery. In particular, our quantitative image assessment suggests that intravenous administration of rtPA-MNPs may retain a local concentration of rtPA high enough to induce thrombolysis.
On the interplay effects with proton scanning beams in stage III lung cancer.
Li, Yupeng; Kardar, Laleh; Li, Xiaoqiang; Li, Heng; Cao, Wenhua; Chang, Joe Y; Liao, Li; Zhu, Ronald X; Sahoo, Narayan; Gillin, Michael; Liao, Zhongxing; Komaki, Ritsuko; Cox, James D; Lim, Gino; Zhang, Xiaodong
2014-02-01
To assess the dosimetric impact of interplay between a spot-scanning proton beam and respiratory motion in intensity-modulated proton therapy (IMPT) for stage III lung cancer. Eleven patients were sampled from 112 patients with stage III non-small cell lung cancer to represent the distribution of the 112 patients in terms of target size and motion. Clinical target volumes (CTVs) and planning target volumes (PTVs) were defined according to the authors' clinical protocol. Uniform and realistic breathing patterns were considered along with regular- and hypofractionation scenarios. The dose contributed by a spot was fully calculated on the computed tomography (CT) images corresponding to the respiratory phase in which the spot is delivered, and then accumulated to the reference phase of the 4DCT to generate the dynamic dose, which provides an estimate of what might be delivered under the influence of the interplay effect. The dynamic dose distributions at different numbers of fractions were compared with the corresponding 4D composite dose, which is the equally weighted average of the doses computed on the respiratory phases of a 4DCT image set. Under regular fractionation, the average and maximum differences in CTV coverage between the 4D composite and dynamic doses after delivery of all 35 fractions were no more than 0.2% and 0.9%, respectively. The maximum differences between the two dose distributions for the maximum dose to the spinal cord, heart V40, esophagus V55, and lung V20 were 1.2 Gy, 0.1%, 0.8%, and 0.4%, respectively. Although relatively large single-fraction differences, correlated with small CTVs relative to motion, were observed, the authors' biological response calculations suggested that this interfractional dose variation may have limited biological impact. Assuming a hypofractionation scenario, the differences between the 4D composite and dynamic doses were well confined even for a single fraction. Despite the presence of the interplay effect, the delivered dose may be reliably estimated using the 4D composite dose. In general, the interplay effect may not be a primary concern with IMPT for lung cancer at the authors' institution. The described interplay analysis tool may be used to provide additional confidence in treatment delivery.
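A toy sketch of the two dose constructs compared above is given below, assuming per-phase dose-influence matrices that are already deformed to the reference phase. The dynamic dose uses the phase in which each spot actually falls, while the 4D composite is the equally weighted phase average; the geometry, delivery times, and breathing period are all assumptions.

```python
# Toy 1-D sketch: D[p] maps spot weights to dose on the reference phase for
# spots delivered in phase p (deformation assumed already applied). Dynamic
# dose uses the phase each spot falls in; 4D composite averages over phases.
import numpy as np

rng = np.random.default_rng(5)
n_voxels, n_spots, n_phases = 50, 40, 10
D = [np.abs(rng.normal(1.0, 0.3, size=(n_voxels, n_spots))) for _ in range(n_phases)]
w = rng.uniform(0.5, 1.5, size=n_spots)                            # spot weights

delivery_time = np.cumsum(rng.uniform(0.1, 0.5, size=n_spots))     # seconds, assumed spot timing
phase_of_spot = ((delivery_time % 4.0) / 4.0 * n_phases).astype(int)   # 4 s breathing period

dynamic = sum(D[phase_of_spot[s]][:, s] * w[s] for s in range(n_spots))
composite = np.mean([Dp @ w for Dp in D], axis=0)
print("max |dynamic - composite| relative to max composite:",
      np.max(np.abs(dynamic - composite)) / composite.max())
```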
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson Tedgren, A; Persson, M; Nilsson, J
Purpose: To retrospectively re-calculate dose distributions for selected head and neck cancer patients, earlier treated with HDR 192Ir brachytherapy, using Monte Carlo (MC) simulations and compare results to distributions from the planning system derived using the TG43 formalism. To study differences between dose to medium (as obtained with the MC code) and dose to water in medium as obtained through (1) ratios of stopping powers and (2) ratios of mass energy absorption coefficients between water and medium. Methods: The MC code Algebra was used to calculate dose distributions according to earlier actual treatment plans using anonymized plan data and CT images in DICOM format. Ratios of stopping powers and mass energy absorption coefficients for water with various media, obtained from 192Ir spectra, were used in toggling between dose to water and dose to media. Results: Differences between the initial planned TG43 dose distributions and the doses to media calculated by MC are insignificant in the target volume. Differences are moderate (within 4–5% at distances of 3–4 cm) but increase with distance and are most notable in bone and at the patient surface. Differences between dose to water and dose to medium are within 1-2% when using mass energy absorption coefficients to toggle between the two quantities but increase to above 10% for bone using stopping power ratios. Conclusion: MC predicts target doses for head and neck cancer patients in close agreement with TG43. MC yields improved dose estimations outside the target where a larger fraction of the dose is from scattered photons. Awareness and clear reporting of absorbed dose values are important when using model-based algorithms. Differences in bone media can exceed 10% depending on how dose to water in medium is defined.
Paul, Sabyasachi; Sahoo, G S; Tripathy, S P; Sharma, S C; Ramjilal; Ninawe, N G; Sunil, C; Gupta, A K; Bandyopadhyay, T
2014-06-01
A systematic study on the measurement of neutron spectra emitted from the interaction of protons of various energies with a thick beryllium target has been carried out. The measurements were carried out in the forward direction (at 0° with respect to the direction of the protons) using CR-39 detectors. The doses were estimated using the in-house image analysis program autoTRAK_n, which works on the principle of luminosity variation in and around the track boundaries. A total of six proton energies, from 4 MeV to 24 MeV in steps of 4 MeV, were chosen for the study of the neutron yields and the estimation of doses. Nearly 92% of the recoil tracks developed after chemical etching were circular in nature, but the size distributions of the recoil tracks were not found to be linearly dependent on the projectile energy. The neutron yield and dose values were found to increase linearly with increasing projectile energy. The response of the CR-39 detector was also investigated at different beam currents at two different proton energies. A linear increase of neutron yield with beam current was observed.
NASA Astrophysics Data System (ADS)
Coene, A.; Crevecoeur, G.; Dupré, L.; Vaes, P.
2013-06-01
In recent years, magnetic nanoparticles (MNPs) have gained increased attention due to their superparamagnetic properties. These properties allow the development of innovative biomedical applications such as targeted drug delivery and tumour heating. However, these modalities lack effective operation owing to inaccurate quantification of the spatial MNP distribution. This paper proposes an approach for assessing the one-dimensional (1D) MNP distribution using electron paramagnetic resonance (EPR). EPR is able to accurately determine the MNP concentration in a single volume but not the MNP distribution throughout this volume. A new approach that exploits the solution of inverse problems for the correct interpretation of the measured EPR signals is investigated. We achieve reconstruction of the 1D distribution of MNPs using EPR. Furthermore, the impact of temperature control on the reconstructed distributions is analysed by comparing two EPR setups, the latter of which is temperature controlled. Reconstruction quality for the temperature-controlled setup increases by an average of 5%, with a maximum increase of 13%, for distributions with relatively lower iron concentrations and higher resolutions. However, these measurements are only a validation of our new method and form no hard limits.
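The inverse-problem step can be sketched generically. Assuming a sensitivity matrix A that maps a discretized 1D concentration profile to the EPR measurements, a Tikhonov-regularized least-squares solve recovers the profile; the matrix, the true profile, and the noise level below are illustrative assumptions, not the actual EPR forward model.

```python
# Generic Tikhonov sketch: recover a 1D concentration profile c from signals
# s = A c + noise by solving min ||A c - s||^2 + lam ||c||^2. The sensitivity
# matrix, profile, and noise level are assumptions, not the real EPR model.
import numpy as np

rng = np.random.default_rng(6)
n_meas, n_bins = 30, 60
positions = np.linspace(0.0, 1.0, n_bins)
sensor_pos = np.linspace(0.0, 1.0, n_meas)
A = np.exp(-((sensor_pos[:, None] - positions[None, :]) ** 2) / 0.02)   # assumed sensitivity

c_true = np.exp(-((positions - 0.4) ** 2) / 0.005)           # assumed true 1D distribution
signal = A @ c_true + 0.01 * rng.standard_normal(n_meas)      # noisy measurements

lam = 1e-2
c_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_bins), A.T @ signal)
print("relative reconstruction error:",
      np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true))
```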
Adaptive Sequential Monte Carlo for Multiple Changepoint Analysis
Heard, Nicholas A.; Turcotte, Melissa J. M.
2016-05-21
Process monitoring and control requires detection of structural changes in a data stream in real time. This paper introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables re-balancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and non-conjugate Bayesian models for the intensity. Lastly, appendices to the article are available online, illustrating the method on other models and applications.
Zhu, Zhengfei; Liu, Wei; Gillin, Michael; Gomez, Daniel R; Komaki, Ritsuko; Cox, James D; Mohan, Radhe; Chang, Joe Y
2014-05-06
We assessed the robustness of passive scattering proton therapy (PSPT) plans for patients in a phase II trial of PSPT for stage III non-small cell lung cancer (NSCLC) by using the worst-case scenario method, and compared the worst-case dose distributions with the appearance of locally recurrent lesions. Worst-case dose distributions were generated for each of 9 patients who experienced recurrence after concurrent chemotherapy and PSPT to 74 Gy(RBE) for stage III NSCLC by simulating and incorporating uncertainties associated with set-up, respiration-induced organ motion, and proton range in the planning process. The worst-case CT scans were then fused with the positron emission tomography (PET) scans to locate the recurrence. Although the volumes enclosed by the prescription isodose lines in the worst-case dose distributions were consistently smaller than enclosed volumes in the nominal plans, the target dose coverage was not significantly affected: only one patient had a recurrence outside the prescription isodose lines in the worst-case plan. PSPT is a relatively robust technique. Local recurrence was not associated with target underdosage resulting from estimated uncertainties in 8 of 9 cases.
Fusion-based multi-target tracking and localization for intelligent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2008-04-01
In this paper, we have presented two approaches addressing visual target tracking and localization in complex urban environments. The two techniques presented in this paper are: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The track of each individual target was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on the nearest 4-neighbor method, and in fine estimation, we use Euclidean interpolation to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in a complex urban environment.
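The association-matrix step can be sketched with histogram intersection as an assumed similarity measure (the abstract does not specify one); each tracked template and each detection gets an RGB histogram, and the matrix of pairwise similarities is resolved greedily. All image patches below are toy data.

```python
# Toy association step: RGB histograms per template and detection, histogram
# intersection as an assumed similarity, greedy row-wise assignment (conflicts
# ignored for brevity). Patches are synthetic with distinct color ranges.
import numpy as np

def rgb_histogram(patch, bins=8):
    """Concatenated, normalized per-channel histogram of an RGB patch."""
    h = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def similarity(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(7)
targets = [rng.integers(low, low + 100, size=(20, 20, 3)) for low in (0, 80, 156)]
order = [2, 0, 1]                                              # detections arrive shuffled
detections = [np.clip(targets[k] + rng.integers(-10, 11, size=(20, 20, 3)), 0, 255) for k in order]

A = np.array([[similarity(rgb_histogram(t), rgb_histogram(d)) for d in detections] for t in targets])
assignment = {i: int(np.argmax(A[i])) for i in range(len(targets))}
print("association matrix:\n", np.round(A, 2))
print("target -> detection assignment:", assignment)
```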
Long-term cost-effectiveness of disease management in systolic heart failure.
Miller, George; Randolph, Stephen; Forkner, Emma; Smith, Brad; Galbreath, Autumn Dawn
2009-01-01
Although congestive heart failure (CHF) is a primary target for disease management programs, previous studies have generated mixed results regarding the effectiveness and cost savings of disease management when applied to CHF. We estimated the long-term impact of systolic heart failure disease management from the results of an 18-month clinical trial. We used data generated from the trial (starting population distributions, resource utilization, mortality rates, and transition probabilities) in a Markov model to project results of continuing the disease management program for the patients' lifetimes. Outputs included distribution of illness severity, mortality, resource consumption, and the cost of resources consumed. Both cost and effectiveness were discounted at a rate of 3% per year. Cost-effectiveness was computed as cost per quality-adjusted life year (QALY) gained. Model results were validated against trial data and indicated that, over their lifetimes, patients experienced a lifespan extension of 51 days. Combined discounted lifetime program and medical costs were $4850 higher in the disease management group than the control group, but the program had a favorable long-term discounted cost-effectiveness of $43,650/QALY. These results are robust to assumptions regarding mortality rates, the impact of aging on the cost of care, the discount rate, utility values, and the targeted population. Estimation of the clinical benefits and financial burden of disease management can be enhanced by model-based analyses to project costs and effectiveness. Our results suggest that disease management of heart failure patients can be cost-effective over the long term.
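A Markov cohort projection of the kind described can be sketched as follows; the severity states, transition matrices, costs, utilities, program cost, and discount rate are all made-up placeholders rather than trial values, and the output is an incremental cost per QALY for disease management versus control.

```python
# Toy Markov cohort model: yearly transitions among severity states plus death,
# discounted costs and QALYs, and an ICER for disease management vs control.
# All transition probabilities, costs, and utilities are placeholders.
import numpy as np

P_control = np.array([[0.80, 0.12, 0.04, 0.04],      # states: mild, moderate, severe, dead
                      [0.05, 0.75, 0.12, 0.08],
                      [0.01, 0.09, 0.75, 0.15],
                      [0.00, 0.00, 0.00, 1.00]])
P_dm = np.array([[0.84, 0.10, 0.03, 0.03],           # slightly better under management (assumed)
                 [0.07, 0.76, 0.10, 0.07],
                 [0.02, 0.10, 0.75, 0.13],
                 [0.00, 0.00, 0.00, 1.00]])
cost = np.array([2000.0, 5000.0, 12000.0, 0.0])        # annual cost per state (USD)
utility = np.array([0.80, 0.65, 0.45, 0.0])            # QALY weight per state
program_cost = 600.0                                    # annual program cost per living patient

def project(P, extra_cost=0.0, years=30, disc=0.03):
    dist = np.array([0.5, 0.35, 0.15, 0.0])             # starting severity distribution
    total_cost = total_qaly = 0.0
    for t in range(years):
        d = 1.0 / (1 + disc) ** t
        total_cost += d * (dist @ cost + extra_cost * (1.0 - dist[3]))
        total_qaly += d * (dist @ utility)
        dist = dist @ P
    return total_cost, total_qaly

c0, q0 = project(P_control)
c1, q1 = project(P_dm, extra_cost=program_cost)
print("incremental cost per QALY gained:", (c1 - c0) / (q1 - q0))
```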
NASA Astrophysics Data System (ADS)
Chatzidakis, S.; Choi, C. K.; Tsoukalas, L. H.
2016-08-01
The potential for non-proliferation monitoring of spent nuclear fuel sealed in dry casks, which interact continuously with naturally generated cosmic-ray muons, is investigated. Treatments of the muon RMS scattering angle by Moliere, Rossi-Greisen, Highland, and Lynch-Dahl were analyzed and compared with simplified Monte Carlo simulations. The Lynch-Dahl expression has the lowest error and appears to be appropriate when performing conceptual calculations for high-Z, thick targets such as dry casks. The GEANT4 Monte Carlo code was used to simulate dry casks with various fuel loadings, and scattering variance estimates for each case were obtained. The scattering variance estimation was shown to be unbiased, and using Chebyshev's inequality it was found that 10⁶ muons will provide estimates of the scattering variances that are within 1% of the true value at a 99% confidence level. These estimates were used as reference values to calculate scattering distributions and evaluate the asymptotic behavior for small variations in fuel loading. It is shown that the scattering distributions for a fully loaded dry cask and one with a fuel assembly missing initially overlap significantly, but their separation increases with an increasing number of muons. One missing fuel assembly can be distinguished from a fully loaded cask with only a small overlap between the distributions, which is the case for 100,000 muons. This indicates that the removal of a standard fuel assembly can be identified using muons, provided that enough muons are collected. A Bayesian algorithm was developed to classify dry casks and provide a decision rule that minimizes the risk of making an incorrect decision. The algorithm performance was evaluated and the lower detection limit was determined.
Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.
Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni
2018-06-15
Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.
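A stripped-down version of the delta-density idea, ignoring the conditioning on covariates and the data-revision handling described above, fits a kernel density to historical week-to-week changes and chains sampled deltas forward from the latest observation; the surveillance series below is simulated toy data.

```python
# Stripped-down delta-density sketch: fit a Gaussian KDE to historical
# week-to-week changes, then chain sampled deltas forward from the latest
# observation to form distributional forecasts. The series is simulated.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)
history = np.cumsum(rng.normal(0.0, 0.3, size=300)) + 2.0     # toy surveillance series
delta_kde = gaussian_kde(np.diff(history))

def forecast_trajectories(last_value, horizon=4, n_samples=2000):
    """Sample trajectories by chaining one-week-ahead delta draws."""
    traj = np.full((n_samples, horizon + 1), float(last_value))
    for h in range(1, horizon + 1):
        traj[:, h] = traj[:, h - 1] + delta_kde.resample(n_samples)[0]
    return traj[:, 1:]

samples = forecast_trajectories(history[-1])
print("1-week-ahead 10/50/90th percentiles:", np.percentile(samples[:, 0], [10, 50, 90]))
```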
Luminescence imaging of water during proton-beam irradiation for range estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Okumura, Satoshi; Komori, Masataka
Purpose: Proton therapy has the ability to selectively deliver a dose to the target tumor, so the dose distribution should be accurately measured by a precise and efficient method. The authors found that luminescence was emitted from water during proton irradiation and conjectured that this phenomenon could be used for estimating the dose distribution. Methods: To achieve more accurate dose distribution, the authors set water phantoms on a table with a spot scanning proton therapy system and measured the luminescence images of these phantoms with a high-sensitivity, cooled charge coupled device camera during proton-beam irradiation. The authors imaged the phantoms of pure water, fluorescein solution, and an acrylic block. Results: The luminescence images of water phantoms taken during proton-beam irradiation showed clear Bragg peaks, and the measured proton ranges from the images were almost the same as those obtained with an ionization chamber. Furthermore, the image of the pure-water phantom showed almost the same distribution as the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of the fluorescein solution had ∼3 times higher intensity than water, with the same proton range as that of water. The luminescence image of the acrylic phantom had a 14.5% shorter proton range than that of water; the proton range in the acrylic phantom generally matched the calculated value. The luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 s. Conclusions: Luminescence imaging during proton-beam irradiation is promising as an effective method for range estimation in proton therapy.
Meditations on birth weight: is it better to reduce the variance or increase the mean?
Haig, David
2003-07-01
A conceptual model is presented here in which the birth weight distribution is decomposed into a distribution of target weights and a distribution of perturbations from the target. The target weight is the adaptive goal of fetal development. In the simplest model, perinatal mortality is independent of variation in target weight and determined solely by the magnitude of the perturbation of birth weight from the target. In this model, mortality risk is concentrated in the tails of the birth weight distribution. A difference between populations in their distributions of target weights will be associated with a corresponding shift in their curves of weight-specific risk, without any difference between the populations in overall risk. In this model, risk would be reduced by decreasing the variance of the distribution of perturbations. The model is discussed in the context of the so-called "paradoxes of low birth weight."
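The conceptual model lends itself to a small simulation. Under the assumed numbers below, birth weight is a target plus a perturbation and mortality depends only on the absolute perturbation; two populations with different target distributions then show essentially identical overall mortality but different mortality at any fixed low birth weight.

```python
# Toy simulation of the target-plus-perturbation model: mortality depends only
# on |perturbation|, so populations with different target distributions have
# the same overall risk but shifted weight-specific risk. Numbers are assumed.
import numpy as np

rng = np.random.default_rng(10)
n = 200_000
for label, target_mean in [("population A", 3300.0), ("population B", 3500.0)]:
    target = rng.normal(target_mean, 300.0, size=n)                  # adaptive target weights (g)
    perturb = rng.normal(0.0, 400.0, size=n)                          # perturbations from target (g)
    weight = target + perturb
    risk = 1.0 / (1.0 + np.exp(-(np.abs(perturb) - 800.0) / 120.0))   # risk from perturbation only
    death = rng.random(n) < risk
    low = (weight > 2400) & (weight < 2600)
    print(label, f"overall mortality: {death.mean():.4f},",
          f"mortality at 2400-2600 g: {death[low].mean():.4f}")
```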
Ito, Toshihiro; Kitajima, Masaaki; Kato, Tsuyoshi; Ishii, Satoshi; Segawa, Takahiro; Okabe, Satoshi; Sano, Daisuke
2017-11-15
Multiple barriers are widely employed for managing microbial risks in water reuse, in which different types of wastewater treatment units (biological treatment, disinfection, etc.) and health protection measures (use of personal protective gear, vegetable washing, etc.) are combined to achieve a performance target value of log₁₀ reduction (LR) of viruses. The LR virus target value needs to be calculated based on the data obtained from monitoring the viruses of concern and the water reuse scheme in the context of the countries/regions where water reuse is implemented. In this study, we calculated the virus LR target values under two exposure scenarios for reclaimed wastewater irrigation in Japan, using the concentrations of indigenous viruses in untreated wastewater and a defined tolerable annual disease burden (10⁻⁴ or 10⁻⁶ disability-adjusted life years per person per year (DALY pppy)). Three genogroups of norovirus (norovirus genogroup I (NoV GI), genogroup II (NoV GII), and genogroup IV (NoV GIV)) in untreated wastewater were quantified as model viruses using reverse transcription-microfluidic quantitative PCR, and only NoV GII was present in quantifiable concentration. The probabilistic distribution of NoV GII concentration in untreated wastewater was then estimated from its concentration dataset, and used to calculate the LR target values of NoV GII for wastewater treatment. When accidental ingestion of reclaimed wastewater by Japanese farmers was assumed, the NoV GII LR target values corresponding to the tolerable annual disease burden of 10⁻⁶ DALY pppy were 3.2, 4.4, and 5.7 at the 95th, 99th, and 99.9th percentiles, respectively. These percentile values, defined as "reliability," represent the cumulative probability of the NoV GII concentration distribution in untreated wastewater below the corresponding tolerable annual disease burden after wastewater reclamation. An approximate 1-log₁₀ difference in LR target values was observed between 10⁻⁴ and 10⁻⁶ DALY pppy. The LR target values were influenced mostly by the change in the logarithmic standard deviation (SD) values of NoV GII concentration in untreated wastewater and the reliability values, which highlights the importance of accurately determining the probabilistic distribution of reference virus concentrations in source water for water reuse. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Stationary Temperature Distribution in a Rotating Ring-Shaped Target
NASA Astrophysics Data System (ADS)
Kazarinov, N. Yu.; Gulbekyan, G. G.; Kazacha, V. I.
2018-05-01
For a rotating ring-shaped target irradiated by a heavy-ion beam, a differential equation for computing the stationary distribution of the temperature averaged over the cross section is derived. The ion-beam diameter is assumed to be equal to the ring width. Solving this equation allows one to obtain the stationary temperature distribution along the ring-shaped target depending on the ion-beam, target, and cooling-gas parameters. Predictions are obtained for the rotating target to be installed at the DC-280 cyclotron. For an existing rotating target irradiated by an ion beam, our predictions are compared with the measured temperature distribution.
Information-Based Analysis of Data Assimilation (Invited)
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Gupta, H. V.; Crow, W. T.; Gong, W.
2013-12-01
Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation methods make the application of Bayes' law tractable either by employing assumptions about the prior, posterior and likelihood distributions (e.g., the Kalman family of filters) or by using resampling methods (e.g., bootstrap filter). We propose to quantify the efficiency of these approximations in an OSSE setting using information theory and, in an OSSE or real-world validation setting, to measure the amount - and more importantly, the quality - of information extracted from observations during data assimilation. To analyze DA assumptions, uncertainty is quantified as the Shannon-type entropy of a discretized probability distribution. The maximum amount of information that can be extracted from observations about model states is the mutual information between states and observations, which is equal to the reduction in entropy in our estimate of the state due to Bayesian filtering. The difference between this potential and the actual reduction in entropy due to Kalman (or other type of) filtering measures the inefficiency of the filter assumptions. Residual uncertainty in DA posterior state estimates can be attributed to three sources: (i) non-injectivity of the observation operator, (ii) noise in the observations, and (iii) filter approximations. The contribution of each of these sources is measurable in an OSSE setting. The amount of information extracted from observations by data assimilation (or system identification, including parameter estimation) can also be measured by Shannon's theory. Since practical filters are approximations of Bayes' law, it is important to know whether the information that is extracted from observations by a filter is reliable. We define information as either good or bad, and propose to measure these two types of information using partial Kullback-Leibler divergences. Defined this way, good and bad information sum to total information. This segregation of information into good and bad components requires a validation target distribution; in a DA OSSE setting, this can be the true Bayesian posterior, but in a real-world setting the validation target might be determined by a set of in situ observations.
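The entropy and mutual-information bookkeeping described here can be illustrated on a discretized toy state-observation pair; the filter posterior entropy at the end is a hypothetical number, purely to show how the inefficiency term would be computed.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(0, 1, n)                      # toy "true" model state (prior samples)
y = x + rng.normal(0, 0.5, n)                # observation = state + noise

# Discretize into bins to get Shannon-type entropies.
edges = np.linspace(-4, 4, 33)
px, _ = np.histogram(x, edges)
pxy, _, _ = np.histogram2d(x, y, [edges, edges])
px = px / px.sum()
pxy = pxy / pxy.sum()
py = pxy.sum(axis=0)

H_x = entropy(px)
H_y = entropy(py)
H_xy = entropy(pxy.ravel())
mutual_info = H_x + H_y - H_xy               # max. entropy reduction observations can supply
print(f"H(x) = {H_x:.2f} bits, I(x;y) = {mutual_info:.2f} bits")

# Filter inefficiency = attainable entropy reduction (mutual information) minus the
# reduction the filter actually achieves. The posterior entropy below is hypothetical.
H_post_filter = H_x - 0.9
print("filter inefficiency (bits):", mutual_info - (H_x - H_post_filter))
```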
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
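A toy illustration of observation targeting with an ensemble Kalman update (not the LETKF of the paper): a single scalar observation is placed either at the state component with the largest ensemble variance or at a random component, and the analysis RMSE is compared. All dimensions and error levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_ens, obs_err = 40, 20, 0.5

truth = rng.normal(0, 1, n_state)
spread = rng.uniform(0.2, 2.0, n_state)                # heterogeneous prior uncertainty
ens = truth + rng.normal(0, 1, (n_ens, n_state)) * spread

def assimilate(ens, j):
    """Perturbed-observation EnKF update with one scalar observation of state j."""
    y = truth[j] + rng.normal(0, obs_err)
    anomalies = ens - ens.mean(axis=0)
    var_j = anomalies[:, j] @ anomalies[:, j] / (n_ens - 1)
    cov_xj = anomalies.T @ anomalies[:, j] / (n_ens - 1)
    gain = cov_xj / (var_j + obs_err**2)
    perturbed = y + rng.normal(0, obs_err, n_ens)
    return ens + np.outer(perturbed - ens[:, j], gain)

def rmse(e):
    return np.sqrt(np.mean((e.mean(axis=0) - truth) ** 2))

targeted = np.argmax(ens.var(axis=0, ddof=1))          # observe where ensemble variance is largest
random_loc = rng.integers(n_state)
print("prior RMSE        :", rmse(ens))
print("targeted obs RMSE :", rmse(assimilate(ens, targeted)))   # typically the lowest
print("random obs RMSE   :", rmse(assimilate(ens, random_loc)))
```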
Kalman filter data assimilation: Targeting observations and parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex
2014-06-15
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
A framework for global river flood risk assessment
NASA Astrophysics Data System (ADS)
Winsemius, H. C.; Van Beek, L. P. H.; Bouwman, A.; Ward, P. J.; Jongman, B.
2012-04-01
There is an increasing need for strategic global assessments of flood risks. Such assessments may be required by: (a) International Financing Institutes and Disaster Management Agencies to evaluate where, when, and which investments in flood risk mitigation are most required; (b) (re-)insurers, who need to determine their required coverage capital; and (c) large companies to account for risks of regional investments. In this contribution, we propose a framework for global river flood risk assessment. The framework combines coarse scale resolution hazard probability distributions, derived from global hydrological model runs (typical scale about 0.5 degree resolution) with high resolution estimates of exposure indicators. The high resolution is required because floods typically occur at a much smaller scale than the typical resolution of global hydrological models, and exposure indicators such as population, land use and economic value generally are strongly variable in space and time. The framework therefore estimates hazard at a high resolution ( 1 km2) by using a) global forcing data sets of the current (or in scenario mode, future) climate; b) a global hydrological model; c) a global flood routing model, and d) importantly, a flood spatial downscaling routine. This results in probability distributions of annual flood extremes as an indicator of flood hazard, at the appropriate resolution. A second component of the framework combines the hazard probability distribution with classical flood impact models (e.g. damage, affected GDP, affected population) to establish indicators for flood risk. The framework can be applied with a large number of datasets and models and sensitivities of such choices can be evaluated by the user. The framework is applied using the global hydrological model PCR-GLOBWB, combined with a global flood routing model. Downscaling of the hazard probability distributions to 1 km2 resolution is performed with a new downscaling algorithm, applied on a number of target regions. We demonstrate the use of impact models in these regions based on global GDP, population, and land use maps. In this application, we show sensitivities of the estimated risks with regard to the use of different climate input datasets, decisions made in the downscaling algorithm, and different approaches to establish distributed estimates of GDP and asset exposure to flooding.
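A compact sketch of the risk component of such a framework for one grid cell, combining a hazard probability distribution for annual-maximum flood depth with a depth-damage impact model; the Gumbel parameters, asset value and damage curve below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
loc, scale = 0.3, 0.4            # assumed Gumbel parameters for annual max flood depth (m)
exposed_value = 2.0e6            # assumed exposed asset value in the cell (USD)

def damage_fraction(depth_m):
    """Simple depth-damage curve: 0 below 0.5 m, saturating at 1.0 by 3 m."""
    return np.clip((depth_m - 0.5) / 2.5, 0.0, 1.0)

# Expected annual damage by Monte Carlo over the hazard probability distribution.
depths = rng.gumbel(loc, scale, 1_000_000)
ead = np.mean(damage_fraction(depths)) * exposed_value
print(f"expected annual damage: {ead:,.0f} USD")

# Damage for selected return periods (depth at the 1 - 1/T quantile of the Gumbel law).
for T in (10, 100, 1000):
    depth_T = loc - scale * np.log(-np.log(1 - 1 / T))
    print(f"T = {T:>4} yr: depth {depth_T:.2f} m, "
          f"damage {damage_fraction(depth_T) * exposed_value:,.0f} USD")
```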
48 CFR 1852.216-84 - Estimated cost and incentive fee.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Provisions and Clauses 1852.216-84 Estimated cost and incentive fee. As prescribed in 1816.406-70(d), insert the following clause: Estimated Cost and Incentive Fee (OCT 1996) The target cost of this contract is $___. The target fee of this contract is $___. The total target cost and target fee as contemplated by the...
KuKanich, Butch; Papich, Mark; Huff, David; Stoskopf, Michael
2004-06-01
Amikacin, an aminoglycoside antimicrobial, was administered to a killer whale (Orcinus orca) and a beluga whale (Delphinapterus leucas) for the treatment of clinical signs consistent with gram-negative aerobic bacterial infections. Dosage regimens were designed to target a maximal plasma concentration 8-10 times the minimum inhibitory concentration of the pathogen and to reduce the risk of aminoglycoside toxicity. Allometric analysis of published pharmacokinetic parameters in mature animals yielded a relationship for amikacin's volume of distribution, in milliliters, given by the equation Vd = 151.058(BW)^1.043. An initial dose for amikacin was estimated from the calculated volume of distribution and the targeted maximal concentration. With this information, dosage regimens for i.m. administration were designed for a killer whale and a beluga whale. Therapeutic drug monitoring was performed on each whale to assess the individual pharmacokinetic parameters. The elimination half-life (5.99 hr), volume of distribution per bioavailability (319 ml/kg), and clearance per bioavailability (0.61 ml/min/kg) were calculated for the killer whale. The elimination half-life (5.03 hr), volume of distribution per bioavailability (229 ml/kg), and clearance per bioavailability (0.53 ml/min/kg) were calculated for the beluga whale. The volume of distribution predicted from the allometric equation for both whales was similar to the calculated pharmacokinetic parameter. Both whales exhibited a prolonged elimination half-life and decreased clearance when compared with other animal species despite normal renal parameters on biochemistry panels. Allometric principles and therapeutic drug monitoring were used to accurately determine the doses in these cases and to avoid toxicity.
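The dosing logic in the abstract reduces to two lines of arithmetic: predict the volume of distribution from the allometric equation and multiply by the targeted peak concentration. The MIC and target multiple below are placeholders (only the allometric coefficients come from the abstract), and body weight is assumed to be in kilograms.

```python
def amikacin_initial_dose(body_weight_kg, mic_ug_per_ml=2.0, target_multiple=10.0):
    """Sketch of the allometric dose estimate; MIC and multiplier are hypothetical."""
    vd_ml = 151.058 * body_weight_kg ** 1.043        # allometric Vd (mL), BW assumed in kg
    c_max_target = target_multiple * mic_ug_per_ml   # ug/mL, 8-10x MIC per the abstract
    dose_mg = c_max_target * vd_ml / 1000.0          # ug/mL * mL = ug; /1000 -> mg
    return vd_ml, dose_mg

vd, dose = amikacin_initial_dose(body_weight_kg=4000.0)   # e.g., a roughly 4000 kg killer whale
print(f"predicted Vd: {vd / 1e6:.0f} L ({vd / 4000:.0f} mL/kg), initial dose: {dose / 1000:.1f} g")
```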
Further analysis of a snowfall enhancement project in the Snowy Mountains of Australia
NASA Astrophysics Data System (ADS)
Manton, Michael J.; Peace, Andrew D.; Kemsley, Karen; Kenyon, Suzanne; Speirs, Johanna C.; Warren, Loredana; Denholm, John
2017-09-01
The first phase of the Snowy Precipitation Enhancement Research Project (SPERP-1) was a confirmatory experiment on winter orographic cloud seeding (Manton et al., 2011). Analysis of the data (Manton and Warren, 2011) found that a statistically significant impact of seeding could be obtained by removing any 5-hour experimental units (EUs) for which the amount of released seeding material was below a specified minimum. Analysis of the SPERP-1 data is extended in the present work by first considering the uncertainties in the measurement of precipitation and in the methodology. It is found that the estimation of the natural precipitation in the target area, based solely on the precipitation in the designated control area, is a significant source of uncertainty. A systematic search for optimal predictors shows that both the Froude number of the low-level flow across the mountains and the control precipitation should be used to estimate the natural precipitation. Applying the optimal predictors for the natural precipitation, statistically significant impacts are found using all EUs. This approach also supports a novel analysis of the sensitivity of seeding impacts to environmental variables, such as wind speed and cloud top temperature. The spatial distribution of seeding impact across the target is investigated. Building on the results of SPERP-1, phase 2 of the experiment (SPERP-2) ran from 2010 to 2013 with the target area extended to the north along the mountain ridges. Using the revised methodology, the seeding impacts in SPERP-2 are found to be consistent with those in SPERP-1, provided that the natural precipitation is estimated accurately.
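A sketch of the revised estimation approach on synthetic data: the natural target-area precipitation is regressed on control-area precipitation and the Froude number using unseeded experimental units, and the seeding impact is the mean observed-minus-predicted difference in seeded units. The data-generating numbers are stand-ins, not SPERP values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic per-EU data: control precipitation, low-level Froude number,
# and target precipitation with an assumed additive seeding effect.
n = 120
control = rng.gamma(2.0, 2.0, n)
froude = rng.uniform(0.3, 1.5, n)
seeded = rng.random(n) < 0.5
natural_target = 0.8 * control + 1.5 * froude + rng.normal(0, 0.5, n)
target = natural_target + np.where(seeded, 0.4, 0.0)

# Fit the "optimal predictor" regression on unseeded EUs only.
X = np.column_stack([np.ones(n), control, froude])
beta, *_ = np.linalg.lstsq(X[~seeded], target[~seeded], rcond=None)

# Seeding impact = observed minus estimated natural precipitation in seeded EUs.
impact = target[seeded] - X[seeded] @ beta
print("estimated mean seeding impact:", impact.mean())
print("standard error              :", impact.std(ddof=1) / np.sqrt(impact.size))
```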
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody's. However, it has a fatal defect that it can't fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
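A small comparison of the two candidate models on synthetic bimodal recovery rates (a stand-in for the Moody's data): a single Beta fit versus a Gaussian kernel density estimate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic bimodal recovery rates: many exposures recover almost everything
# or almost nothing (illustrative mixture, not real loan/bond data).
recovery = np.concatenate([rng.beta(8, 2, 600), rng.beta(2, 9, 400)])

# Single Beta fit (the CreditMetrics / Portfolio Manager / LossCalc style assumption).
a, b, loc, scale = stats.beta.fit(recovery, floc=0, fscale=1)

# Gaussian kernel density estimate.
kde = stats.gaussian_kde(recovery)

grid = np.linspace(0.01, 0.99, 99)
hist, edges = np.histogram(recovery, bins=20, range=(0, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
empirical = np.interp(grid, centers, hist)

# A single Beta cannot follow the two interior modes, while the KDE tracks them closely.
print("max density error, Beta fit:", np.max(np.abs(stats.beta.pdf(grid, a, b) - empirical)).round(2))
print("max density error, KDE     :", np.max(np.abs(kde(grid) - empirical)).round(2))
```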
Measuring household consumption and waste in unmetered, intermittent piped water systems
NASA Astrophysics Data System (ADS)
Kumpel, Emily; Woelfle-Erskine, Cleo; Ray, Isha; Nelson, Kara L.
2017-01-01
Measurements of household water consumption are extremely difficult in intermittent water supply (IWS) regimes in low- and middle-income countries, where water is delivered for short durations, taps are shared, metering is limited, and household storage infrastructure varies widely. Nonetheless, consumption estimates are necessary for utilities to improve water delivery. We estimated household water use in Hubli-Dharwad, India, with a mixed-methods approach combining (limited) metered data, storage container inventories, and structured observations. We developed a typology of household water access according to infrastructure conditions based on the presence of an overhead storage tank and a shared tap. For households with overhead tanks, container measurements and metered data produced statistically similar consumption volumes; for households without overhead tanks, stored volumes underestimated consumption because of significant water use directly from the tap during delivery periods. Households that shared taps consumed much less water than those that did not. We used our water use calculations to estimate waste at the household level and in the distribution system. Very few households used 135 L/person/d, the Government of India design standard for urban systems. Most wasted little water even when unmetered, however, unaccounted-for water in the neighborhood distribution systems was around 50%. Thus, conservation efforts should target loss reduction in the network rather than at households.
Policy implications of uncertainty in modeled life-cycle greenhouse gas emissions of biofuels.
Mullins, Kimberley A; Griffin, W Michael; Matthews, H Scott
2011-01-01
Biofuels have received legislative support recently in California's Low-Carbon Fuel Standard and the Federal Energy Independence and Security Act. Both present new fuel types, but neither provides methodological guidelines for dealing with the inherent uncertainty in evaluating their potential life-cycle greenhouse gas emissions. Emissions reductions are based on point estimates only. This work demonstrates the use of Monte Carlo simulation to estimate life-cycle emissions distributions from ethanol and butanol from corn or switchgrass. Life-cycle emissions distributions for each feedstock and fuel pairing modeled span an order of magnitude or more. Using a streamlined life-cycle assessment, corn ethanol emissions range from 50 to 250 g CO(2)e/MJ, for example, and each feedstock-fuel pathway studied shows some probability of greater emissions than a distribution for gasoline. Potential GHG emissions reductions from displacing fossil fuels with biofuels are difficult to forecast given this high degree of uncertainty in life-cycle emissions. This uncertainty is driven by the importance and uncertainty of indirect land use change emissions. Incorporating uncertainty in the decision making process can illuminate the risks of policy failure (e.g., increased emissions), and a calculated risk of failure due to uncertainty can be used to inform more appropriate reduction targets in future biofuel policies.
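A sketch of the Monte Carlo approach on a hypothetical streamlined corn-ethanol life cycle; the component distributions, their parameters and the gasoline baseline are illustrative, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Hypothetical streamlined life-cycle components, g CO2e per MJ of ethanol.
farming    = rng.lognormal(mean=np.log(35), sigma=0.25, size=n)   # feedstock production
conversion = rng.lognormal(mean=np.log(30), sigma=0.20, size=n)   # biorefinery energy
transport  = rng.lognormal(mean=np.log(5),  sigma=0.30, size=n)
iluc       = rng.lognormal(mean=np.log(40), sigma=0.60, size=n)   # indirect land use change (dominant uncertainty)
ethanol = farming + conversion + transport + iluc

gasoline_baseline = 94.0   # assumed g CO2e/MJ reference value
print("ethanol emissions, 5th-95th percentile:", np.percentile(ethanol, [5, 95]).round(0))
print("P(ethanol > gasoline)                 :", np.mean(ethanol > gasoline_baseline).round(2))
print("P(missing a 20% reduction target)     :", np.mean(ethanol > 0.8 * gasoline_baseline).round(2))
```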
On the internal target model in a tracking task
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Baron, S.
1981-01-01
An optimal control model for predicting operator's dynamic responses and errors in target tracking ability is summarized. The model, which predicts asymmetry in the tracking data, is dependent on target maneuvers and trajectories. Gunners perception, decision making, control, and estimate of target positions and velocity related to crossover intervals are discussed. The model provides estimates for means, standard deviations, and variances for variables investigated and for operator estimates of future target positions and velocities.
Li, Jun; Lin, Qiu-Hua; Kang, Chun-Yu; Wang, Kai; Yang, Xiu-Ting
2018-03-18
Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accompanied by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and the compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response beamformers (MVDR), this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection for weak targets.
Assessing the causal effect of policies: an example using stochastic interventions.
Díaz, Iván; van der Laan, Mark J
2013-11-19
Assessing the causal effect of an exposure often involves the definition of counterfactual outcomes in a hypothetical world in which the stochastic nature of the exposure is modified. Although stochastic interventions are a powerful tool to measure the causal effect of a realistic intervention that intends to alter the population distribution of an exposure, their importance to answer questions about plausible policy interventions has been obscured by the generalized use of deterministic interventions. In this article, we follow the approach described in Díaz and van der Laan (2012) to define and estimate the effect of an intervention that is expected to cause a truncation in the population distribution of the exposure. The observed data parameter that identifies the causal parameter of interest is established, as well as its efficient influence function under the non-parametric model. Inverse probability of treatment weighted (IPTW), augmented IPTW and targeted minimum loss-based estimators (TMLE) are proposed, their consistency and efficiency properties are determined. An extension to longitudinal data structures is presented and its use is demonstrated with a real data example.
Wakeley, Heather L; Hendrickson, Chris T; Griffin, W Michael; Matthews, H Scott
2009-04-01
The combination of current and planned 2007 U.S. ethanol production capacity is 50 billion L/yr, one-third of the Energy Independence and Security Act of 2007 (EISA) target of 136 billion L of biofuels by 2022. In this study, we evaluate transportation impacts and infrastructure requirements for the use of E85 (85% ethanol, 15% gasoline) in light-duty vehicles using a combination of corn and cellulosic ethanol. Ethanol distribution is modeled using a linear optimization model. Estimated average delivered ethanol costs, in 2005 dollars, range from $0.29 to $0.62 per liter ($1.3-2.8 per gallon), depending on transportation distance and mode. Emissions from ethanol transport estimated in this work are up to 2 times those in previous ethanol LCA studies and thus lead to larger total life cycle effects. Long-distance transport of ethanol to the end user can negate ethanol's potential economic and environmental benefits relative to gasoline. To reduce costs, we recommend regional concentration of E85 blends for future ethanol production and use.
Hazardous air pollutants in industrial area of Mumbai - India.
Srivastava, Anjali; Som, Dipanjali
2007-09-01
Hazardous air pollutants (HAPs) have the potential to be distributed into different components of the environment with varying persistence. In the current study, fourteen HAPs were quantified in the air using the TO-17 method in an industrial area of Mumbai. The distribution of these HAPs in different environmental compartments has been calculated using a multimedia mass balance model, TaPL3, along with long-range transport potential and persistence. Results show that most of the target compounds partition mostly into air. Phenol and trifluralin partition predominantly into soil, while ethyl benzene and xylene partition predominantly into the vegetation compartment. Naphthalene has the highest persistence, followed by ethyl benzene, xylene and 1,1,1-trichloroethane. Long-range transport potential is maximum for 1,1,1-trichloroethane. Human health risk, in terms of non-carcinogenic hazard and carcinogenic risk due to exposure to HAPs, has been estimated for industrial workers and residents in the study area considering all possible exposure routes, using the output from the TaPL3 model. The overall carcinogenic risk for residents and workers is estimated to be as high as unity, along with a very high hazard potential.
MASS ESTIMATES OF RAPIDLY MOVING PROMINENCE MATERIAL FROM HIGH-CADENCE EUV IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, David R.; Baker, Deborah; Van Driel-Gesztelyi, Lidia, E-mail: d.r.williams@ucl.ac.uk
We present a new method for determining the column density of erupting filament material using state-of-the-art multi-wavelength imaging data. Much of the prior work on filament/prominence structure can be divided between studies that use a polychromatic approach with targeted campaign observations and those that use synoptic observations, frequently in only one or two wavelengths. The superior time resolution, sensitivity, and near-synchronicity of data from the Solar Dynamics Observatory's Advanced Imaging Assembly allow us to combine these two techniques using photoionization continuum opacity to determine the spatial distribution of hydrogen in filament material. We apply the combined techniques to SDO/AIA observations of a filament that erupted during the spectacular coronal mass ejection on 2011 June 7. The resulting 'polychromatic opacity imaging' method offers a powerful way to track partially ionized gas as it erupts through the solar atmosphere on a regular basis, without the need for coordinated observations, thereby readily offering regular, realistic mass-distribution estimates for models of these erupting structures.
Loading estimates of lead, copper, cadmium, and zinc in urban runoff from specific sources.
Davis, A P; Shokouhian, M; Ni, S
2001-08-01
Urban stormwater runoff is being recognized as a substantial source of pollutants to receiving waters. A number of investigators have found significant levels of metals in runoff from urban areas, especially in highway runoff. As an initiatory study, this work estimates lead, copper, cadmium, and zinc loadings from various sources in a developed area utilizing information available in the literature, in conjunction with controlled experimental and sampling investigations. Specific sources examined include building siding and roofs; automobile brakes, tires, and oil leakage; and wet and dry atmospheric deposition. Important sources identified are building siding for all four metals, vehicle brake emissions for copper and tire wear for zinc. Atmospheric deposition is an important source for cadmium, copper, and lead. Loadings and source distributions depend on building and automobile density assumptions and the type of materials present in the area examined. Identified important sources are targeted for future comprehensive mechanistic studies. Improved information on the metal release and distributions from the specific sources, along with detailed characterization of watershed areas will allow refinements in the predictions.
Radiance and atmosphere propagation-based method for the target range estimation
NASA Astrophysics Data System (ADS)
Cho, Hoonkyung; Chun, Joohwan
2012-06-01
Target range estimation is traditionally based on radar and active sonar systems in modern combat systems. However, the performance of such active sensor devices is degraded tremendously by jamming signals from the enemy. This paper proposes a simple range estimation method between the target and the sensor. Passive IR sensors measure the infrared (IR) radiance radiating from objects in different wavelengths, and this method shows robustness against electromagnetic jamming. The measured target radiance of each wavelength at the IR sensor depends on the emissive properties of the target material and is attenuated by various factors, in particular the distance between the sensor and the target and the atmospheric environment. MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the results from MODTRAN and the measured radiance, the target range is estimated. To statistically analyze the performance of the proposed method, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao lower bound (CRLB) via the probability density function of the measured radiance. We also compare the CRLB with the variance of the ML estimate using Monte Carlo simulation.
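A simplified single-band stand-in for the proposed method, with Beer-Lambert attenuation in place of MODTRAN: the range is estimated by maximum likelihood (least squares under Gaussian noise) and its variance is compared with the Cramer-Rao lower bound. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative stand-in model: mean radiance L(r) = L0 * exp(-alpha * r) + Gaussian noise.
L0, alpha, sigma = 100.0, 0.12, 1.0      # source radiance, extinction (1/km), noise std
true_range = 8.0                          # km

def mean_radiance(r):
    return L0 * np.exp(-alpha * r)

measured = mean_radiance(true_range) + rng.normal(0, sigma)

# Maximum-likelihood range estimate on a grid (Gaussian noise -> least squares).
grid = np.linspace(0.1, 30.0, 3000)
r_hat = grid[np.argmin((measured - mean_radiance(grid)) ** 2)]

# Cramer-Rao lower bound for a single measurement: sigma^2 / (dL/dr)^2.
dL_dr = -alpha * mean_radiance(true_range)
crlb = sigma ** 2 / dL_dr ** 2

# Monte Carlo check of the ML estimator spread against the CRLB.
trials = mean_radiance(true_range) + rng.normal(0, sigma, 2000)
r_mc = grid[np.argmin((trials[:, None] - mean_radiance(grid)) ** 2, axis=1)]
print(f"single estimate: {r_hat:.2f} km, CRLB std: {np.sqrt(crlb):.2f} km, MC std: {r_mc.std():.2f} km")
```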
Gamma-H2AX-based dose estimation for whole and partial body radiation exposure.
Horn, Simon; Barnard, Stephen; Rothkamm, Kai
2011-01-01
Most human exposures to ionising radiation are partial body exposures. However, to date only limited tools are available for rapid and accurate estimation of the dose distribution and the extent of the body spared from the exposure. These parameters are of great importance for emergency triage and clinical management of exposed individuals. Here, measurements of γ-H2AX immunofluorescence by microscopy and flow cytometry were compared as rapid biodosimetric tools for whole and partial body exposures. Ex vivo uniformly X-irradiated blood lymphocytes from one donor were used to generate a universal biexponential calibration function for γ-H2AX foci/intensity yields per unit dose for time points up to 96 hours post exposure. Foci--but not intensity--levels remained significantly above background for 96 hours for doses of 0.5 Gy or more. Foci-based dose estimates for ex vivo X-irradiated blood samples from 13 volunteers were in excellent agreement with the actual dose delivered to the targeted samples. Flow cytometric dose estimates for X-irradiated blood samples from 8 volunteers were in excellent agreement with the actual dose delivered at 1 hour post exposure but less so at 24 hours post exposure. In partial body exposures, simulated by mixing ex vivo irradiated and unirradiated lymphocytes, foci/intensity distributions were significantly over-dispersed compared to uniformly irradiated lymphocytes. For both methods and in all cases the estimated fraction of irradiated lymphocytes and dose to that fraction, calculated using the zero contaminated Poisson test and γ-H2AX calibration function, were in good agreement with the actual mixing ratios and doses delivered to the samples. In conclusion, γ-H2AX analysis of irradiated lymphocytes enables rapid and accurate assessment of whole body doses while dispersion analysis of foci or intensity distributions helps determine partial body doses and the irradiated fraction size in cases of partial body exposures.
Astrochemical Properties of Planck Cold Clumps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tatematsu, Ken’ichi; Sanhueza, Patricio; Nguyễn Lu’o’ng, Quang
We observed 13 Planck cold clumps with the James Clerk Maxwell Telescope/SCUBA-2 and with the Nobeyama 45 m radio telescope. The N2H+ distribution obtained with the Nobeyama telescope is quite similar to the SCUBA-2 dust distribution. The 82 GHz HC3N, 82 GHz CCS, and 94 GHz CCS emission are often distributed differently with respect to the N2H+ emission. The CCS emission, which is known to be abundant in starless molecular cloud cores, is often very clumpy in the observed targets. We made deep single-pointing observations in DNC, HN13C, N2D+, and cyclic-C3H2 toward nine clumps. The detection rate of N2D+ is 50%. Furthermore, we observed the NH3 emission toward 15 Planck cold clumps to estimate the kinetic temperature, and confirmed that most targets are cold (≲20 K). In two of the starless clumps we observed, the CCS emission is distributed as if it surrounds the N2H+ core (chemically evolved gas), which resembles the case of L1544, a prestellar core showing collapse. In addition, we detected both DNC and N2D+. These two clumps are most likely on the verge of star formation. We introduce the chemical evolution factor (CEF) for starless cores to describe the chemical evolutionary stage, and analyze the observed Planck cold clumps.
NASA Astrophysics Data System (ADS)
M, Adimurthy; Katti, Vadiraj V.
2017-02-01
The local distribution of wall static pressure and heat transfer on a smooth flat plate impinged by a normal slot air jet is experimentally investigated. The present study focuses on the influence of jet-to-plate spacing (Z/Dh = 0.5-10) and Reynolds number (2500-20,000) on the fluid flow and heat transfer distribution. A single slot jet with an aspect ratio (l/b) of about 22 is chosen for the current study. An infrared thermal imaging technique is used to capture the temperature data on the target surface. Local heat transfer coefficients are estimated from the thermal images using `SMART VIEW' software. Wall static pressure measurement is carried out for the specified range of Re and Z/Dh. Wall static pressure coefficients are seen to be independent of Re in the range between 5000 and 15,000 for a given Z/Dh. Nu values are higher at the stagnation point for all Z/Dh and Re investigated. For lower Z/Dh and higher Re, secondary peaks are observed in the heat transfer distributions. This may be attributed to the fluid transitioning from laminar to turbulent flow on the target plate. Heat transfer characteristics are explained based on simplified flow assumptions and the pressure data obtained using a differential pressure transducer and a static pressure probe. A semi-empirical correlation for the Nusselt number in the stagnation region is proposed.
Estimating Elevation Angles From SAR Crosstalk
NASA Technical Reports Server (NTRS)
Freeman, Anthony
1994-01-01
Scheme for processing polarimetric synthetic-aperture-radar (SAR) image data yields estimates of elevation angles along radar beam to target resolution cells. By use of estimated elevation angles, measured distances along radar beam to targets (slant ranges), and measured altitude of aircraft carrying SAR equipment, one can estimate height of target terrain in each resolution cell. Monopulselike scheme yields low-resolution topographical data.
Target Information Processing: A Joint Decision and Estimation Approach
2012-03-29
ground targets (track-before-detect) using a computer cluster and graphics processing unit. Estimation and filtering theory is one of the most important...
Detection and imaging of moving objects with SAR by a joint space-time-frequency processing
NASA Astrophysics Data System (ADS)
Barbarossa, Sergio; Farina, Alfonso
This paper proposes a joint space-time-frequency processing scheme for the detection and imaging of moving targets by Synthetic Aperture Radars (SAR). The method is based on the availability of an array antenna. The signals received by the array elements are combined, in a space-time processor, to cancel the clutter. Then they are analyzed in the time-frequency domain, by computing their Wigner-Ville distribution (WVD), in order to estimate the instantaneous frequency, which is used for the subsequent phase compensation necessary to produce a high-resolution image.
Quantum partial search for uneven distribution of multiple target items
NASA Astrophysics Data System (ADS)
Zhang, Kun; Korepin, Vladimir
2018-06-01
Quantum partial search algorithm is an approximate search. It aims to find a target block (which contains the target items). It runs a little faster than a full Grover search. In this paper, we consider the quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks have different numbers of target items). The algorithm we describe can locate one of the target blocks. The efficiency of the algorithm is measured by the number of queries to the oracle. We optimize the algorithm in order to improve efficiency. Using a perturbation method, we find that the algorithm runs the fastest when target items are evenly distributed in the database.
Generic framework for vessel detection and tracking based on distributed marine radar image data
NASA Astrophysics Data System (ADS)
Siegert, Gregor; Hoth, Julian; Banyś, Paweł; Heymann, Frank
2018-04-01
Situation awareness is understood as a key requirement for safe and secure shipping at sea. The primary sensor for maritime situation assessment is still the radar, with the AIS being introduced as a supplemental service only. In this article, we present a framework to assess the current situation picture based on marine radar image processing. Essentially, the framework comprises a centralized IMM-JPDA multi-target tracker in combination with a fully automated scheme for track management, i.e., target acquisition and track depletion. This tracker is conditioned on measurements extracted from radar images. To gain a more robust and complete situation picture, we exploit the aspect angle diversity of multiple marine radars by fusing them prior to the tracking process. Due to the generic structure of the proposed framework, different techniques for radar image processing can be implemented and compared, namely the BLOB detector and SExtractor. The overall framework performance in terms of multi-target state estimation is compared for both methods based on a dedicated measurement campaign in the Baltic Sea with multiple static and mobile targets.
Yang, Chia-Chun; Andrews, Erik H; Chen, Min-Hsuan; Wang, Wan-Yu; Chen, Jeremy J W; Gerstein, Mark; Liu, Chun-Chi; Cheng, Chao
2016-08-12
Chromatin immunoprecipitation followed by massively parallel DNA sequencing (ChIP-seq) or microarray hybridization (ChIP-chip) has been widely used to determine the genomic occupation of transcription factors (TFs). We have previously developed a probabilistic method, called TIP (Target Identification from Profiles), to identify TF target genes using ChIP-seq/ChIP-chip data. To achieve high specificity, TIP applies a conservative method to estimate the significance of target genes, with the trade-off being a relatively low sensitivity of target gene identification compared to other methods. Additionally, TIP's output does not render binding-peak locations or intensity, information highly useful for visualization and general experimental biological use, while the variability of ChIP-seq/ChIP-chip file formats has made input into TIP more difficult than desired. To improve upon these facets, here we present a refined TIP with key extensions. First, it implements a Gaussian mixture model for p-value estimation, increasing target gene identification sensitivity and more accurately capturing the shape of TF binding profile distributions. Second, it enables the incorporation of TF binding-peak data by identifying their locations in significant target gene promoter regions and quantifying their strengths. Finally, for full ease of implementation we have incorporated it into a web server ( http://syslab3.nchu.edu.tw/iTAR/ ) that enables flexibility of input file format, can be used across multiple species and genome assembly versions, and is freely available for public use. The web server additionally performs GO enrichment analysis for the identified target genes to reveal the potential function of the corresponding TF. The iTAR web server provides a user-friendly interface and supports target gene identification in seven species, ranging from yeast to human. To facilitate investigating the quality of ChIP-seq/ChIP-chip data, the web server generates a chart of the characteristic binding profiles and a density plot of normalized regulatory scores. The iTAR web server is a useful tool for identifying TF target genes from ChIP-seq/ChIP-chip data and discovering biological insights.
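A sketch of the Gaussian-mixture p-value idea on synthetic regulatory scores: a two-component mixture is fitted, the lower-mean component is treated as the background, and one-sided p-values are computed against it. This is a schematic illustration, not the iTAR implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)

# Synthetic per-gene regulatory scores: most genes come from a background (null)
# component, a minority from a bound/target component with higher scores.
scores = np.concatenate([rng.normal(0.0, 1.0, 9000), rng.normal(3.0, 1.0, 1000)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
null = np.argmin(gmm.means_.ravel())                      # lower-mean component = background
mu, sd = gmm.means_.ravel()[null], np.sqrt(gmm.covariances_.ravel()[null])

# One-sided p-value of each gene's score under the background component.
pvals = norm.sf(scores, loc=mu, scale=sd)
targets = np.where(pvals < 0.01)[0]
print(f"background mean/sd: {mu:.2f}/{sd:.2f}; genes called as targets: {targets.size}")
```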
Dutoit, Ludovic; Burri, Reto; Nater, Alexander; Mugal, Carina F; Ellegren, Hans
2017-07-01
Properly estimating genetic diversity in populations of nonmodel species requires a basic understanding of how diversity is distributed across the genome and among individuals. To this end, we analysed whole-genome resequencing data from 20 collared flycatchers (genome size ≈1.1 Gb; 10.13 million single nucleotide polymorphisms detected). Genomewide nucleotide diversity was almost identical among individuals (mean = 0.00394, range = 0.00384-0.00401), but diversity levels varied extensively across the genome (95% confidence interval for 200-kb windows = 0.0013-0.0053). Diversity was related to selective constraint such that in comparison with intergenic DNA, diversity at fourfold degenerate sites was reduced to 85%, 3' UTRs to 82%, 5' UTRs to 70% and nondegenerate sites to 12%. There was a strong positive correlation between diversity and chromosome size, probably driven by a higher density of targets for selection on smaller chromosomes increasing the diversity-reducing effect of linked selection. Simulations exploring the ability of sequence data from a small number of genetic markers to capture the observed diversity clearly demonstrated that diversity estimation from finite sampling of such data is bound to be associated with large confidence intervals. Nevertheless, we show that precision in diversity estimation in large outbred population benefits from increasing the number of loci rather than the number of individuals. Simulations mimicking RAD sequencing showed that this approach gives accurate estimates of genomewide diversity. Based on the patterns of observed diversity and the performed simulations, we provide broad recommendations for how genetic diversity should be estimated in natural populations. © 2016 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
Performance of Distributed CFAR Processors in Pearson Distributed Clutter
NASA Astrophysics Data System (ADS)
Messali, Zoubeida; Soltani, Faouzi
2006-12-01
This paper deals with the distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean level detector (CMLD) CFAR processors operating on positive alpha-stable (PαS) random variables to more general situations, specifically to the presence of interfering targets and distributed CFAR detectors. The receiver operating characteristics of the greatest of (GO) and the smallest of (SO) CFAR processors are also determined. The performance characteristics of distributed systems are presented and compared both in homogeneous environments and in the presence of interfering targets. We demonstrate, via simulation results, that when the clutter is modelled by a positive alpha-stable distribution the distributed systems offer robustness against multiple-target situations, especially when using the "OR" fusion rule.
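A small simulation in the spirit of the paper: CA- and OS-CFAR false-alarm behaviour in standard Levy clutter (the positive alpha-stable law with alpha = 1/2, a member of the Pearson system), with and without an interfering target in the reference window, plus the false-alarm probability of a two-sensor "OR" fusion. Window sizes and threshold factors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Standard Levy samples can be drawn as 1/Z^2 with Z standard normal.
n_trials, n_ref, k_os = 20_000, 16, 12

def false_alarm_rate(scale_factor, kind="CA", interferer=False):
    ref = 1.0 / rng.standard_normal((n_trials, n_ref)) ** 2    # reference cells (clutter)
    cut = 1.0 / rng.standard_normal(n_trials) ** 2             # cell under test (clutter only)
    if interferer:
        ref[:, 3] += 500.0        # a strong interfering target captured in one reference cell
    level = ref.mean(axis=1) if kind == "CA" else np.sort(ref, axis=1)[:, k_os]
    return np.mean(cut > scale_factor * level)

# The interferer inflates the CA average (and hence the threshold, masking targets),
# while the order statistic is barely affected -- the OS detector is the robust one.
for kind, sf in [("CA", 30.0), ("OS", 8.0)]:
    print(kind, "Pfa clean:", false_alarm_rate(sf, kind),
          "  with interferer:", false_alarm_rate(sf, kind, interferer=True))

# "OR" fusion of two independent distributed detectors: Pfa_fused = 1 - (1 - Pfa)^2.
p = false_alarm_rate(8.0, "OS")
print("OS, two-sensor OR-fusion Pfa:", 1 - (1 - p) ** 2)
```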
48 CFR 1852.216-84 - Estimated cost and incentive fee.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the following clause: Estimated Cost and Incentive Fee (OCT 1996) The target cost of this contract is $___. The target fee of this contract is $___. The total target cost and target fee as contemplated by the...
Neutronics performance and activation calculation of dense tungsten granular target for China-ADS
NASA Astrophysics Data System (ADS)
Zhang, Yaling; Li, Jianyang; Zhang, Xunchao; Cai, Hanjie; Yan, Xuesong; Yu, Lin; Fu, Fen; Lin, Ping; Gao, Xiaofei; Zhang, Zhilei; Zhang, Yanshi; Yang, Lei
2017-11-01
Spallation target, which constitutes the physical and functional interface between the high power accelerator and the subcritical core, is one of the most important components in Accelerator Driven Subcritical System (ADS). In this paper, we investigated the neutronics performance, the radiation damage and the activation of dense tungsten granular flow spallation target by using the Monte Carlo programs GMT and FLUKA at the proton energy of 250 MeV with a beam current of 10 mA . First, the leaking neutron yield, leaking neutron energy spectrum and laterally leaking neutron distribution at several time nodes and with different target parameters are explored. After that, the displacement per atom (DPA) and the helium/hydrogen production for tungsten grains and structural materials with stainless steel 316L are estimated. Finally, the radioactivity, residual dose rate and afterheat of granular target are presented. Results indicate that granule diameter below 1 cm and the beam profile diameter have negligible impact on neutronics performance, while the target diameter and volume fraction of grain have notable influence. The maximum DPA for target vessel (beam tube) is about 1.0 (1.6) DPA/year in bare target, and increased to 2.6 (2.8) DPA/year in fission environment. Average DPA for tungsten grains is relatively low. The decline rate of radioactivity and afterheat with cooling time grows with the decrease of the irradiation time.
Katz, Itamar; Komatsu, Ryuichi; Low-Beer, Daniel; Atun, Rifat
2011-02-23
The paper projects the contribution to 2011-2015 international targets of three major pandemics by programs in 140 countries funded by the Global Fund to Fight AIDS, Tuberculosis and Malaria, the largest external financier of tuberculosis and malaria programs and a major external funder of HIV programs in low and middle income countries. Estimates, using past trends, for the period 2011-2015 of the number of persons receiving antiretroviral (ARV) treatment, tuberculosis case detection using the internationally approved DOTS strategy, and insecticide-treated nets (ITNs) to be delivered by programs in low and middle income countries supported by the Global Fund compared to international targets established by UNAIDS, Stop TB Partnership, Roll Back Malaria Partnership and the World Health Organisation. Global Fund-supported programs are projected to provide ARV treatment to 5.5-5.8 million people, providing 30%-31% of the 2015 international target. Investments in tuberculosis and malaria control will enable reaching in 2015 60%-63% of the international target for tuberculosis case detection and 30%-35% of the ITN distribution target in sub-Saharan Africa. Global Fund investments will substantially contribute to the achievement by 2015 of international targets for HIV, TB and malaria. However, additional large scale international and domestic financing is needed if these targets are to be reached by 2015.
Palache, Abraham; Oriol-Mathieu, Valerie; Fino, Mireli; Xydia-Charmanta, Margarita
2015-10-13
Seasonal influenza is an important disease which results in 250,000-500,000 annual deaths worldwide. Global targets for vaccination coverage rates (VCRs) in high-risk groups are at least 75% in adults ≥65 years and increased coverage in other risk groups. The International Federation of Pharmaceutical Manufacturers and Associations Influenza Vaccine Supply (IFPMA IVS) International Task Force developed a survey methodology in 2008, to assess the global distribution of influenza vaccine doses as a proxy for VCRs. This paper updates the previous survey results on absolute numbers of influenza vaccine doses distributed between 2004 and 2013 inclusive, and dose distribution rates per 1000 population, and provides a qualitative assessment of the principal enablers and barriers to seasonal influenza vaccination. The two main findings from the quantitative portion of the survey are the continued negative trend for dose distribution in the EURO region and the perpetuation of appreciable differences in scale of dose distribution between WHO regions, with no observed convergence in the rates of doses distributed per 1000 population over time. The main findings from the qualitative portion of the survey were that actively managing the vaccination program in real-time and ensuring political commitment to vaccination are important enablers of vaccination, whereas insufficient access to vaccination and lack of political commitment to seasonal influenza vaccination programs are likely contributing to vaccination target failures. In all regions of the world, seasonal influenza vaccination is underutilized as a public health tool. The survey provides evidence of lost opportunity to protect populations against potentially serious influenza-associated disease. We call on the national and international public health communities to re-evaluate their political commitment to the prevention of the annual influenza disease burden and to develop a systematic approach to improve vaccine distribution equitably. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Electrostatics of DNA-Functionalized Nanoparticles
NASA Astrophysics Data System (ADS)
Hoffmann, Kyle; Krishnamoorthy, Kurinji; Kewalramani, Sumit; Bedzyk, Michael; Olvera de La Cruz, Monica
DNA-functionalized nanoparticles have applications in directed self-assembly and targeted cellular delivery of therapeutic proteins. In order to design specific systems, it is necessary to understand their self-assembly properties, of which the long-range electrostatic interactions are a critical component. We iteratively solved equations derived from classical density functional theory in order to predict the distribution of ions around DNA-functionalized Cg Catalase. We then compared estimates of the resonant intensity to those from SAXS measurements to estimate key features of DNA-functionalized proteins, such as the size of the region linking the protein and DNA and the extension of the single-stranded DNA. Using classical density functional theory and coarse-grained simulations, we are able to predict and understand these fundamental properties in order to rationally design new biomaterials.
A Structural Model of the Retail Market for Illicit Drugs.
Galenianos, Manolis; Gavazza, Alessandro
2017-03-01
We estimate a model of illicit drugs markets using data on purchases of crack cocaine. Buyers are searching for high-quality drugs, but they determine drugs' quality (i.e., their purity) only after consuming them. Hence, sellers can rip off first-time buyers or can offer higher-quality drugs to induce buyers to purchase from them again. In equilibrium, a distribution of qualities persists. The estimated model implies that if drugs were legalized, in which case purity could be regulated and hence observable, the average purity of drugs would increase by approximately 20 percent and the dispersion would decrease by approximately 80 percent. Moreover, increasing penalties may raise the purity and affordability of the drugs traded by increasing sellers’ relative profitability of targeting loyal buyers versus first-time buyers.
Autonomous intelligent assembly systems LDRD 105746 final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2013-04-01
This report documents a three-year effort to develop technology that enables mobile robots to perform autonomous assembly tasks in unstructured outdoor environments. This is a multi-tier problem that requires an integration of a large number of different software technologies including: command and control, estimation and localization, distributed communications, object recognition, pose estimation, real-time scanning, and scene interpretation. Although ultimately unsuccessful in achieving a target brick stacking task autonomously, numerous important component technologies were nevertheless developed. Such technologies include: a patent-pending polygon snake algorithm for robust feature tracking, a color grid algorithm for unique identification and calibration, a command and control framework for abstracting robot commands, a scanning capability that utilizes a compact robot portable scanner, and more. This report describes this project and these developed technologies.
An algorithm for targeting finite burn maneuvers
NASA Technical Reports Server (NTRS)
Barbieri, R. W.; Wyatt, G. H.
1972-01-01
An algorithm was developed to solve the following problem: given the characteristics of the engine to be used to make a finite burn maneuver and given the desired orbit, when must the engine be ignited and what must be the orientation of the thrust vector so as to obtain the desired orbit? The desired orbit is characterized by classical elements and functions of these elements whereas the control parameters are characterized by the time to initiate the maneuver and three direction cosines which locate the thrust vector. The algorithm was built with a Monte Carlo capability whereby samples are taken from the distribution of errors associated with the estimate of the state and from the distribution of errors associated with the engine to be used to make the maneuver.
Kozma, Robert; Wang, Lan; Iftekharuddin, Khan; McCracken, Ernest; Khan, Muhammad; Islam, Khandakar; Bhurtel, Sushil R; Demirer, R Murat
2012-01-01
The feasibility of using Commercial Off-The-Shelf (COTS) sensor nodes is studied in a distributed network, aiming at dynamic surveillance and tracking of ground targets. Data acquisition by low-cost (<$50 US) miniature low-power radar through a wireless mote is described. We demonstrate the detection, ranging and velocity estimation, classification and tracking capabilities of the mini-radar, and compare results to simulations and manual measurements. Furthermore, we supplement the radar output with other sensor modalities, such as acoustic and vibration sensors. This method provides innovative solutions for detecting, identifying, and tracking vehicles and dismounts over a wide area in noisy conditions. This study presents a step towards distributed intelligent decision support and demonstrates effectiveness of small cheap sensors, which can complement advanced technologies in certain real-life scenarios.
Incorporating Conservation Zone Effectiveness for Protecting Biodiversity in Marine Planning
Makino, Azusa; Klein, Carissa J.; Beger, Maria; Jupiter, Stacy D.; Possingham, Hugh P.
2013-01-01
Establishing different types of conservation zones is becoming commonplace. However, spatial prioritization methods that can accommodate multiple zones are poorly understood in theory and application. It is typically assumed that management regulations across zones have differential levels of effectiveness (“zone effectiveness”) for biodiversity protection, but the influence of zone effectiveness on achieving conservation targets has not yet been explored. Here, we consider the zone effectiveness of three zones: permanent closure, partial protection, and open, for planning for the protection of five different marine habitats in the Vatu-i-Ra Seascape, Fiji. We explore the impact of differential zone effectiveness on the location and costs of conservation priorities. We assume that permanent closure zones are fully effective at protecting all habitats, open zones do not contribute towards the conservation targets and partial protection zones lie between these two extremes. We use four different estimates for zone effectiveness and three different estimates for zone cost of the partial protection zone. To enhance the practical utility of the approach, we also explore how much of each traditional fishing ground can remain open for fishing while still achieving conservation targets. Our results show that all of the high priority areas for permanent closure zones would not be a high priority when the zone effectiveness of the partial protection zone is equal to that of permanent closure zones. When differential zone effectiveness and costs are considered, the resulting marine protected area network consequently increases in size, with more area allocated to permanent closure zones to meet conservation targets. By distributing the loss of fishing opportunity equitably among local communities, we find that 84–88% of each traditional fishing ground can be left open while still meeting conservation targets. Finally, we summarize the steps for developing marine zoning that accounts for zone effectiveness. PMID:24223870
Asaad, Sameh W; Bellofatto, Ralph E; Brezzo, Bernard; Haymes, Charles L; Kapur, Mohit; Parker, Benjamin D; Roewer, Thomas; Tierno, Jose A
2014-01-28
A plurality of target field programmable gate arrays are interconnected in accordance with a connection topology and map portions of a target system. A control module is coupled to the plurality of target field programmable gate arrays. A balanced clock distribution network is configured to distribute a reference clock signal, and a balanced reset distribution network is coupled to the control module and configured to distribute a reset signal to the plurality of target field programmable gate arrays. The control module and the balanced reset distribution network are cooperatively configured to initiate and control a simulation of the target system with the plurality of target field programmable gate arrays. A plurality of local clock control state machines reside in the target field programmable gate arrays. The local clock state machines are configured to generate a set of synchronized free-running and stoppable clocks to maintain cycle-accurate and cycle-reproducible execution of the simulation of the target system. A method is also provided.
2018-01-01
Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accompanied by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and the compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response beamformers (MVDR), this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection for weak targets. PMID:29562642
Regnault, Antoine; Hamel, Jean-François; Patrick, Donald L
2015-02-01
Cultural differences and/or poor linguistic validation of patient-reported outcome (PRO) instruments may result in differences in the assessment of the targeted concept across languages. In the context of multinational clinical trials, these measurement differences may add noise and potentially measurement bias to treatment effect estimation. Our objective was to explore the potential effect on treatment effect estimation of the "contamination" of a cultural subgroup by a flawed PRO measurement. We ran a simulation exercise in which the distribution of the score in the overall sample was considered a mixture of two normal distributions: a standard normal distribution was assumed in a "main" subgroup and a normal distribution which differed either in mean (bias) or in variance (noise) in a "contaminated" subgroup (the subgroup with potential flaws in the PRO measurement). The observed power was compared to the expected power (i.e., the power that would have been observed if the subgroup had not been contaminated). Even if differences between the expected and observed power were small, some substantial differences were obtained (up to a 0.375 point drop in power). No situation was systematically protected against loss of power. The impact of poor PRO measurement in a cultural subgroup may induce a notable drop in the study power and consequently reduce the chance of showing an actual treatment effect. These results illustrate the importance of the efforts to optimize conceptual and linguistic equivalence of PRO measures when pooling data in international clinical trials.
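The power-loss mechanism described above can be illustrated with a small Monte Carlo sketch. This is not the authors' simulation code; it assumes a simple two-arm trial analyzed with a t-test, and all parameter values and function names (e.g. simulated_power) are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(n_per_arm=100, effect=0.4, contam_frac=0.3,
                    contam_bias=0.5, contam_sd=1.0, n_sim=2000, alpha=0.05):
    """Power of a two-sample t-test when a fraction of each arm belongs to a
    'contaminated' subgroup whose PRO scores are shifted (bias) or noisier (sd)."""
    n_contam = int(round(contam_frac * n_per_arm))
    n_main = n_per_arm - n_contam
    rejections = 0
    for _ in range(n_sim):
        # control arm: standard normal in the main subgroup, contaminated otherwise
        ctrl = np.concatenate([rng.normal(0.0, 1.0, n_main),
                               rng.normal(contam_bias, contam_sd, n_contam)])
        # treatment arm: same mixture shifted by the true treatment effect
        trt = np.concatenate([rng.normal(effect, 1.0, n_main),
                              rng.normal(effect + contam_bias, contam_sd, n_contam)])
        _, p = stats.ttest_ind(trt, ctrl)
        rejections += (p < alpha)
    return rejections / n_sim

expected = simulated_power(contam_frac=0.0)                   # no contamination
observed = simulated_power(contam_bias=0.0, contam_sd=2.0)    # extra noise only
print(f"expected power {expected:.2f}, observed power {observed:.2f}")
```

Comparing the two printed values reproduces the paper's "expected versus observed power" contrast for one hypothetical contamination scenario.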
Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-07-01
What is the "best" model? The answer to this question lies in part in the eyes of the beholder, nevertheless a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, Mk, k = 1, …, K, and help identify which model is most supported by the observed data, Ỹ = (ỹ1, …, ỹn). Here, we introduce a new and robust estimator of the model evidence, p(Ỹ|Mk), which acts as normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p(Ỹ|Mk) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p(Ỹ|Mk) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and simplifies considerably scientific inquiry through hypothesis testing and model selection.
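The core idea, fitting a Gaussian mixture to posterior samples and using it to estimate the evidence, can be sketched with plain importance sampling rather than the full bridge-sampling scheme used by GAME. The sketch below is an illustration under that simplification, checked on a toy conjugate model; the function names and settings are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def log_evidence_is(post_samples, log_prior, log_lik, n_components=3,
                    n_draws=20000, seed=1):
    """Importance-sampling estimate of log p(Y|M): fit a Gaussian mixture to
    posterior samples and use it as the importance distribution."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(post_samples)
    theta, _ = gmm.sample(n_draws)
    log_q = gmm.score_samples(theta)              # log proposal density
    log_w = log_prior(theta) + log_lik(theta) - log_q
    m = log_w.max()                               # log-mean-exp of the weights
    return m + np.log(np.mean(np.exp(log_w - m)))

# toy check on a conjugate normal-mean model (unit noise, N(0, 10^2) prior)
rng = np.random.default_rng(2)
y = rng.normal(0.5, 1.0, 50)
post_var = 1.0 / (1.0 / 10.0 ** 2 + y.size)
post_mean = post_var * y.sum()
samples = rng.normal(post_mean, np.sqrt(post_var), (5000, 1))
log_prior = lambda t: norm.logpdf(t[:, 0], 0.0, 10.0)
log_lik = lambda t: np.array([norm.logpdf(y, mu, 1.0).sum() for mu in t[:, 0]])
print("log evidence ~", log_evidence_is(samples, log_prior, log_lik))
```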
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Brian W.; Frost, Sophia; Frayo, Shani
Alpha emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50–80 μm) causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with alpha emitters may inactivate targeted cells with minimal radiation damage to surrounding tissues. For accurate dosimetry in alpha-RIT, tools are needed to visualize and quantify the radioactivity distribution and absorbed dose to targeted and non-targeted cells, especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, iQID (ionizing-radiation Quantum Imaging Detector), for use in alpha-RIT experiments. Methods: The iQID camera is a scintillator-based radiation detection technology that images and identifies charged-particle and gamma-ray/X-ray emissions spatially and temporally on an event-by-event basis. It employs recent advances in CCD/CMOS cameras and computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, we evaluated this system's characteristics for alpha particle imaging including measurements of spatial resolution and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 (211At) activity distributions in cryosections of murine and canine tissue samples. Results: The highest spatial resolution was measured at ~20 μm full width at half maximum (FWHM) and the alpha particle background was measured at a rate of (2.6 ± 0.5) × 10⁻⁴ cpm/cm² (40 mm diameter detector area). Simultaneous imaging of multiple tissue sections was performed using a large-area iQID configuration (ø 11.5 cm). Estimation of the 211At activity distribution was demonstrated at mBq/μg levels. Conclusion: Single-particle digital autoradiography of alpha emitters has advantages over traditional autoradiographic techniques in terms of spatial resolution, sensitivity, and activity quantification capability. The system features and characterization results presented in this study show that iQID is a promising technology for microdosimetry, because it provides necessary information for interpreting alpha-RIT outcomes and for predicting the therapeutic efficacy of cell-targeted approaches using alpha emitters.
Load flow and state estimation algorithms for three-phase unbalanced power distribution systems
NASA Astrophysics Data System (ADS)
Madvesh, Chiranjeevi
Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool which helps to analyze the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed to extensively incorporate the distribution system components. Distribution system state estimation is a mathematical procedure which aims to estimate the operating states of a power distribution system by utilizing the information collected from available measurement devices in real-time. An efficient and computationally effective state estimation algorithm adapting the weighted-least-squares (WLS) method has been developed in this research. Both the developed algorithms are tested on different IEEE test-feeders and the results obtained are justified.
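The WLS state-estimation step the abstract refers to is the standard Gauss-Newton iteration on weighted measurement residuals. The sketch below shows that generic iteration only, not the three-phase unbalanced formulation developed in the thesis; the measurement model functions and the toy linear example are hypothetical placeholders.

```python
import numpy as np

def wls_state_estimation(z, h, H, R, x0, tol=1e-6, max_iter=20):
    """Weighted-least-squares state estimation via Gauss-Newton iterations.

    z  : measurement vector (e.g. feeder flows, injections, voltage magnitudes)
    h  : function h(x) returning the predicted measurements for state x
    H  : function H(x) returning the measurement Jacobian at x
    R  : measurement error covariance matrix (diagonal in practice)
    x0 : initial state guess (e.g. a flat start of node voltages/angles)
    """
    x = np.asarray(x0, dtype=float).copy()
    W = np.linalg.inv(R)
    for _ in range(max_iter):
        r = z - h(x)                      # measurement residuals
        Hx = H(x)
        G = Hx.T @ W @ Hx                 # gain matrix
        dx = np.linalg.solve(G, Hx.T @ W @ r)
        x += dx
        if np.abs(dx).max() < tol:
            break
    return x

# toy linear example: two states observed through three redundant measurements
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.02, 0.48, 1.53])
x_hat = wls_state_estimation(z, h=lambda x: A @ x, H=lambda x: A,
                             R=np.diag([1e-4, 1e-4, 4e-4]), x0=np.zeros(2))
print(x_hat)
```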
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
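The generative model can be sketched directly: a zero-mean Gaussian signal whose variance is itself drawn from an inverse gamma distribution. The snippet below uses a simple method-of-moments fit to windowed variance estimates as a stand-in for the paper's marginal-likelihood maximization; all parameter values and window sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# generate a synthetic EMG-like signal: Gaussian noise whose per-window variance
# is drawn from an inverse gamma distribution
alpha_true, beta_true = 4.0, 3.0
n_windows, win = 2000, 50
variances = beta_true / rng.gamma(alpha_true, 1.0, n_windows)   # InvGamma(a, b) = b / Gamma(a, 1)
emg = rng.normal(0.0, np.sqrt(np.repeat(variances, win)))

# estimate the per-window variance from the squared (rectified) signal, then fit
# the inverse gamma by the method of moments; the window-sampling noise makes
# this only an approximate recovery of the true parameters
var_hat = (emg ** 2).reshape(n_windows, win).mean(axis=1)
m, s2 = var_hat.mean(), var_hat.var()
alpha_hat = m ** 2 / s2 + 2.0           # from mean = b/(a-1), var = mean^2/(a-2)
beta_hat = m * (alpha_hat - 1.0)
print(f"alpha ~ {alpha_hat:.2f} (true {alpha_true}), beta ~ {beta_hat:.2f} (true {beta_true})")
```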
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kok, H. Petra, E-mail: H.P.Kok@amc.uva.nl; Crezee, Johannes; Franken, Nicolaas A.P.
2014-03-01
Purpose: To develop a method to quantify the therapeutic effect of radiosensitization by hyperthermia; to this end, a numerical method was proposed to convert radiation therapy dose distributions with hyperthermia to equivalent dose distributions without hyperthermia. Methods and Materials: Clinical intensity modulated radiation therapy plans were created for 15 prostate cancer cases. To simulate a clinically relevant heterogeneous temperature distribution, hyperthermia treatment planning was performed for heating with the AMC-8 system. The temperature-dependent parameters α (Gy⁻¹) and β (Gy⁻²) of the linear–quadratic model for prostate cancer were estimated from the literature. No thermal enhancement was assumed for normal tissue. The intensity modulated radiation therapy plans and temperature distributions were exported to our in-house-developed radiation therapy treatment planning system, APlan, and equivalent dose distributions without hyperthermia were calculated voxel by voxel using the linear–quadratic model. Results: The planned average tumor temperatures T90, T50, and T10 in the planning target volume were 40.5°C, 41.6°C, and 42.4°C, respectively. The planned minimum, mean, and maximum radiation therapy doses were 62.9 Gy, 76.0 Gy, and 81.0 Gy, respectively. Adding hyperthermia yielded an equivalent dose distribution with an extended 95% isodose level. The equivalent minimum, mean, and maximum doses reflecting the radiosensitization by hyperthermia were 70.3 Gy, 86.3 Gy, and 93.6 Gy, respectively, for a linear increase of α with temperature. This can be considered similar to a dose escalation with a substantial increase in tumor control probability for high-risk prostate carcinoma. Conclusion: A model to quantify the effect of combined radiation therapy and hyperthermia in terms of equivalent dose distributions was presented. This model is particularly instructive to estimate the potential effects of interaction from different treatment modalities.
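The voxel-wise conversion amounts to equating linear-quadratic cell kill with and without hyperthermia and solving a quadratic for the equivalent dose. The sketch below shows that single-fraction form only, ignoring fractionation, and every coefficient value is illustrative rather than the study's estimate.

```python
import numpy as np

def equivalent_dose(dose, temp, alpha37=0.15, beta=0.05,
                    alpha_slope=0.02, t_ref=37.0):
    """Convert a physical dose (Gy) delivered at elevated temperature into the
    dose giving the same linear-quadratic cell kill at 37 deg C.

    Assumes alpha increases linearly with temperature and beta is unchanged;
    solves alpha37*Deq + beta*Deq**2 = alpha(T)*D + beta*D**2 for Deq.
    """
    alpha_t = alpha37 + alpha_slope * (np.asarray(temp) - t_ref)
    effect = alpha_t * dose + beta * dose ** 2
    return (-alpha37 + np.sqrt(alpha37 ** 2 + 4.0 * beta * effect)) / (2.0 * beta)

# one voxel receiving 2 Gy at 41.6 deg C
print(equivalent_dose(2.0, 41.6))
```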
Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes.
Yeh, Chun-mao; Zhou, Wei; Lu, Yao-bing; Yang, Jian
2016-01-20
This study focuses on the rotating target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D) imaging model with narrowband echoes is established in this paper, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs). Then, the rotating velocity (RV) is proposed to be estimated by utilizing the relationship between the positions of the scattering centers among two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed using numerical simulations.
NASA Technical Reports Server (NTRS)
Storrie-Lombardi, Michael C.; Hoover, Richard B.; Abbas, Mian; Jerman, Gregory; Coston, James; Fisk, Martin
2006-01-01
We have previously outlined a strategy for the detection of fossils [Storrie-Lombardi and Hoover, 2004] and extant microbial life [Storrie-Lombardi and Hoover, 2005] during robotic missions to Mars using co-registered structural and chemical signatures. Data inputs included image lossless compression indices to estimate relative textural complexity and elemental abundance distributions. Two exploratory classification algorithms (principal component analysis and hierarchical cluster analysis) provide an initial tentative classification of all targets. Nonlinear stochastic neural networks are then trained to produce a Bayesian estimate of algorithm classification accuracy. The strategy has previously been successful in distinguishing regions of biotic and abiotic alteration of basalt glass from unaltered samples [Storrie-Lombardi and Fisk, 2004; Storrie-Lombardi and Fisk, 2004]. Such investigations of abiotic versus biotic alteration of terrestrial mineralogy on Earth are compromised by the difficulty of finding mineralogy completely unaffected by the ubiquitous presence of microbial life on the planet. The renewed interest in lunar exploration offers an opportunity to investigate geological materials that may exhibit signs of aqueous alteration, but are highly unlikely to contain contaminating biological weathering signatures. We here present an extension of our earlier data set to include lunar dust samples obtained during the Apollo 17 mission. Apollo 17 landed in the Taurus-Littrow Valley in Mare Serenitatis. Most of the rock samples from this region of the lunar highlands are basalts comprised primarily of plagioclase and pyroxene and selected examples of orange and black volcanic glass. SEM images and elemental abundances (C6, N7, O8, Na11, Mg12, Al13, Si14, P15, S16, Cl17, K19, Ca20, Fe26) for a series of targets in the lunar dust samples are compared to the extant cyanobacteria, fossil trilobites, Orgueil meteorite, and terrestrial basalt targets previously discussed. The data set provides a first step in producing a quantitative probabilistic methodology for geobiological analysis of returned lunar samples or in situ exploration.
MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples.
Sivaganesan, Mano; Siefring, Shawn; Varma, Manju; Haugland, Richard A
2011-12-01
DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from different studies by this approach, either a consistent source of calibrator cells must be used or the estimates must account for any differences in target sequence recoveries from different sources of calibrator cells. In this report we describe two methods for estimating target sequence recoveries from whole cell calibrator samples based on qPCR analyses of their serially diluted DNA extracts and most probable number (MPN) calculation. The first method employed a traditional MPN calculation approach. The second method employed a Bayesian hierarchical statistical modeling approach and a Monte Carlo Markov Chain (MCMC) simulation method to account for the uncertainty in these estimates associated with different individual samples of the cell preparations, different dilutions of the DNA extracts and different qPCR analytical runs. The two methods were applied to estimate mean target sequence recoveries per cell from two different lots of a commercially available source of enumerated Enterococcus cell preparations. The mean target sequence recovery estimates (and standard errors) per cell from Lot A and B cell preparations by the Bayesian method were 22.73 (3.4) and 11.76 (2.4), respectively, when the data were adjusted for potential false positive results. Means were similar for the traditional MPN approach which cannot comparably assess uncertainty in the estimates. Cell numbers and estimates of recoverable target sequences in calibrator samples prepared from the two cell sources were also used to estimate cell equivalent and target sequence quantities recovered from surface water samples in a comparative Ct method. Our results illustrate the utility of the Bayesian method in accounting for uncertainty, the high degree of precision attainable by the MPN approach and the need to account for the differences in target sequence recoveries from different calibrator sample cell sources when they are used in the comparative Ct method. Published by Elsevier B.V.
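The traditional MPN calculation the authors compare against is a one-parameter maximum-likelihood problem over presence/absence results at several dilutions. The sketch below shows that generic MPN likelihood only, not the paper's Bayesian hierarchical model; the dilution series and counts in the example are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mpn_estimate(volumes, n_tubes, n_positive):
    """Maximum-likelihood MPN of target copies per unit volume from a serial
    dilution series (volumes = amount of original sample per reaction)."""
    volumes = np.asarray(volumes, dtype=float)
    n_tubes = np.asarray(n_tubes, dtype=float)
    n_positive = np.asarray(n_positive, dtype=float)

    def neg_log_lik(log_lam):
        lam = np.exp(log_lam)
        p_pos = 1.0 - np.exp(-lam * volumes)      # P(at least one copy present)
        p_pos = np.clip(p_pos, 1e-12, 1.0)
        # binomial coefficient omitted: it does not depend on lambda
        return -(n_positive * np.log(p_pos)
                 - lam * volumes * (n_tubes - n_positive)).sum()

    res = minimize_scalar(neg_log_lik, bounds=(-10, 15), method="bounded")
    return np.exp(res.x)

# hypothetical 10-fold dilution series, 5 replicate reactions per level
print(mpn_estimate(volumes=[1.0, 0.1, 0.01], n_tubes=[5, 5, 5], n_positive=[5, 3, 0]))
```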
Experimental verification of an interpolation algorithm for improved estimates of animal position
NASA Astrophysics Data System (ADS)
Schell, Chad; Jaffe, Jules S.
2004-07-01
This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied ``ex post facto'' to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.
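The estimator's core step, minimizing a least-squares residual between the measured beam amplitudes and a modeled response over candidate target positions, can be sketched in one dimension. The Gaussian beam-pattern model and all numbers below are assumptions for illustration, not the sonar model used in the paper.

```python
import numpy as np

def estimate_position(meas, beam_angles, beamwidth, candidates):
    """Grid-search least-squares estimate of target angle and strength.

    meas        : measured amplitude in each sonar beam (one ping)
    beam_angles : pointing angle of each beam (deg)
    beamwidth   : -3 dB beamwidth of an assumed Gaussian beam model (deg)
    candidates  : candidate target angles to test (deg)
    """
    sigma = beamwidth / 2.355                     # FWHM -> Gaussian sigma
    best = (np.inf, None, None)
    for theta in candidates:
        b = np.exp(-0.5 * ((beam_angles - theta) / sigma) ** 2)  # modeled response
        s = (b @ meas) / (b @ b)                  # closed-form LS target strength
        resid = np.sum((meas - s * b) ** 2)
        if resid < best[0]:
            best = (resid, theta, s)
    return best[1], best[2]

# toy example: target between beams at 1.3 deg, unit strength, light noise
rng = np.random.default_rng(4)
beams = np.arange(-10, 11, 2.0)
truth = np.exp(-0.5 * ((beams - 1.3) / (3.0 / 2.355)) ** 2)
angle, strength = estimate_position(truth + 0.02 * rng.normal(size=beams.size),
                                    beams, 3.0, np.arange(-10, 10, 0.05))
print(angle, strength)
```

The improvement over "accept the angle of the loudest beam" is visible here: the grid-search estimate lands near 1.3 degrees rather than at the nearest 2-degree beam center.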
An Improved Aerial Target Localization Method with a Single Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2017-01-01
This paper examines the problems encountered when existing aerial target localization methods are applied to actual data, analyzes their causes, and proposes an improved algorithm. Processing of sea-experiment data shows that the existing algorithms place high demands on the accuracy of the angle estimates. The improved algorithm relaxes these accuracy requirements and yields robust estimates. A closest-distance matching estimation algorithm and a horizontal-distance estimation compensation algorithm are proposed. Post-processing the data with a forward-and-backward double-filtering method improves smoothing and allows the initial-stage data to be filtered, so the filtering results retain more useful information. Aerial target height measurement methods are also studied and estimation results for the aerial target are given, realizing three-dimensional localization of the aerial target and improving the underwater platform's awareness of it, so that the platform gains better mobility and concealment. PMID:29135956
Target-strength Measurements of Sandfish Arctoscopus japonicus
NASA Astrophysics Data System (ADS)
Yoon, Eun-A.; Lee, Kyounghoon; Oh, Wooseok; Choi, Junghwa; Hwang, Kangseok; Kang, Myounghee
2018-03-01
The goal of this study was to estimate the target strength (TS) of the sandfish Arctoscopus japonicus using in-situ and ex-situ methods with an echosounder. For the in-situ TS measurement, the survey was conducted by taking hydroacoustic measurements at 38 and 120 kHz and using a coastal gill net, in Goseong, in the northeastern sea of Korea in early December 2009. Ex-situ measurement of TS used live specimens and the tethering method, and was conducted at 120 kHz. The distribution of fork length (FL) was bimodal: 14.6-19.8 cm (n = 241 individuals, mean = 17.0 cm) for males and 16.3-24.5 cm (n = 105 individuals, mean = 19.6 cm) for females. The in-situ TS ranged from -79.8 to -59.1 dB (mean = -74.3 dB for males and -64.1 dB for females) at 38 kHz and -79.9 to -56.2 dB (mean = -74.3 dB for males and -64.1 dB for females) at 120 kHz. The mean TS of females was approximately 10 dB higher than that of males at each dominant frequency. The female ex-situ TS values ranged from -68.5 to -54.6 dB, and those of males ranged from -67.7 to -59.3 dB. The mean TS value for females was 2.9 dB higher than that of males. These results may be used in echo-integration surveys of sandfish to estimate their abundance and seasonal distribution.
Jacob, Benjamin G; Griffith, Daniel A; Muturi, Ephantus J; Caamano, Erick X; Githure, John I; Novak, Robert J
2009-01-01
Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecological sampled Anopheles aquatic habitat covariates. A diagnostic test for checking error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, distributions, and to generate global autocorrelation statistics from the ecological sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction values from the spatial configuration matrices were then used to define expectations for prior distributions using a Markov chain Monte Carlo (MCMC) algorithm. A set of posterior means was defined in WinBUGS 1.4.3®. After the model had converged, samples from the conditional distributions were used to summarize the posterior distribution of the parameters. Thereafter, a spatial residual trend analysis was used to evaluate variance uncertainty propagation in the model using an autocovariance error matrix. Results By specifying coefficient estimates in a Bayesian framework, the covariate number of tillers was found to be a significant predictor, positively associated with An. arabiensis aquatic habitats. The spatial filter models accounted for approximately 19% redundant locational information in the ecological sampled An. arabiensis aquatic habitat data. In the residual error estimation model there was significant positive autocorrelation (i.e., clustering of habitats in geographic space) based on log-transformed larval/pupal data and the sampled covariate depth of habitat. Conclusion An autocorrelation error covariance matrix and a spatial filter analysis can prioritize mosquito control strategies by providing a computationally attractive and feasible description of variance uncertainty estimates for correctly identifying clusters of prolific An. arabiensis aquatic habitats based on larval/pupal productivity. PMID:19772590
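The global autocorrelation statistic at the center of this analysis, Moran's I, is straightforward to compute from sampled values and a spatial weight matrix. The sketch below is a generic implementation, not the SAS/WinBUGS workflow of the paper, and the coordinates and counts are invented for illustration.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I spatial autocorrelation index.

    values  : attribute sampled at n locations (e.g. log larval/pupal counts)
    weights : n x n spatial weight matrix with zero diagonal (e.g. inverse distance)
    """
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()
    return (x.size / w.sum()) * (w * np.outer(z, z)).sum() / (z @ z)

# hypothetical habitats: two spatial clusters with similar counts inside each cluster
coords = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5]], dtype=float)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
w = np.zeros_like(d)
w[d > 0] = 1.0 / d[d > 0]
print(morans_i([2.1, 1.9, 2.3, 0.4, 0.5], w))   # clearly positive -> clustering
```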
The Performance Analysis Based on SAR Sample Covariance Matrix
Erten, Esra
2012-01-01
Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, the statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix in terms of multi-channel SAR images is simplified for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well. PMID:22736976
Mondlane, Gracinda; Ureba, Ana; Gubanski, Michael; Lind, Pehr A; Siegbahn, Albert
2018-05-01
Gastric cancer (GC) radiotherapy involves irradiation of large tumour volumes located in the proximities of critical structures. The advantageous dose distributions produced by scanned-proton beams could reduce the irradiated volumes of the organs at risk (OARs). However, treatment-induced side-effects may still appear. The aim of this study was to estimate the normal tissue complication probability (NTCP) following proton therapy of GC, compared to photon radiotherapy. Eight GC patients, previously treated with volumetric-modulated arc therapy (VMAT), were retrospectively planned with scanned proton beams carried out with the single-field uniform-dose (SFUD) method. A beam-specific planning target volume was used for spot positioning and a clinical target volume (CTV) based robust optimisation was performed considering setup- and range-uncertainties. The dosimetric and NTCP values obtained with the VMAT and SFUD plans were compared. With SFUD, lower or similar dose-volume values were obtained for OARs, compared to VMAT. NTCP values of 0% were determined with the VMAT and SFUD plans for all OARs (p>0.05), except for the left kidney (p<0.05), for which lower toxicity was estimated with SFUD. The NTCP reduction, determined for the left kidney with SFUD, can be of clinical relevance for preserving renal function after radiotherapy of GC. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
Fontanesi, Luca; Bertolini, Francesca; Scotti, Emilio; Schiavo, Giuseppina; Colombo, Michela; Trevisi, Paolo; Ribani, Anisa; Buttazzoni, Luca; Russo, Vincenzo; Dall'Olio, Stefania
2015-01-01
The GPR120 gene (also known as FFAR4 or O3FAR1) encodes for a functional omega-3 fatty acid receptor/sensor that mediates potent insulin sensitizing effects by repressing macrophage-induced tissue inflammation. For its functional role, GPR120 could be considered a potential target gene in animal nutrigenetics. In this work we resequenced the porcine GPR120 gene by high throughput Ion Torrent semiconductor sequencing of amplified fragments obtained from 8 DNA pools derived, on the whole, from 153 pigs of different breeds/populations (two Italian Large White pools, Italian Duroc, Italian Landrace, Casertana, Pietrain, Meishan, and wild boars). Three single nucleotide polymorphisms (SNPs), two synonymous substitutions and one in the putative 3'-untranslated region (g.114765469C > T), were identified and their allele frequencies were estimated by sequencing reads count. The g.114765469C > T SNP was also genotyped by PCR-RFLP confirming estimated frequency in Italian Large White pools. Then, this SNP was analyzed in two Italian Large White cohorts using a selective genotyping approach based on extreme and divergent pigs for back fat thickness (BFT) estimated breeding value (EBV) and average daily gain (ADG) EBV. Significant differences of allele and genotype frequencies distribution was observed between the extreme ADG-EBV groups (P < 0.001) whereas this marker was not associated with BFT-EBV.
Planning spatial sampling of the soil from an uncertain reconnaissance variogram
NASA Astrophysics Data System (ADS)
Lark, R. Murray; Hamilton, Elliott M.; Kaninga, Belinda; Maseka, Kakoma K.; Mutondo, Moola; Sakala, Godfrey M.; Watts, Michael J.
2017-12-01
An estimated variogram of a soil property can be used to support a rational choice of sampling intensity for geostatistical mapping. However, it is known that estimated variograms are subject to uncertainty. In this paper we address two practical questions. First, how can we make a robust decision on sampling intensity, given the uncertainty in the variogram? Second, what are the costs incurred in terms of oversampling because of uncertainty in the variogram model used to plan sampling? To achieve this we show how samples of the posterior distribution of variogram parameters, from a computational Bayesian analysis, can be used to characterize the effects of variogram parameter uncertainty on sampling decisions. We show how one can select a sample intensity so that a target value of the kriging variance is not exceeded with some specified probability. This will lead to oversampling, relative to the sampling intensity that would be specified if there were no uncertainty in the variogram parameters. One can estimate the magnitude of this oversampling by treating the tolerable grid spacing for the final sample as a random variable, given the target kriging variance and the posterior sample values. We illustrate these concepts with some data on total uranium content in a relatively sparse sample of soil from agricultural land near mine tailings in the Copperbelt Province of Zambia.
Ha, Hojin; Hwang, Dongha; Kim, Guk Bae; Kweon, Jihoon; Lee, Sang Joon; Baek, Jehyun; Kim, Young-Hak; Kim, Namkug; Yang, Dong Hyun
2016-07-01
Quantifying turbulence velocity fluctuation is important because it indicates the fluid energy dissipation of the blood flow, which is closely related to the pressure drop along the blood vessel. This study aims to evaluate the effects of scan parameters and the target vessel size of 4D phase-contrast (PC)-MRI on quantification of turbulent kinetic energy (TKE). Comprehensive 4D PC-MRI measurements with various velocity-encoding (VENC), echo time (TE), and voxel size values were carried out to estimate TKE distribution in stenotic flow. The total TKE (TKEsum), maximum TKE (TKEmax), and background noise level (TKEnoise) were compared for each scan parameter. The feasibility of TKE estimation in small vessels was also investigated. Results show that the optimum VENC for stenotic flow with a peak velocity of 125 cm/s was 70 cm/s. Higher VENC values overestimated the TKEsum by up to six-fold due to increased TKEnoise, whereas lower VENC values (30 cm/s) underestimated it by 57.1%. TE and voxel size did not significantly influence the TKEsum and TKEnoise, although the TKEmax significantly increased as the voxel size increased. TKE quantification in small-sized vessels (3–5 mm diameter) was feasible unless high-velocity turbulence caused severe phase dispersion in the reference image. Copyright © 2016 Elsevier Inc. All rights reserved.
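A common way to obtain voxel-wise TKE from PC-MRI, and likely what underlies the measurements above, is the magnitude-ratio model in which intravoxel velocity spread attenuates the encoded signal. The sketch below assumes that standard model; array shapes, the blood density value, and the tiny example are illustrative assumptions, not the study's processing pipeline.

```python
import numpy as np

def tke_map(mag_ref, mag_enc, venc, rho=1060.0):
    """Voxel-wise turbulent kinetic energy (J/m^3) from PC-MRI magnitude data.

    mag_ref : magnitude image without velocity encoding, shape (nx, ny, nz)
    mag_enc : magnitude images encoded along 3 directions, shape (3, nx, ny, nz)
    venc    : velocity encoding value (m/s); rho is blood density (kg/m^3)

    Uses the magnitude-ratio model |S(kv)|/|S(0)| = exp(-kv^2 sigma^2 / 2)
    with kv = pi / venc, then TKE = 0.5 * rho * sum_i sigma_i^2.
    """
    kv = np.pi / venc
    ratio = np.clip(mag_enc / np.maximum(mag_ref, 1e-12), 1e-6, 1.0)
    sigma2 = 2.0 * np.log(1.0 / ratio) / kv ** 2      # intravoxel velocity variance
    return 0.5 * rho * sigma2.sum(axis=0)             # sum over the 3 directions

# tiny synthetic check: one voxel, intravoxel std of 0.3 m/s in each direction
sigma, venc = 0.3, 0.7
s0 = np.ones((1, 1, 1))
s_enc = np.exp(-(np.pi / venc) ** 2 * sigma ** 2 / 2) * np.ones((3, 1, 1, 1))
print(tke_map(s0, s_enc, venc))   # ~0.5 * 1060 * 3 * 0.09 = ~143 J/m^3
```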
NASA Astrophysics Data System (ADS)
Ueta, T.; Ladjal, D.; Exter, K. M.; Otsuka, M.; Szczerba, R.; Siódmiak, N.; Aleman, I.; van Hoof, P. A. M.; Kastner, J. H.; Montez, R.; McDonald, I.; Wittkowski, M.; Sandin, C.; Ramstedt, S.; De Marco, O.; Villaver, E.; Chu, Y.-H.; Vlemmings, W.; Izumiura, H.; Sahai, R.; Lopez, J. A.; Balick, B.; Zijlstra, A.; Tielens, A. G. G. M.; Rattray, R. E.; Behar, E.; Blackman, E. G.; Hebden, K.; Hora, J. L.; Murakawa, K.; Nordhaus, J.; Nordon, R.; Yamamura, I.
2014-05-01
Context. This is the first of a series of investigations into far-IR characteristics of 11 planetary nebulae (PNe) under the Herschel Space Observatory open time 1 program, Herschel Planetary Nebula Survey (HerPlaNS). Aims: Using the HerPlaNS data set, we look into the PN energetics and variations of the physical conditions within the target nebulae. In the present work, we provide an overview of the survey, data acquisition and processing, and resulting data products. Methods: We performed (1) PACS/SPIRE broadband imaging to determine the spatial distribution of the cold dust component in the target PNe and (2) PACS/SPIRE spectral-energy-distribution and line spectroscopy to determine the spatial distribution of the gas component in the target PNe. Results: For the case of NGC 6781, the broadband maps confirm the nearly pole-on barrel structure of the amorphous carbon-rich dust shell and the surrounding halo having temperatures of 26-40 K. The PACS/SPIRE multiposition spectra show spatial variations of far-IR lines that reflect the physical stratification of the nebula. We demonstrate that spatially resolved far-IR line diagnostics yield the (Te, ne) profiles, from which distributions of ionized, atomic, and molecular gases can be determined. Direct comparison of the dust and gas column mass maps constrained by the HerPlaNS data allows us to construct an empirical gas-to-dust mass ratio map, which shows a range of ratios with a median of 195 ± 110. The present analysis yields estimates of the total mass of the shell to be 0.86 M⊙, consisting of 0.54 M⊙ of ionized gas, 0.12 M⊙ of atomic gas, 0.2 M⊙ of molecular gas, and 4 × 10⁻³ M⊙ of dust grains. These estimates also suggest that the central star of about 1.5 M⊙ initial mass is terminating its PN evolution onto the white dwarf cooling track. Conclusions: The HerPlaNS data provide various diagnostics for both the dust and gas components in a spatially resolved manner. In the forthcoming papers of the HerPlaNS series we will explore the HerPlaNS data set fully for the entire sample of 11 PNe. Herschel is an ESA Space Observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. Table 2 and appendices are available in electronic form at http://www.aanda.org
Spatial frequency performance limitations of radiation dose optimization and beam positioning
NASA Astrophysics Data System (ADS)
Stewart, James M. P.; Stapleton, Shawn; Chaudary, Naz; Lindsay, Patricia E.; Jaffray, David A.
2018-06-01
The flexibility and sophistication of modern radiotherapy treatment planning and delivery methods have advanced techniques to improve the therapeutic ratio. Contemporary dose optimization and calculation algorithms facilitate radiotherapy plans which closely conform the three-dimensional dose distribution to the target, with beam shaping devices and image guided field targeting ensuring the fidelity and accuracy of treatment delivery. Ultimately, dose distribution conformity is limited by the maximum deliverable dose gradient; shallow dose gradients challenge techniques to deliver a tumoricidal radiation dose while minimizing dose to surrounding tissue. In this work, this ‘dose delivery resolution’ observation is rigorously formalized for a general dose delivery model based on the superposition of dose kernel primitives. It is proven that the spatial resolution of a delivered dose is bounded by the spatial frequency content of the underlying dose kernel, which in turn defines a lower bound in the minimization of a dose optimization objective function. In addition, it is shown that this optimization is penalized by a dose deposition strategy which enforces a constant relative phase (or constant spacing) between individual radiation beams. These results are further refined to provide a direct, analytic method to estimate the dose distribution arising from the minimization of such an optimization function. The efficacy of the overall framework is demonstrated on an image guided small animal microirradiator for a set of two-dimensional hypoxia guided dose prescriptions.
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems where sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem. The likelihood map presents the sensory information in an easy-to-interpret format for decision makers and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The likelihood map approach transforms each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
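Assuming conditionally independent sensors, fusing likelihood maps reduces to adding log-likelihoods over a common surveillance grid. The sketch below illustrates that step only; the grid size, sensor shapes, and function name are hypothetical, and the paper's gossip-based distributed computation is not shown.

```python
import numpy as np

def fuse_likelihood_maps(maps, prior=None):
    """Fuse per-sensor likelihood maps over a common surveillance grid.

    maps  : array of shape (n_sensors, ny, nx); each slice is the likelihood of
            that sensor's data given a target in each grid cell.
    prior : optional (ny, nx) prior map (e.g. terrain or stand-off imagery).
    """
    log_post = np.sum(np.log(np.clip(maps, 1e-300, None)), axis=0)
    if prior is not None:
        log_post += np.log(np.clip(prior, 1e-300, None))
    log_post -= log_post.max()                # normalize for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# two hypothetical sensor modalities on a 50 x 50 grid, target near cell (30, 20)
yy, xx = np.mgrid[0:50, 0:50]
acoustic = np.exp(-((xx - 20) ** 2 + (yy - 30) ** 2) / (2 * 8.0 ** 2))
radar = np.exp(-((xx - 22) ** 2 + (yy - 29) ** 2) / (2 * 5.0 ** 2))
fused = fuse_likelihood_maps(np.stack([acoustic, radar]) + 0.01)
print(np.unravel_index(fused.argmax(), fused.shape))
```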
NASA Technical Reports Server (NTRS)
Green, Robert O.
2001-01-01
Imaging spectroscopy offers a framework based in physics and chemistry for scientific investigation of a wide range of phenomena of interest in the Earth environment. In the scientific discipline of volcanology, knowledge of lava temperature and distribution at the surface provides insight into the status of the volcano and subsurface processes. A remote sensing strategy to measure surface lava temperatures and distribution would support volcanology research. Hot targets such as molten lava emit spectral radiance as a function of temperature. A figure shows a series of radiance spectra calculated with the Planck function for hot targets at different temperatures. A maximum Lambertian solar reflected radiance spectrum is shown as well. While similar in form, each hot target spectrum has a unique spectral shape and is distinct from the solar reflected radiance spectrum. Based on this temperature-dependent signature, imaging spectroscopy provides an innovative approach for the remote-sensing-based measurement of lava temperature. A natural site for investigation of the measurement of lava temperature is the Big Island of Hawaii where molten lava from the Kilauea vent is present at the surface. In the past, Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data sets have been used for the analysis of hot volcanic targets and hot burning fires. The research presented here builds upon and extends this earlier work. The year 2000 Hawaii AVIRIS data set has been analyzed to derive lava temperatures, taking into account factors of fractional fill, solar reflected radiance, and atmospheric attenuation of the surface-emitted radiance. The measurements, analyses, and current results for this research are presented here.
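The temperature-dependent signature comes directly from the Planck function, and inverting it at a single wavelength gives a brightness temperature. The minimal sketch below ignores the fractional-fill, reflected-solar, and atmospheric terms treated in the paper, so it is only the core radiative relation, not the AVIRIS retrieval itself.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann (SI)

def planck_radiance(wavelength, temp):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2 * H * C ** 2 / wavelength ** 5 /
            (np.exp(H * C / (wavelength * K * temp)) - 1.0))

def brightness_temperature(wavelength, radiance):
    """Invert the Planck function for temperature at one wavelength."""
    return (H * C / (wavelength * K) /
            np.log(1.0 + 2 * H * C ** 2 / (wavelength ** 5 * radiance)))

# a 1000 K lava surface observed at 2.2 micrometers (SWIR)
lam = 2.2e-6
print(brightness_temperature(lam, planck_radiance(lam, 1000.0)))  # ~1000.0
```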
ADAPTIVE MATCHING IN RANDOMIZED TRIALS AND OBSERVATIONAL STUDIES
van der Laan, Mark J.; Balzer, Laura B.; Petersen, Maya L.
2014-01-01
SUMMARY In many randomized and observational studies the allocation of treatment among a sample of n independent and identically distributed units is a function of the covariates of all sampled units. As a result, the treatment labels among the units are possibly dependent, complicating estimation and posing challenges for statistical inference. For example, cluster randomized trials frequently sample communities from some target population, construct matched pairs of communities from those included in the sample based on some metric of similarity in baseline community characteristics, and then randomly allocate a treatment and a control intervention within each matched pair. In this case, the observed data can neither be represented as the realization of n independent random variables, nor, contrary to current practice, as the realization of n/2 independent random variables (treating the matched pair as the independent sampling unit). In this paper we study estimation of the average causal effect of a treatment under experimental designs in which treatment allocation potentially depends on the pre-intervention covariates of all units included in the sample. We define efficient targeted minimum loss based estimators for this general design, present a theorem that establishes the desired asymptotic normality of these estimators and allows for asymptotically valid statistical inference, and discuss implementation of these estimators. We further investigate the relative asymptotic efficiency of this design compared with a design in which unit-specific treatment assignment depends only on the units’ covariates. Our findings have practical implications for the optimal design and analysis of pair matched cluster randomized trials, as well as for observational studies in which treatment decisions may depend on characteristics of the entire sample. PMID:25097298
Stock assessment of fishery target species in Lake Koka, Ethiopia.
Tesfaye, Gashaw; Wolff, Matthias
2015-09-01
Effective management is essential for small-scale fisheries to continue providing food and livelihoods for households, particularly in developing countries where other options are often limited. Studies on the population dynamics and stock assessment of fishery target species are thus imperative to sustain their fisheries and the benefits for the society. In Lake Koka (Ethiopia), very little is known about the vital population parameters and exploitation status of the fishery target species: tilapia Oreochromis niloticus, common carp Cyprinus carpio and catfish Clarias gariepinus. Our study, therefore, aimed at determining the vital population parameters and assessing the status of these target species in Lake Koka using length frequency data collected quarterly from commercial catches from 2007-2012. A total of 20,097 fish specimens (distributed as 7,933 tilapia, 6,025 catfish and 6,139 common carp) were measured for the analysis. Von Bertalanffy growth parameters and their confidence intervals were determined from modal progression analysis using ELEFAN I and applying the jackknife technique. Mortality parameters were determined from length-converted catch curves and empirical models. The exploitation status of these target species was then assessed by computing exploitation rates (E) from mortality parameters as well as from size indicators, i.e., assessing the size distribution of fish catches relative to the size at maturity (Lm), the size that provides maximum cohort biomass (Lopt) and the abundance of mega-spawners. The mean values of the growth parameters L∞, K and the growth performance index ø' were 44.5 cm, 0.41/year and 2.90 for O. niloticus, 74.1 cm, 0.28/year and 3.19 for C. carpio and 121.9 cm, 0.16/year and 3.36 for C. gariepinus, respectively. The 95% confidence intervals of the estimates were also computed. Total mortality (Z) estimates were 1.47, 0.83 and 0.72/year for O. niloticus, C. carpio and C. gariepinus, respectively. Our study suggests that O. niloticus is in a healthy state, while C. gariepinus shows signs of growth overfishing (when both exploitation rate (E) and size indicators are considered). In the case of C. carpio, the low exploitation rate encountered would point to underfishing, while the size indicators of the catches suggest that fish that are too small are harvested, leading to growth overfishing. We concluded that fisheries production in Lake Koka could be enhanced by increasing E toward the optimum level of exploitation (Eopt) for the underexploited C. carpio and by increasing the size at first capture (Lc) toward the Lopt range for all target species.
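The assessment quantities involved are simple functions of the growth and mortality parameters. The sketch below shows the standard relations (Von Bertalanffy length-at-age, E = F/Z, and a common Lopt approximation for isometric growth); the natural mortality value used in the example is an assumption, not the study's estimate.

```python
from math import exp

def vbgf_length(age, l_inf, k, t0=0.0):
    """Von Bertalanffy growth function: length at age."""
    return l_inf * (1.0 - exp(-k * (age - t0)))

def exploitation_rate(z, m):
    """Exploitation rate E = F / Z, with fishing mortality F = Z - M."""
    return (z - m) / z

def l_opt(l_inf, m, k):
    """Length of maximum cohort biomass, assuming isometric growth and constant M."""
    return l_inf * 3.0 / (3.0 + m / k)

# illustrative numbers for O. niloticus using the abstract's L_inf, K and Z;
# the natural mortality M below is assumed for the example only
l_inf, k, z, m_assumed = 44.5, 0.41, 1.47, 0.9
print(f"L(3 yr) = {vbgf_length(3, l_inf, k):.1f} cm, "
      f"E = {exploitation_rate(z, m_assumed):.2f}, "
      f"Lopt = {l_opt(l_inf, m_assumed, k):.1f} cm")
```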
Comparing host and target environments for distributed Ada programs
NASA Technical Reports Server (NTRS)
Paulk, Mark C.
1986-01-01
The Ada programming language provides a means of specifying logical concurrency by using multitasking. Extending the Ada multitasking concurrency mechanism into a physically concurrent distributed environment which imposes its own requirements can lead to incompatibilities. These problems are discussed. Using distributed Ada for a target system may be appropriate, but when using the Ada language in a host environment, a multiprocessing model may be more suitable than retargeting an Ada compiler for the distributed environment. The tradeoffs between multitasking on distributed targets and multiprocessing on distributed hosts are discussed. Comparisons of the multitasking and multiprocessing models indicate different areas of application.
Range estimation of passive infrared targets through the atmosphere
NASA Astrophysics Data System (ADS)
Cho, Hoonkyung; Chun, Joohwan; Seo, Doochun; Choi, Seokweon
2013-04-01
Target range estimation is traditionally based on radar and active sonar systems in modern combat systems. However, jamming signals tremendously degrade the performance of such active sensor devices. We introduce a simple target range estimation method and the fundamental limits of the proposed method based on the atmosphere propagation model. Since passive infrared (IR) sensors measure IR signals radiating from objects in different wavelengths, this method has robustness against electromagnetic jamming. The measured target radiance of each wavelength at the IR sensor depends on the emissive properties of target material and various attenuation factors (i.e., the distance between sensor and target and atmosphere environment parameters). MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the results from MODTRAN and atmosphere propagation-based modeling, the target range can be estimated. To analyze the proposed method's performance statistically, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao lower bound (CRLB) via the probability density function of measured radiance. We also compare CRLB and the variance of MLE using Monte-Carlo simulation.
Klimstra, J.D.; O'Connell, A.F.; Pistrang, M.J.; Lewis, L.M.; Herrig, J.A.; Sauer, J.R.
2007-01-01
Science-based monitoring of biological resources is important for a greater understanding of ecological systems and for assessment of the target population using theoretic-based management approaches. When selecting variables to monitor, managers first need to carefully consider their objectives, the geographic and temporal scale at which they will operate, and the effort needed to implement the program. Generally, monitoring can be divided into two categories: index and inferential. Although index monitoring is usually easier to implement, analysis of index data requires strong assumptions about consistency in detection rates over time and space, and parameters are often biased because detectability and spatial variation are not accounted for. In most cases, individuals are not always available for detection during sampling periods, and the entire area of interest cannot be sampled. Conversely, inferential monitoring is more rigorous because it is based on nearly unbiased estimators of spatial distribution. Thus, we recommend that detectability and spatial variation be considered for all monitoring programs that intend to make inferences about the target population or the area of interest. Application of these techniques is especially important for the monitoring of Threatened and Endangered (T&E) species because it is critical to determine if population size is increasing or decreasing with some level of certainty. Use of estimation-based methods and probability sampling will reduce many of the biases inherently associated with index data and provide meaningful information with respect to changes that occur in target populations. We incorporated inferential monitoring into protocols for T&E species spanning a wide range of taxa on the Cherokee National Forest in the Southern Appalachian Mountains. We review the various approaches employed for different taxa and discuss design issues, sampling strategies, data analysis, and the details of estimating detectability using site occupancy. These techniques provide a science-based approach for monitoring and can be of value to all resource managers responsible for management of T&E species.
Van Wynsberge, Simon; Andréfouët, Serge; Hamel, Mélanie A.; Kulbicki, Michel
2012-01-01
Species check-lists are helpful to establish Marine Protected Areas (MPAs) and protect local richness, endemicity, rarity, and biodiversity in general. However, such exhaustive taxonomic lists (i.e., true surrogate of biodiversity) require extensive and expensive censuses, and the use of estimator surrogates (e.g., habitats) is an appealing alternative. In truth, surrogate effectiveness appears from the literature highly variable both in marine and terrestrial ecosystems, making it difficult to provide practical recommendations for managers. Here, we evaluate how the biodiversity reference data set and its inherent bias can influence effectiveness. Specifically, we defined habitats by geomorphology, rugosity, and benthic cover and architecture criteria, and mapped them with satellite images for a New-Caledonian site. Fish taxonomic and functional lists were elaborated from Underwater Visual Censuses, stratified according to geomorphology and exposure. We then tested if MPA networks designed to maximize habitat richness, diversity and rarity could also effectively maximize fish richness, diversity, and rarity. Effectiveness appeared highly sensitive to the fish census design itself, in relation to the type of habitat map used and the scale of analysis. Spatial distribution of habitats (estimator surrogate’s distribution), quantity and location of fish census stations (target surrogate’s sampling), and random processes in the MPA design all affected effectiveness to the point that one small change in the data set could lead to opposite conclusions. We suggest that previous conclusions on surrogacy effectiveness, either positive or negative, marine or terrestrial, should be considered with caution, except in instances where very dense data sets were used without pseudo-replication. Although this does not rule out the validity of using surrogates of species lists for conservation planning, the critical joint examination of both target and estimator surrogates is needed for every case study. PMID:22815891
Edwards, Dylan; Cortes, Mar; Datta, Abhishek; Minhas, Preet; Wassermann, Eric M.; Bikson, Marom
2015-01-01
Transcranial Direct Current Stimulation (tDCS) is a non-invasive, low-cost, well-tolerated technique producing lasting modulation of cortical excitability. Behavioral and therapeutic outcomes of tDCS are linked to the targeted brain regions, but there is little evidence that current reaches the brain as intended. We aimed to: (1) validate a computational model for estimating cortical electric fields in human transcranial stimulation, and (2) assess the magnitude and spread of cortical electric field with a novel High-Definition tDCS (HD-tDCS) scalp montage using a 4×1-Ring electrode configuration. In three healthy adults, Transcranial Electrical Stimulation (TES) over primary motor cortex (M1) was delivered using the 4×1 montage (4× cathode, surrounding a single central anode; montage radius ~3 cm) with sufficient intensity to elicit a discrete muscle twitch in the hand. The estimated current distribution in M1 was calculated using the individualized MRI-based model, and compared with the observed motor response across subjects. The response magnitude was quantified with stimulation over motor cortex as well as anterior and posterior to motor cortex. In each case the model data were consistent with the motor response across subjects. The estimated cortical electric fields with the 4×1 montage were compared (area, magnitude, direction) for TES and tDCS in each subject. We provide direct evidence in humans that TES with a 4×1-Ring configuration can activate motor cortex and that current does not substantially spread outside the stimulation area. Computational models predict that both TES and tDCS waveforms using the 4×1-Ring configuration generate electric fields in cortex with comparable gross current distribution, and preferentially directed normal (inward) currents. The agreement of modeling and experimental data for both current delivery and focality support the use of the HD-tDCS 4×1-Ring montage for cortically targeted neuromodulation. PMID:23370061
Vexler, Vladimir; Yu, Li; Pamulapati, Chandrasena; Garrido, Rosario; Grimm, Hans Peter; Sriraman, Priya; Bohini, Sandhya; Schraeml, Michael; Singh, Usha; Brandt, Michael; Ries, Stefan; Ma, Han; Klumpp, Klaus; Ji, Changhua
2013-01-01
CD81 is an essential receptor for hepatitis C virus (HCV). K21 is a novel high affinity anti-CD81 antibody with potent broad spectrum anti-HCV activity in vitro. The pharmacokinetics (PK), pharmacodynamics and liver distribution of K21 were characterized in cynomolgus monkeys after intravenous (i.v.) administration of K21. Characteristic target-mediated drug disposition (TMDD) was shown based on the PK profile of K21 and a semi-mechanistic TMDD model was used to analyze the data. From the TMDD model, the estimated size of the total target pool at baseline (Vc • Rbase) is 16 nmol/kg and the estimated apparent Michaelis-Menten constant (KM) is 4.01 nM. A simulation using estimated TMDD parameters indicated that the number of free receptors remains below 1% for at least 3 h after an i.v. bolus of 7 mg/kg. Experimentally, the availability of free CD81 on peripheral lymphocytes was measured by immunostaining with anti-CD81 antibody JS81. After K21 administration, a dose- and time-dependent reduction in free CD81 on peripheral lymphocytes was observed. Fewer than 3% of B cells could bind JS81 3 h after a 7 mg/kg dose. High concentrations of K21 were found in liver homogenates, and the liver/serum ratio of K21 increased time-dependently and reached ~160 at 168 h post-administration. The presence of K21 bound to hepatocytes was confirmed by immunohistochemistry. The fast serum clearance of K21 and accumulation in the liver are consistent with TMDD. The TMDD-driven liver accumulation of the anti-CD81 antibody K21 supports the further investigation of K21 as a therapeutic inhibitor of HCV entry. PMID:23924796
Vulnerability of dynamic genetic conservation units of forest trees in Europe to climate change.
Schueler, Silvio; Falk, Wolfgang; Koskela, Jarkko; Lefèvre, François; Bozzano, Michele; Hubert, Jason; Kraigher, Hojka; Longauer, Roman; Olrik, Ditte C
2014-05-01
A transnational network of genetic conservation units for forest trees was recently documented in Europe, aiming at the conservation of evolutionary processes and the adaptive potential of natural or man-made tree populations. In this study, we quantified the vulnerability of individual conservation units and the whole network to climate change using climate favourability models and the estimated velocity of climate change. Compared to the overall climate niche of the analysed target species, populations at the warm and dry end of the species niche are underrepresented in the network. However, by 2100, target species in 33-65% of conservation units, mostly located in southern Europe, will be at the limit of or outside the species' current climatic niche, as demonstrated by favourabilities below the required model sensitivity of 95%. The highest average decrease in favourabilities throughout the network can be expected for coniferous trees, although they mainly occur within units in mountainous landscapes for which we estimated lower velocities of change. Generally, the species-specific estimates of favourabilities showed only low correlations with the velocity of climate change in individual units, indicating that both vulnerability measures should be considered for climate risk analysis. The variation in favourabilities among target species within the same conservation units is expected to increase with climate change and will likely require a prioritization among co-occurring species. The present results suggest that there is a strong need to intensify monitoring efforts and to develop additional conservation measures for populations in the most vulnerable units. Also, our results call for continued transnational actions for genetic conservation of European forest trees, including the establishment of dynamic conservation populations outside the current species distribution ranges within European assisted migration schemes. © 2013 John Wiley & Sons Ltd.
McClanahan, Timothy R.; Maina, Joseph M.; Graham, Nicholas A. J.; Jones, Kendall R.
2016-01-01
Fish biomass is a primary driver of coral reef ecosystem services and has high sensitivity to human disturbances, particularly fishing. Estimates of fish biomass, their spatial distribution, and recovery potential are important for evaluating reef status and crucial for setting management targets. Here we modeled fish biomass estimates across all reefs of the western Indian Ocean using key variables that predicted the empirical data collected from 337 sites. These variables were used to create biomass and recovery time maps to prioritize spatially explicit conservation actions. The resultant fish biomass map showed high variability ranging from ~15 to 2900 kg/ha, primarily driven by human populations, distance to markets, and fisheries management restrictions. Lastly, we assembled data based on the age of fisheries closures and showed that biomass takes ~25 years to recover to typical equilibrium values of ~1200 kg/ha. The recovery times to biomass levels for sustainable fishing yields, maximum diversity, and ecosystem stability or conservation targets once fishing is suspended were modeled to estimate the temporal costs of restrictions. The mean time to recovery for the whole region to the conservation target was 8.1 (±3 SD) years, while recovery to sustainable fishing thresholds was between 0.5 and 4 years, but with high spatial variation. Recovery prioritization scenario models included one where local governance prioritized recovery of degraded reefs and two that prioritized minimizing recovery time, where countries either operated independently or collaborated. The regional collaboration scenario selected remote areas for conservation with uneven national responsibilities and spatial coverage, which could undermine collaboration. There is the potential to achieve sustainable fisheries within a decade by promoting these pathways according to their social-ecological suitability. PMID:27149673
Eckfeldt, John H; Karger, Amy B; Miller, W Greg; Rynders, Gregory P; Inker, Lesley A
2015-07-01
Cystatin C is becoming an increasingly popular biomarker for estimating glomerular filtration rate, and accurate measurements of cystatin C concentrations are necessary for accurate estimates of glomerular filtration rate. To assess the accuracy of cystatin C concentration measurements in laboratories participating in the College of American Pathologists CYS Survey. Two fresh frozen serum pools, the first from apparently healthy donors and the second from patients with chronic kidney disease, were prepared and distributed to laboratories participating in the CYS Survey along with the 2 usual processed human plasma samples. Target values were established for each pool by using 2 immunoassays and ERM-DA471/IFCC international reference material. For the normal fresh frozen pool (ERM-DA471/IFCC-traceable target of 0.960 mg/L), the all-method mean (SD, % coefficient of variation [CV]) reported by all of the 123 reporting laboratories was 0.894 mg/L (0.128 mg/L, 14.3%). For the chronic kidney disease pool (ERM-DA471/IFCC-traceable target of 2.37 mg/L), the all-method mean (SD, %CV) was 2.258 mg/L (0.288 mg/L, 12.8%). There were substantial method-specific biases (mean values, in mg/L, reported for the normal pool were 0.780 for Siemens, 0.870 for Gentian, 0.967 for Roche, 1.061 for Diazyme, and 0.970 for other/not specified reagents; mean values for the chronic kidney disease pool were 2.052 for Siemens, 2.312 for Gentian, 2.247 for Roche, 2.909 for Diazyme, and 2.413 for other/not specified reagents). Manufacturers need to improve the accuracy of cystatin C measurement procedures if cystatin C is to achieve its full potential as a biomarker for estimating glomerular filtration rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Pengpeng; Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY; Wu, Leester
Purpose: To integrate imaging performance characteristics, specifically sensitivity and specificity, of magnetic resonance angiography (MRA) and digital subtraction angiography (DSA) into arteriovenous malformation (AVM) radiosurgery planning and evaluation. Methods and Materials: Images of 10 patients with AVMs located in critical brain areas were analyzed in this retrospective planning study. The image findings were first used to estimate the sensitivity and specificity of MRA and DSA. Instead of accepting the imaging observation as a binary (yes or no) mapping of AVM location, our alternative is to translate the image into an AVM probability distribution map by incorporating imagers' sensitivity and specificity, and to use this map as a basis for planning and evaluation. Three sets of radiosurgery plans, targeting the MRA and DSA positive overlap, MRA positive, and DSA positive, were optimized for best conformality. The AVM obliteration rate (OR_AVM) and brain complication rate served as endpoints for plan comparison. Results: In our 10-patient study, the specificities and sensitivities of MRA and DSA were estimated to be (0.95, 0.74) and (0.71, 0.95), respectively. The positive overlap of MRA and DSA accounted for 67.8% ± 4.9% of the estimated true AVM volume. Compared with plans targeting the MRA- and DSA-positive overlap, plans targeting MRA-positive or DSA-positive improved OR_AVM by 4.1% ± 1.9% and 15.7% ± 8.3%, while also increasing the complication rate by 1.0% ± 0.8% and 4.4% ± 2.3%, respectively. Conclusions: The impact of imagers' quality should be quantified and incorporated in AVM radiosurgery planning and evaluation to facilitate clinical decision making.
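The key step above, turning binary MRA/DSA readings into a per-voxel AVM probability by folding in each imager's sensitivity and specificity, amounts to a Bayesian update. The sketch below illustrates that update for a single voxel, assuming conditional independence of the two modalities given the true tissue state; the prior is illustrative and this is not the authors' planning code, although the sensitivity/specificity pairs are the ones quoted in the abstract.

```python
def voxel_avm_probability(prior, readings, sens_spec):
    """Posterior P(voxel contains AVM tissue) given binary imager readings.

    prior     -- prior probability that the voxel contains AVM tissue
    readings  -- dict: imager name -> True (positive) / False (negative)
    sens_spec -- dict: imager name -> (sensitivity, specificity)
    Assumes the imagers are conditionally independent given the true state.
    """
    p_avm, p_not = prior, 1.0 - prior
    for imager, positive in readings.items():
        sens, spec = sens_spec[imager]
        if positive:
            p_avm *= sens            # P(positive | AVM)
            p_not *= (1.0 - spec)    # P(positive | no AVM)
        else:
            p_avm *= (1.0 - sens)    # P(negative | AVM)
            p_not *= spec            # P(negative | no AVM)
    return p_avm / (p_avm + p_not)

# (sensitivity, specificity) per modality as quoted above; prior is illustrative.
sens_spec = {"MRA": (0.74, 0.95), "DSA": (0.95, 0.71)}
print(voxel_avm_probability(0.5, {"MRA": True, "DSA": True}, sens_spec))
print(voxel_avm_probability(0.5, {"MRA": False, "DSA": True}, sens_spec))
```

Applying such an update voxel by voxel over the imaged volume would yield a probability distribution map of the kind the plans are optimized against.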
The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation
NASA Technical Reports Server (NTRS)
Tsou, Haiping; Yan, Tsun-Yee
2000-01-01
This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target changes with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum-likelihood-based image tracking technique described in this paper is a closed-loop structure capable of providing iterative updates of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
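Under the stated model (known target profile, independent additive white Gaussian noise per pixel), the maximum-likelihood estimate of a pure translation reduces to the peak of the cross-correlation between the received frame and the reference image. The sketch below shows that reduction with a frequency-domain correlation on synthetic data; the closed-loop update and the transform-domain weighting of the paper are omitted, and all sizes and noise levels are arbitrary.

```python
import numpy as np

def ml_shift_estimate(reference, frame):
    """Integer-pixel ML shift estimate of `frame` relative to `reference`
    under additive white Gaussian noise: the peak of their circular
    cross-correlation, computed in the Fourier domain."""
    corr = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(reference))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map indices above the half-size back to negative shifts.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
ref = np.zeros((64, 64))
ref[28:36, 28:36] = 1.0                          # known extended-target profile
frame = np.roll(ref, (5, -3), axis=(0, 1))       # true shift (5, -3)
frame += 0.2 * rng.standard_normal(frame.shape)  # AWGN on each pixel
print(ml_shift_estimate(ref, frame))             # expected roughly (5, -3)
```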
Robust optimization based upon statistical theory.
Sobotta, B; Söhn, M; Alber, M
2010-08-01
Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose distributions that are robust against interfraction and intrafraction motion alike, effectively removing the need for indiscriminate safety margins.
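The workflow described, sampling geometry instances from a motion model, evaluating a dose metric on each sample, and optimizing the mean and variance of the resulting outcome distribution instead of a single nominal value, can be sketched with a toy one-dimensional example. Everything below (the Gaussian motion model, the coverage-style metric, the mean/variance objective with a crude normal-tissue penalty, and the margin grid) is illustrative rather than the authors' optimizer.

```python
import numpy as np

rng = np.random.default_rng(1)

def target_coverage(margin, shifts, target_halfwidth=1.0):
    """Toy dose metric: for each sampled rigid shift of the anatomy, the
    fraction of a 1-D target that still lies inside the treated interval
    [-(target_halfwidth + margin), +(target_halfwidth + margin)]."""
    field = target_halfwidth + margin
    lo = np.maximum(-target_halfwidth + shifts, -field)
    hi = np.minimum(target_halfwidth + shifts, field)
    return np.clip(hi - lo, 0.0, None) / (2.0 * target_halfwidth)

# Motion model: interfraction shifts drawn from a patient-specific Gaussian.
shifts = rng.normal(loc=0.0, scale=0.4, size=2000)

# Optimize the outcome distribution: trade mean coverage against its spread
# and against a crude penalty for irradiating more normal tissue.
best = None
for margin in np.linspace(0.0, 1.5, 31):
    outcome = target_coverage(margin, shifts)       # sampled outcome distribution
    score = outcome.mean() - 2.0 * outcome.std() - 0.3 * margin
    if best is None or score > best[1]:
        best = (margin, score, outcome.mean(), outcome.std())

margin, _, mean_cov, std_cov = best
print(f"chosen margin = {margin:.2f}, mean coverage = {mean_cov:.3f}, sd = {std_cov:.3f}")
```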
Pose estimation of industrial objects towards robot operation
NASA Astrophysics Data System (ADS)
Niu, Jie; Zhou, Fuqiang; Tan, Haishu; Cao, Yu
2017-10-01
With the advantages of wide range, non-contact operation, and high flexibility, visual estimation of target pose has been widely applied in modern industry, robot guidance, and other engineering practice. However, due to the influence of complicated industrial environments, outside interference, a lack of distinctive object features, and limitations of the camera, visual estimation of target pose still faces many challenges. Focusing on these problems, a pose estimation method for industrial objects is developed based on 3D models of the targets. By matching the extracted shape characteristics of objects against a prior 3D model database of targets, the method recognizes the target; the object pose is then determined using a monocular vision measurement model. The experimental results show that this method can estimate the position of rigid objects from poor image information, and it provides a guiding basis for the operation of industrial robots.
Fast and accurate spectral estimation for online detection of partial broken bar in induction motors
NASA Astrophysics Data System (ADS)
Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti
2018-01-01
In this paper, an online and real-time system is presented for detecting a partial broken rotor bar (BRB) in inverter-fed squirrel cage induction motors under light load conditions. With minor modifications, this system can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended-Kalman-filter-based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and sensor cost are minimal, as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7 based embedded target ported through Simulink Real-Time. Evaluation of the detection threshold and of fault detectability under different conditions of load and fault severity is carried out with the empirical cumulative distribution function.
Ground target recognition using rectangle estimation.
Grönwall, Christina; Gustafsson, Fredrik; Millnert, Mille
2006-11-01
We propose a ground target recognition method based on 3-D laser radar data. The method handles general 3-D scattered data. It is based on the fact that man-made objects of complex shape can be decomposed into a set of rectangles. The ground target recognition method consists of four steps: 3-D size and orientation estimation, target segmentation into parts of approximately rectangular shape, identification of segments that represent the target's functional/main parts, and target matching with CAD models. The core of this approach is rectangle estimation. The performance of the rectangle estimation method is evaluated statistically using Monte Carlo simulations. A case study on tank recognition is shown, where 3-D data from four fundamentally different types of laser radar systems are used. Although the approach is tested on rather few examples, we believe that it is promising.
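Rectangle estimation is the core step named above. One minimal way to fit an oriented rectangle to scattered 2-D points, not necessarily the authors' estimator, is to rotate the points into their principal axes and take the axis-aligned extent there. The sketch below does that on synthetic returns from a rectangular "hull".

```python
import numpy as np

def fit_oriented_rectangle(points):
    """Fit an oriented rectangle to Nx2 points: rotate into the principal
    axes, take the axis-aligned extent there, and report centre, size and
    orientation. Returns (centre, (length, width), angle_radians)."""
    centre = points.mean(axis=0)
    centred = points - centre
    # Principal axes from the 2x2 covariance matrix (major axis first).
    eigvals, eigvecs = np.linalg.eigh(np.cov(centred.T))
    axes = eigvecs[:, ::-1]
    local = centred @ axes                 # coordinates in the rectangle frame
    size = local.max(axis=0) - local.min(axis=0)
    angle = np.arctan2(axes[1, 0], axes[0, 0])
    return centre, (size[0], size[1]), angle

# Synthetic "tank hull": a 7 x 3 rectangle of scattered returns, rotated 30 deg.
rng = np.random.default_rng(2)
pts = rng.uniform([-3.5, -1.5], [3.5, 1.5], size=(500, 2))
theta = np.deg2rad(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts = pts @ rot.T + np.array([10.0, 5.0])

centre, (length, width), angle = fit_oriented_rectangle(pts)
print(centre, length, width, np.rad2deg(angle))
```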
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi
2015-11-01
We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called the version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse 'temperature' Γ. The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ = 0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α = (log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ, which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N ≤ 10).
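For small N the maximum entropy distribution compatible with the M expectation values can be computed directly: it is an exponential-family distribution whose Lagrange multipliers are adjusted until the model reproduces the observed expectations. The sketch below enumerates all 2^N states and fits the multipliers by plain gradient ascent; the random observables, the target distribution, and the step size are illustrative, and the replica calculation of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 8, 12                              # binary variables, observables

states = np.array([[(s >> i) & 1 for i in range(N)] for s in range(2 ** N)],
                  dtype=float) * 2 - 1    # all 2^N spin configurations (+/-1)
observables = rng.choice([-1.0, 1.0], size=(M, N))
features = states @ observables.T / N     # value of each observable per state

# "Target" distribution and the expectation values we are told about.
target = rng.dirichlet(np.ones(len(states)))
target_expect = target @ features

# Fit the maximum-entropy model P(s) proportional to exp(sum_a lambda_a * O_a(s)).
lam = np.zeros(M)
for _ in range(3000):
    logits = features @ lam
    p = np.exp(logits - logits.max())
    p /= p.sum()
    grad = target_expect - p @ features    # match the observed expectations
    lam += 0.5 * grad
    if np.abs(grad).max() < 1e-8:
        break

distance = 0.5 * np.abs(p - target).sum()  # total-variation distance to the target
print("max |expectation mismatch| =", np.abs(grad).max())
print("distance to target         =", distance)
```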
Accuracy of parameter estimates for closely spaced optical targets using multiple detectors
NASA Astrophysics Data System (ADS)
Dunn, K. P.
1981-10-01
In order to obtain the cross-scan position of an optical target, more than one scanning detector is used. As expected, the cross-scan position estimation performance degrades when two nearby optical targets interfere with each other. Theoretical bounds on the two-dimensional parameter estimation performance for two closely spaced optical targets are found. Two particular classes of scanning detector arrays, namely, the crow's foot and the brickwall (or mosaic) patterns, are considered.
Estimating the Probability of a Diffusing Target Encountering a Stationary Sensor.
1985-07-01
Technical report NPS55-85-013, Naval Postgraduate School, Monterey, California.
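The report title poses a classic question: the probability that a target undergoing diffusion (Brownian motion) passes within detection range of a fixed sensor during a time window. A minimal Monte Carlo sketch of that probability follows; the diffusion coefficient, initial range, sensor radius, and time horizon are all hypothetical and are not taken from the report.

```python
import numpy as np

def encounter_probability(n_trials=20000, n_steps=500, dt=1.0,
                          diffusion=0.5, start=(30.0, 0.0), sensor_radius=5.0):
    """Monte Carlo estimate of P(a 2-D diffusing target passes within
    `sensor_radius` of a sensor at the origin within n_steps * dt)."""
    rng = np.random.default_rng(4)
    sigma = np.sqrt(2.0 * diffusion * dt)          # per-step displacement scale
    pos = np.tile(np.asarray(start, dtype=float), (n_trials, 1))
    detected = np.zeros(n_trials, dtype=bool)
    for _ in range(n_steps):
        pos += sigma * rng.standard_normal(pos.shape)
        detected |= (np.hypot(pos[:, 0], pos[:, 1]) <= sensor_radius)
    return detected.mean()

print(encounter_probability())
```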
Namazi-Rad, Mohammad-Reza; Mokhtarian, Payam; Perez, Pascal
2014-01-01
Generating a reliable computer-simulated synthetic population is necessary for knowledge processing and decision-making analysis in agent-based systems in order to measure, interpret and describe each target area and the human activity patterns within it. In this paper, both synthetic reconstruction (SR) and combinatorial optimisation (CO) techniques are discussed for generating a reliable synthetic population for a certain geographic region (in Australia) using aggregated- and disaggregated-level information available for such an area. A CO algorithm using the quadratic function of population estimators is presented in this paper in order to generate a synthetic population while considering a two-fold nested structure for the individuals and households within the target areas. The baseline population in this study is generated from the confidentialised unit record files (CURFs) and 2006 Australian census tables. The dynamics of the created population is then projected over five years using a dynamic micro-simulation model for individual- and household-level demographic transitions. This projection is then compared with the 2011 Australian census. A prediction interval is provided for the population estimates obtained by the bootstrapping method, by which the variability structure of a predictor can be replicated in a bootstrap distribution. PMID:24733522
Pepin, Kim M; Eisen, Rebecca J; Mead, Paul S; Piesman, Joseph; Fish, Durland; Hoen, Anne G; Barbour, Alan G; Hamer, Sarah; Diuk-Wasser, Maria A
2012-06-01
Prevention and control of Lyme disease is difficult because of the complex biology of the pathogen's (Borrelia burgdorferi) vector (Ixodes scapularis) and multiple reservoir hosts with varying degrees of competence. Cost-effective implementation of tick- and host-targeted control methods requires an understanding of the relationship between pathogen prevalence in nymphs, nymph abundance, and incidence of human cases of Lyme disease. We quantified the relationship between estimated acarological risk and human incidence using county-level human case data and nymphal prevalence data from field-derived estimates in 36 eastern states. The estimated density of infected nymphs (mDIN) was significantly correlated with human incidence (r = 0.69). The relationship was strongest in high-prevalence areas, but it varied by region and state, partly because of the distribution of B. burgdorferi genotypes. More information is needed in several high-prevalence states before DIN can be used for cost-effectiveness analyses.
Inventory and transport of plastic debris in the Laurentian Great Lakes.
Hoffman, Matthew J; Hittinger, Eric
2017-02-15
Plastic pollution in the world's oceans has received much attention, but there has been increasing concern about the high concentrations of plastic debris in the Laurentian Great Lakes. Using census data and methodologies used to study ocean debris, we derive a first estimate of 9887 metric tonnes per year of plastic debris entering the Great Lakes. These estimates are translated into population-dependent particle inputs, which are advected using currents from a hydrodynamic model to map the spatial distribution of plastic debris in the Great Lakes. Model results compare favorably with previously published sampling data. The samples are used to calibrate the model to derive surface microplastic mass estimates of 0.0211 metric tonnes in Lake Superior, 1.44 metric tonnes in Huron, and 4.41 metric tonnes in Erie. These results have many applications, including informing cleanup efforts, helping target pollution prevention, and understanding the inter-state or international flows of plastic pollution. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, Haizhou; Zhang, Yanwen; Zhu, Zihua
Single crystalline 6H-SiC samples were irradiated at 150 K using 2 MeV Pt ions. Local volume swelling is determined by electron energy loss spectroscopy (EELS), and a nearly sigmoidal dependence on irradiation dose is observed. The disorder profiles and ion distribution are determined by Rutherford backscattering spectrometry (RBS), transmission electron microscopy, and secondary ion mass spectrometry. Since the volume swelling reaches 12% over the damage region under high ion fluence, lattice expansion is considered and corrected during the data analysis of the RBS spectra to obtain depth profiles. Projectile and damage profiles are estimated by SRIM (Stopping and Range of Ions in Matter). Compared with the measured profiles, the SRIM code significantly overestimates the electronic stopping power for the slow heavy Pt ions, and large deviations are observed in the predicted ion distribution and damage profiles. Utilizing the reciprocity method, which is based on the invariance of the inelastic excitation in ion-atom collisions against interchange of projectile and target, a much lower electronic stopping is deduced. A simple approach based on reducing the density of the SiC target in the SRIM simulation is proposed to compensate for the overestimated SRIM electronic stopping power values. Better damage profiles and ion ranges are predicted.
Two months of disdrometer data in the Paris area
NASA Astrophysics Data System (ADS)
Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel
2018-05-01
The Hydrology, Meteorology, and Complexity laboratory of École des Ponts ParisTech (hmco.enpc.fr) has made available a data set of optical disdrometer measurements from a campaign involving three collocated devices from two different manufacturers, relying on different underlying technologies (one Campbell Scientific PWS100 and two OTT Parsivel2 instruments). The campaign took place in January-February 2016 in the Paris area (France). Disdrometers provide information on the size and velocity of drops falling through the sampling area of the devices, which is roughly a few tens of cm². This enables the drop size distribution to be estimated and, for example, rainfall microphysics, kinetic energy, or radar quantities to be studied further. Raw data, i.e. essentially a matrix containing the number of drops in each class of size and velocity, along with more aggregated products, such as the rain rate or the filtered drop size distribution, are available. Link to the data set: https://zenodo.org/record/1240168 (DOI: https://doi.org/10.5281/zenodo.1240168).
Calibration of NMR well logs from carbonate reservoirs with laboratory NMR measurements and μXRCT
Mason, Harris E.; Smith, Megan M.; Hao, Yue; ...
2014-12-31
The use of nuclear magnetic resonance (NMR) well log data has the potential to provide in-situ porosity, pore size distributions, and permeability of target carbonate CO₂ storage reservoirs. However, these methods, which have been successfully applied to sandstones, have yet to be completely validated for carbonate reservoirs. Here, we have taken an approach to validate NMR measurements of carbonate rock cores against independent measurements of permeability and pore surface area to volume (S/V) distributions obtained using differential pressure measurements and micro X-ray computed tomography (μXRCT) imaging, respectively. We observe that using standard methods for determining permeability from NMR data incorrectly predicts these values by orders of magnitude. However, we do observe promise that NMR measurements provide reasonable estimates of pore S/V distributions, and that, with further independent measurements of carbonate rock properties, universally applicable relationships between NMR-measured properties may be developed for in-situ well logging applications in carbonate reservoirs.
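One widely used "standard method" for turning NMR logs into permeability is an SDR-type relation, k ≈ C · φ^4 · T2lm^2, where T2lm is the logarithmic mean of the measured T2 distribution. The snippet below implements that relation on a made-up T2 distribution with commonly quoted sandstone default coefficients, which is precisely the kind of calibration the text reports failing by orders of magnitude for carbonates.

```python
import numpy as np

def t2_log_mean(t2_bins_ms, amplitudes):
    """Logarithmic-mean T2 of a measured T2 distribution."""
    w = amplitudes / amplitudes.sum()
    return np.exp(np.sum(w * np.log(t2_bins_ms)))

def sdr_permeability_md(porosity, t2lm_ms, c=4.0, a=4.0, b=2.0):
    """SDR-style estimate k = c * phi^a * T2lm^b (in millidarcy), with
    commonly quoted sandstone defaults for c, a and b."""
    return c * porosity ** a * t2lm_ms ** b

# Hypothetical T2 distribution from a core plug (bins in ms, relative amplitudes).
t2_bins = np.logspace(0, 3.5, 30)          # 1 ms .. ~3160 ms
amps = np.exp(-0.5 * ((np.log10(t2_bins) - 2.0) / 0.4) ** 2)

t2lm = t2_log_mean(t2_bins, amps)
print(f"T2lm = {t2lm:.0f} ms, k_SDR = {sdr_permeability_md(0.18, t2lm):.1f} mD")
```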
Optimizing Distribution of Pandemic Influenza Antiviral Drugs
Huang, Hsin-Chan; Morton, David P.; Johnson, Gregory P.; Gutfraind, Alexander; Galvani, Alison P.; Clements, Bruce; Meyers, Lauren A.
2015-01-01
We provide a data-driven method for optimizing pharmacy-based distribution of antiviral drugs during an influenza pandemic in terms of overall access for a target population and apply it to the state of Texas, USA. We found that during the 2009 influenza pandemic, the Texas Department of State Health Services achieved an estimated statewide access of 88% (proportion of population willing to travel to the nearest dispensing point). However, access reached only 34.5% of US postal code (ZIP code) areas containing <1,000 underinsured persons. Optimized distribution networks increased expected access to 91% overall and 60% in hard-to-reach regions, and 2 or 3 major pharmacy chains achieved near maximal coverage in well-populated areas. Independent pharmacies were essential for reaching ZIP code areas containing <1,000 underinsured persons. This model was developed during a collaboration between academic researchers and public health officials and is available as a decision support tool for Texas Department of State Health Services at a Web-based interface. PMID:25625858
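The underlying optimization, choosing dispensing sites so that as much of the target population as possible lies within its willingness-to-travel distance of one, is a maximum-coverage problem. Below is a minimal greedy sketch on invented coordinates; the study's own model is data-driven and more detailed, so this only illustrates the structure of the decision.

```python
import numpy as np

def greedy_dispensing_sites(pop_xy, pop_counts, site_xy, k, max_travel):
    """Greedily pick k sites maximizing the population within `max_travel`
    of at least one chosen site (simple maximum-coverage heuristic)."""
    dist = np.linalg.norm(pop_xy[:, None, :] - site_xy[None, :, :], axis=2)
    covered_by = dist <= max_travel          # population block x candidate site
    chosen, covered = [], np.zeros(len(pop_xy), dtype=bool)
    for _ in range(k):
        gains = [(pop_counts[~covered & covered_by[:, j]].sum(), j)
                 for j in range(len(site_xy)) if j not in chosen]
        gain, best = max(gains)
        if gain == 0:
            break
        chosen.append(best)
        covered |= covered_by[:, best]
    access = pop_counts[covered].sum() / pop_counts.sum()
    return chosen, access

rng = np.random.default_rng(5)
pop_xy = rng.uniform(0, 100, size=(300, 2))      # population block centroids
pop_counts = rng.integers(100, 5000, size=300)   # persons per block
site_xy = rng.uniform(0, 100, size=(40, 2))      # candidate pharmacies

sites, access = greedy_dispensing_sites(pop_xy, pop_counts, site_xy, k=8,
                                         max_travel=15.0)
print(sites, f"estimated access = {access:.1%}")
```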
NASA Astrophysics Data System (ADS)
Kurkuchekov, V.; Kandaurov, I.; Trunev, Y.
2018-05-01
A simple and inexpensive X-ray diagnostic tool was designed for measuring the cross-sectional current density distribution in a low-relativistic pulsed electron beam produced in a source based on an arc-discharge plasma cathode and multiaperture diode-type electron optical system. The beam parameters were as follows: U_acc = 50–110 kV, I_beam = 20–100 A, τ_beam = 0.1–0.3 ms. The beam effective diameter was ca. 7 cm. Based on a pinhole camera, the diagnostic allows one to obtain a 2D profile of electron beam flux distribution on a flat metal target in a single shot. The linearity of the diagnostic system response to the electron flux density was established experimentally. Spatial resolution of the diagnostic was also estimated in special test experiments. The optimal choice of the main components of the diagnostic technique is discussed.
SAR target recognition and posture estimation using spatial pyramid pooling within CNN
NASA Astrophysics Data System (ADS)
Peng, Lijiang; Liu, Xiaohua; Liu, Ming; Dong, Liquan; Hui, Mei; Zhao, Yuejin
2018-01-01
Many convolutional neural network (CNN) architectures have been proposed to improve performance on synthetic aperture radar automatic target recognition (SAR-ATR) and have achieved state-of-the-art results on target classification with the MSTAR database, but few methods address the estimation of the depression angle and azimuth angle of targets. To learn better hierarchical feature representations for both the 10-class target classification task and target posture estimation tasks, we propose a new CNN architecture with spatial pyramid pooling (SPP), which builds a hierarchy of feature maps by dividing the convolved feature maps from finer to coarser levels to aggregate local features of SAR images. Experimental results on the MSTAR database show that the proposed architecture achieves a recognition accuracy of 99.57% on the 10-class target classification task, matching the most recent state-of-the-art methods, and also performs well on target posture estimation tasks covering variation in depression angle and azimuth angle. Moreover, the results point to further applications of deep learning to SAR target posture description.
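Spatial pyramid pooling converts a convolutional feature map of arbitrary size into a fixed-length vector by max-pooling it over grids at several resolutions and concatenating the results. A minimal NumPy version of that pooling step, independent of any particular CNN framework and of the authors' network, is sketched below.

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (channels, H, W) feature map over a pyramid of grids
    (1x1, 2x2, 4x4, ...) and concatenate into one fixed-length vector."""
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Split rows and columns into n roughly equal slices.
        row_edges = np.linspace(0, h, n + 1).astype(int)
        col_edges = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[:, row_edges[i]:row_edges[i + 1],
                                      col_edges[j]:col_edges[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))   # per-channel max
    return np.concatenate(pooled)      # length = c * sum(n*n for n in levels)

fmap = np.random.default_rng(6).standard_normal((32, 17, 23))  # odd-sized map
vec = spatial_pyramid_pool(fmap)
print(vec.shape)    # (32 * (1 + 4 + 16),) = (672,)
```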
Improved False Discovery Rate Estimation Procedure for Shotgun Proteomics.
Keich, Uri; Kertesz-Farkas, Attila; Noble, William Stafford
2015-08-07
Interpreting the potentially vast number of hypotheses generated by a shotgun proteomics experiment requires a valid and accurate procedure for assigning statistical confidence estimates to identified tandem mass spectra. Despite the crucial role such procedures play in most high-throughput proteomics experiments, the scientific literature has not reached a consensus about the best confidence estimation methodology. In this work, we evaluate, using theoretical and empirical analysis, four previously proposed protocols for estimating the false discovery rate (FDR) associated with a set of identified tandem mass spectra: two variants of the target-decoy competition protocol (TDC) of Elias and Gygi and two variants of the separate target-decoy search protocol of Käll et al. Our analysis reveals significant biases in the two separate target-decoy search protocols. Moreover, the one TDC protocol that provides an unbiased FDR estimate among the target PSMs does so at the cost of forfeiting a random subset of high-scoring spectrum identifications. We therefore propose the mix-max procedure to provide unbiased, accurate FDR estimates in the presence of well-calibrated scores. The method avoids biases associated with the two separate target-decoy search protocols and also avoids the propensity for target-decoy competition to discard a random subset of high-scoring target identifications.
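The target-decoy competition estimate referenced above has a simple closed form: after each spectrum keeps only its best match from a combined target-plus-decoy search, the FDR among target identifications above a score threshold is estimated from the number of surviving decoy matches above the same threshold. A hedged sketch follows; the +1 pseudocount convention and the toy score distributions are illustrative.

```python
import numpy as np

def tdc_fdr(target_scores, decoy_scores, threshold):
    """Target-decoy competition FDR estimate at a score threshold, after
    each spectrum has kept only its best (target vs. decoy) match:
    FDR is approximately (#decoys above threshold + 1) / #targets above threshold."""
    n_targets = np.sum(target_scores >= threshold)
    n_decoys = np.sum(decoy_scores >= threshold)
    return (n_decoys + 1) / max(n_targets, 1)

# Toy score lists for spectra whose best match was a target / a decoy PSM.
rng = np.random.default_rng(7)
targets = np.concatenate([rng.normal(3.0, 1.0, 800),    # correct matches
                          rng.normal(0.0, 1.0, 200)])   # incorrect target matches
decoys = rng.normal(0.0, 1.0, 200)

for thr in (1.0, 2.0, 3.0):
    print(thr, round(tdc_fdr(targets, decoys, thr), 3))
```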
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and phase to reconstruct the missing areas.
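The loop described, keeping the known pixels while repeatedly enforcing an estimated Fourier magnitude to retrieve the phase, is an error-reduction (Gerchberg-Saxton-style) iteration. The sketch below runs that loop on a toy patch whose "estimated" magnitude is simply taken from the complete patch, sidestepping the paper's magnitude-estimation scheme; it is for illustration only.

```python
import numpy as np

def er_reconstruct(patch, known_mask, target_magnitude, n_iter=300):
    """Error-reduction iteration: alternate between (1) forcing the Fourier
    magnitude to `target_magnitude` and (2) re-imposing the known pixels."""
    estimate = patch * known_mask            # unknown pixels start at zero
    for _ in range(n_iter):
        spectrum = np.fft.fft2(estimate)
        phase = np.angle(spectrum)
        estimate = np.fft.ifft2(target_magnitude * np.exp(1j * phase)).real
        estimate[known_mask] = patch[known_mask]   # keep the known intensities
    return estimate

rng = np.random.default_rng(8)
x = np.linspace(0, 4 * np.pi, 32)
texture = np.sin(x)[:, None] * np.cos(x)[None, :] + 0.1 * rng.standard_normal((32, 32))

mask = np.ones_like(texture, dtype=bool)
mask[12:20, 12:20] = False                  # 8x8 missing block

magnitude = np.abs(np.fft.fft2(texture))    # stand-in for the estimated magnitude
recon = er_reconstruct(texture, mask, magnitude)
print("RMSE over the missing area:",
      np.sqrt(np.mean((recon[~mask] - texture[~mask]) ** 2)))
```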
NASA Astrophysics Data System (ADS)
Chen, C. F.; Liang, C. P.; Jang, C. S.; Chen, J. S.
2016-12-01
Groundwater is one of the most important water resources in the Lanyang Plain. Groundwater in parts of the Lanyang Plain contains arsenic at levels that exceed the current Taiwan Environmental Protection Administration (Taiwan EPA) limit of 10 μg/L, posing a serious threat to the safe use of groundwater resources. Such poor water quality can adversely affect drinking water use and lead to human health risks. This study analyzed the potential health risk associated with the ingestion of arsenic-affected groundwater in the arseniasis-endemic Lanyang Plain. Geostatistical approaches are widely used to analyze the spatial variability and distribution of field data under uncertainty, and estimating the spatial distribution of the arsenic contaminant in groundwater is an essential input to the health risk assessment. This study used indicator kriging (IK) and ordinary kriging (OK) to explore the spatial variability of the arsenic pollution parameters, and the differences between the IK and OK estimates were compared. The extent of arsenic pollution was determined spatially, and the target cancer risk (TR) and dose-response were evaluated for ingestion of arsenic in groundwater. On this basis, a zonal management plan for safe groundwater use is formulated. The findings can serve as a planning reference for regional water supply administration and for developing groundwater resources in the Lanyang Plain.
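For context, the target cancer risk (TR) for ingestion of arsenic in groundwater is conventionally computed from the concentration, intake rate, exposure frequency and duration, an oral cancer slope factor, body weight, and averaging time. The helper below encodes that standard expression; the concentration and the default exposure parameters are illustrative and are not values from this study.

```python
def target_cancer_risk(conc_mg_per_L, intake_L_per_day=2.0, exposure_freq=365,
                       exposure_years=30, slope_factor=1.5,
                       body_weight_kg=70.0, averaging_days=70 * 365):
    """Target cancer risk for ingestion:
    TR = (C * IR * EF * ED * SF) / (BW * AT), with C in mg/L and SF the
    oral cancer slope factor in (mg/kg-day)^-1; 1.5 is a value commonly
    used for inorganic arsenic (all defaults here are illustrative)."""
    return (conc_mg_per_L * intake_L_per_day * exposure_freq * exposure_years
            * slope_factor) / (body_weight_kg * averaging_days)

# 10 ug/L (the regulatory limit quoted above) expressed as mg/L:
print(f"TR at 10 ug/L: {target_cancer_risk(0.010):.2e}")
```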
Costs of food waste along the value chain: evidence from South Africa.
Nahman, Anton; de Lange, Willem
2013-11-01
In a previous paper (Nahman et al., 2012), the authors estimated the costs of household food waste in South Africa, based on the market value of the wasted food (edible portion only), as well as the costs of disposal to landfill. In this paper, we extend the analysis by assessing the costs of edible food waste throughout the entire food value chain, from agricultural production through to consumption at the household level. First, food waste at each stage of the value chain was quantified in physical units (tonnes) for various food commodity groups. Then, weighted average representative prices (per tonne) were estimated for each commodity group at each stage of the value chain. Finally, prices were multiplied by quantities, and the resulting values were aggregated across the value chain for all commodity groups. In this way, the total cost of food waste across the food value chain in South Africa was estimated at R61.5 billion per annum (approximately US$7.7 billion), equivalent to 2.1% of South Africa's annual gross domestic product. The bulk of this cost arises from the processing and distribution stages of the fruit and vegetable value chain, as well as the agricultural production and distribution stages of the meat value chain. These results therefore provide an indication of where interventions aimed at reducing food waste should be targeted. Copyright © 2013 Elsevier Ltd. All rights reserved.
Application of Raman microscopy to biodegradable double-walled microspheres.
Widjaja, Effendi; Lee, Wei Li; Loo, Say Chye Joachim
2010-02-15
Raman mapping measurements were performed on the cross section of the ternary-phase biodegradable double-walled microsphere (DWMS) of poly(D,L-lactide-co-glycolide) (50:50) (PLGA), poly(L-lactide) (PLLA), and poly(epsilon-caprolactone) (PCL), which was fabricated by a one-step solvent evaporation method. The collected Raman spectra were subjected to a band-target entropy minimization (BTEM) algorithm in order to reconstruct the pure component spectra of the species observed in this sample. Seven pure component spectral estimates were recovered, and their spatial distributions within DWMS were determined. The first three spectral estimates were identified as PLLA, PLGA 50:50, and PCL, which were the main components in DWMS. The last four spectral estimates were identified as semicrystalline polyglycolic acid (PGA), dichloromethane (DCM), copper-phthalocyanine blue, and calcite, which were the minor components in DWMS. PGA was the decomposition product of PLGA. DCM was the solvent used in DWMS fabrication. Copper-phthalocyanine blue and calcite were the unexpected contaminants. The current result showed that combined Raman microscopy and BTEM analysis can provide a sensitive characterization tool to DWMS, as it can give more specific information on the chemical species present as well as the spatial distributions. This novel analytical method for microsphere characterization can serve as a complementary tool to other more established analytical techniques, such as scanning electron microscopy and optical microscopy.
Evolution of the cerebellum as a neuronal machine for Bayesian state estimation
NASA Astrophysics Data System (ADS)
Paulin, M. G.
2005-09-01
The cerebellum evolved in association with the electric sense and vestibular sense of the earliest vertebrates. Accurate information provided by these sensory systems would have been essential for precise control of orienting behavior in predation. A simple model shows that individual spikes in electrosensory primary afferent neurons can be interpreted as measurements of prey location. Using this result, I construct a computational neural model in which the spatial distribution of spikes in a secondary electrosensory map forms a Monte Carlo approximation to the Bayesian posterior distribution of prey locations given the sense data. The neural circuit that emerges naturally to perform this task resembles the cerebellar-like hindbrain electrosensory filtering circuitry of sharks and other electrosensory vertebrates. The optimal filtering mechanism can be extended to handle dynamical targets observed from a dynamical platform; that is, to construct an optimal dynamical state estimator using spiking neurons. This may provide a generic model of cerebellar computation. Vertebrate motion-sensing neurons have specific fractional-order dynamical characteristics that allow Bayesian state estimators to be implemented elegantly and efficiently, using simple operations with asynchronous pulses, i.e. spikes. The computational neural models described in this paper represent a novel kind of particle filter, using spikes as particles. The models are specific and make testable predictions about computational mechanisms in cerebellar circuitry, while providing a plausible explanation of cerebellar contributions to aspects of motor control, perception and cognition.
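The "spikes as particles" idea can be imitated with an ordinary bootstrap particle filter in which each particle is a hypothesized prey location and each afferent spike is treated as a noisy measurement of that location. The one-dimensional random-walk prey model, the Gaussian spike likelihood, and every constant below are illustrative, not the paper's neural model.

```python
import numpy as np

rng = np.random.default_rng(9)

n_particles, n_steps = 500, 60
process_sd, spike_sd = 0.15, 0.5     # prey motion noise, spike "measurement" noise

prey = 0.0
particles = rng.normal(0.0, 2.0, n_particles)   # prior belief about prey location

for t in range(n_steps):
    prey += rng.normal(0.0, process_sd)                    # prey random walk
    particles += rng.normal(0.0, process_sd, n_particles)  # predict step

    spike_location = prey + rng.normal(0.0, spike_sd)      # one afferent spike
    # Weight particles by the spike likelihood, then resample (update step).
    weights = np.exp(-0.5 * ((particles - spike_location) / spike_sd) ** 2)
    weights /= weights.sum()
    particles = rng.choice(particles, size=n_particles, replace=True, p=weights)

    estimate = particles.mean()                            # posterior mean estimate

print(f"true prey position {prey:.3f}, estimated {estimate:.3f}")
```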
The Rings Survey. I. Hα and H I Velocity Maps of Galaxy NGC 2280
NASA Astrophysics Data System (ADS)
Mitchell, Carl J.; Williams, T. B.; Spekkens, Kristine; Lee-Waddell, K.; Kuzio de Naray, Rachel; Sellwood, J. A.
2015-03-01
Precise measurements of gas kinematics in the disk of a spiral galaxy can be used to estimate its mass distribution. The Southern African Large Telescope has a large collecting area and field of view, and is equipped with a Fabry-Pérot (FP) interferometer that can measure gas kinematics in a galaxy from the Hα line. To take advantage of this capability, we have constructed a sample of 19 nearby spiral galaxies, the RSS Imaging and Spectroscopy Nearby Galaxy Survey, as targets for detailed study of their mass distributions and have collected much of the needed data. In this paper, we present velocity maps produced from Hα FP interferometry and H i aperture synthesis for one of these galaxies, NGC 2280, and show that the two velocity measurements are generally in excellent agreement. Minor differences can mostly be attributed to the different spatial distributions of the excited and neutral gas in this galaxy, but we do detect some anomalous velocities in our Hα velocity map of the kind that have previously been detected in other galaxies. Models produced from our two velocity maps agree well with each other and our estimates of the systemic velocity and projection angles confirm previous measurements of these quantities for NGC 2280. Based in part on observations obtained with the Southern African Large Telescope (SALT) program 2011-3-RU-003.
NASA Astrophysics Data System (ADS)
Ravikumar, Arvind P.; Brandt, Adam R.
2017-04-01
Methane—a short-lived and potent greenhouse gas—presents a unique challenge: it is emitted from a large number of highly distributed and diffuse sources. In this regard, the United States’ Environmental Protection Agency (EPA) has recommended periodic leak detection and repair surveys at oil and gas facilities using optical gas imaging technology. This regulation requires an operator to fix all detected leaks within a set time period. Whether such ‘find-all-fix-all’ policies are effective depends on significant uncertainties in the character of emissions. In this work, we systematically analyze the effect of facility-related and mitigation-related uncertainties on regulation effectiveness. Drawing from multiple publicly-available datasets, we find that: (1) highly-skewed leak-size distributions strongly influence emissions reduction potential; (2) variations in emissions estimates across facilities leads to large variability in mitigation effectiveness; (3) emissions reductions from optical gas imaging-based leak detection programs can range from 15% to over 70%; and (4) while implementation costs are uniformly lower than EPA estimates, benefits from saved gas are highly variable. Combining empirical evidence with model results, we propose four policy options for effective methane mitigation: performance-oriented targets for accelerated emission reductions, flexible policy mechanisms to account for regional variation, technology-agnostic regulations to encourage adoption of the most cost-effective measures, and coordination with other greenhouse gas mitigation policies to reduce unintended spillover effects.
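The first point, that a highly skewed leak-size distribution governs how much a threshold-based survey can mitigate, can be made concrete with a short simulation: draw leak sizes from a heavy-tailed distribution and compute the share of total emissions carried by leaks above a camera's detection limit. The lognormal parameters and the detection threshold below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical heavy-tailed leak-size distribution (emission rate per leak).
leaks = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

detection_limit = np.quantile(leaks, 0.80)   # camera sees only the largest 20%
detectable = leaks >= detection_limit

frac_leaks_found = detectable.mean()
frac_emissions_mitigated = leaks[detectable].sum() / leaks.sum()

print(f"leaks above threshold:       {frac_leaks_found:.0%}")
print(f"emissions from those leaks:  {frac_emissions_mitigated:.0%}")
```

With a heavily skewed distribution like this, repairing only the detectable minority of leaks still removes the large majority of emissions, which is why the shape of the leak-size distribution dominates the reduction potential.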
NASA Astrophysics Data System (ADS)
Kuroki, R.; Yamashiki, Y. A.; Varlamov, S.; Miyazawa, Y.; Gupta, H. V.; Racault, M.; Troselj, J.
2017-12-01
We estimated the effects of extreme fluvial outflow events from river mouths on the salinity distribution in Japanese coastal zones. The targeted extreme event was a typhoon lasting from 06/09/2015 to 12/09/2015, and we generated a set of hourly simulated river outflow data for all Japanese first-class river basins discharging to the Pacific Ocean and the Sea of Japan during this period, using our model "Cell Distributed Runoff Model Version 3.1.1 (CDRMV3.1.1)". The model simulated fresh water discharges for the case of the typhoon passage over Japan. We used these data with a coupled hydrological-oceanographic model, JCOPE-T, developed by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), to estimate the circulation and salinity distribution in Japanese coastal zones. With the model, the coastal oceanic circulation was reproduced adequately, as verified by satellite remote sensing. In addition, we successfully optimized five parameters (soil roughness coefficient, river roughness coefficient, effective porosity, saturated hydraulic conductivity, and effective rainfall) using the Shuffled Complex Evolution (SCE-UA) method developed at the University of Arizona, an optimization method for hydrological models. Increasing the accuracy of peak discharge prediction at river mouths for extreme typhoon events is essential for studying continental-oceanic mutual interaction.
Trajectory prediction for ballistic missiles based on boost-phase LOS measurements
NASA Astrophysics Data System (ADS)
Yeddanapudi, Murali; Bar-Shalom, Yaakov
1997-10-01
This paper addresses the problem of estimating the trajectory of a tactical ballistic missile using line of sight (LOS) measurements from one or more passive sensors (typically satellites). The major difficulties of this problem include the estimation of the unknown time of launch, the incorporation of (inaccurate) target thrust profiles to model the target dynamics during the boost phase, and an overall ill-conditioning of the estimation problem due to poor observability of the target motion via the LOS measurements. We present a robust estimation procedure based on the Levenberg-Marquardt algorithm that provides both the target state estimate and error covariance, taking into consideration the complications mentioned above. An important consideration in the defense against tactical ballistic missiles is the determination of the target position and error covariance at the acquisition range of a surveillance radar in the vicinity of the impact point. We present a systematic procedure to propagate the target state and covariance to a nominal time, when it is within the detection range of a surveillance radar, to obtain a cueing volume. Monte Carlo simulation studies on typical single- and two-sensor scenarios indicate that the proposed algorithms are accurate in terms of the estimates, and the estimator-calculated covariances are consistent with the errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamo, Masashi; Ono, Kyoko; Nakanishi, Junko
2006-05-15
A meta-analysis was conducted to derive age- and gender-specific dose-response relationships between urinary cadmium (Cd) concentration and β₂-microglobulinuria (β₂MG-uria) under environmental exposure. β₂MG-uria was defined by a cutoff point of 1000 μg β₂-microglobulin/g creatinine. We proposed a model for describing the relationships among the interindividual variabilities in urinary Cd concentration, the ratio of Cd concentrations in the target organ and in urine, and the threshold Cd concentration in the target organ. The parameters in the model were determined so that good agreement might be achieved between the prevalence rates of β₂MG-uria reported in the literature and those estimated by the model. In this analysis, only the data from the literature on populations environmentally exposed to Cd were used. Using the model and estimated parameters, the prevalence rate of β₂MG-uria can be estimated for an age- and gender-specific subpopulation for which the distribution of urinary Cd concentrations is known. The maximum permissible level of urinary Cd concentration was defined as the maximum geometric mean of the urinary Cd concentration in an age- and gender-specific subpopulation that would not result in a statistically significant increase in the prevalence rate of β₂MG-uria. This was estimated to be approximately 3 μg/g creatinine for a population in a small geographical area and approximately 2 μg/g creatinine for a nationwide population.
Survey design for lakes and reservoirs in the United States to assess contaminants in fish tissue.
Olsen, Anthony R; Snyder, Blaine D; Stahl, Leanne L; Pitt, Jennifer L
2009-03-01
The National Lake Fish Tissue Study (NLFTS) was the first survey of fish contamination in lakes and reservoirs in the 48 conterminous states based on a probability survey design. This study included the largest set (268) of persistent, bioaccumulative, and toxic (PBT) chemicals ever studied in predator and bottom-dwelling fish species. The U.S. Environmental Protection Agency (USEPA) implemented the study in cooperation with states, tribal nations, and other federal agencies, with field collection occurring at 500 lakes and reservoirs over a four-year period (2000-2003). The sampled lakes and reservoirs were selected using a spatially balanced unequal probability survey design from 270,761 lake objects in USEPA's River Reach File Version 3 (RF3). The survey design selected 900 lake objects, with a reserve sample of 900, equally distributed across six lake area categories. A total of 1,001 lake objects were evaluated to identify 500 lake objects that met the study's definition of a lake and could be accessed for sampling. Based on the 1,001 evaluated lakes, it was estimated that a target population of 147,343 (+/-7% with 95% confidence) lakes and reservoirs met the NLFTS definition of a lake. Of the estimated 147,343 target lakes, 47% were estimated not to be sampleable either due to landowner access denial (35%) or due to physical barriers (12%). It was estimated that a sampled population of 78,664 (+/-12% with 95% confidence) lakes met the NLFTS lake definition, had either predator or bottom-dwelling fish present, and could be sampled.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brualla, Lorenzo, E-mail: lorenzo.brualla@uni-due.de; Zaragoza, Francisco J.; Sempau, Josep
Purpose: External beam radiotherapy is the only conservative curative approach for Stage I non-Hodgkin lymphomas of the conjunctiva. The target volume is geometrically complex because it includes the eyeball and lid conjunctiva. Furthermore, the target volume is adjacent to radiosensitive structures, including the lens, lacrimal glands, cornea, retina, and papilla. The radiotherapy planning and optimization requires accurate calculation of the dose in these anatomical structures that are much smaller than the structures traditionally considered in radiotherapy. Neither conventional treatment planning systems nor dosimetric measurements can reliably determine the dose distribution in these small irradiated volumes. Methods and Materials: The Monte Carlo simulations of a Varian Clinac 2100 C/D and human eye were performed using the PENELOPE and PENEASYLINAC codes. Dose distributions and dose volume histograms were calculated for the bulbar conjunctiva, cornea, lens, retina, papilla, lacrimal gland, and anterior and posterior hemispheres. Results: The simulated results allow choosing the most adequate treatment setup configuration, which is an electron beam energy of 6 MeV with additional bolus and collimation by a cerrobend block with a central cylindrical hole of 3.0 cm diameter and central cylindrical rod of 1.0 cm diameter. Conclusions: Monte Carlo simulation is a useful method to calculate the minute dose distribution in ocular tissue and to optimize the electron irradiation technique in highly critical structures. Using a voxelized eye phantom based on patient computed tomography images, the dose distribution can be estimated with a standard statistical uncertainty of less than 2.4% in 3 min using a computing cluster with 30 cores, which makes this planning technique clinically relevant.
Model averaging in linkage analysis.
Matthysse, Steven
2006-06-05
Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.
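The detailed-balance argument in this abstract (start the chain with an exact draw from the target so that every later state is also target-distributed) can be illustrated with a toy one-dimensional example. The sketch below is not the linkage-analysis likelihood; the target density, envelope constant and step size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def target_pdf(x):
    """Toy unnormalized target density (two-component Gaussian mixture)."""
    return 0.6 * np.exp(-0.5 * (x - 1.0) ** 2) + 0.4 * np.exp(-0.5 * ((x + 2.0) / 0.7) ** 2)

def proposal_pdf(x):
    """Wide Gaussian envelope density used for rejection sampling."""
    return np.exp(-0.5 * (x / 3.0) ** 2) / (3.0 * np.sqrt(2.0 * np.pi))

M = 10.0  # chosen so that M * proposal_pdf(x) >= target_pdf(x) for all x

def rejection_sample():
    """One exact draw from the (normalized) target distribution."""
    while True:
        x = rng.normal(0.0, 3.0)
        if rng.random() < target_pdf(x) / (M * proposal_pdf(x)):
            return x

def metropolis_chain(x0, n_steps=5000, step=1.0):
    """Random-walk Metropolis chain with target_pdf as its equilibrium density.
    Because x0 is an exact draw from the target, detailed balance implies every
    subsequent state is also marginally distributed according to the target."""
    x, chain = x0, [x0]
    for _ in range(n_steps):
        prop = x + rng.normal(0.0, step)
        if rng.random() < min(1.0, target_pdf(prop) / target_pdf(x)):
            x = prop
        chain.append(x)
    return np.array(chain)

samples = metropolis_chain(rejection_sample())
print(f"sample mean ~ {samples.mean():.2f}, sd ~ {samples.std():.2f}")
```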
Aging persons' estimates of vehicular motion.
Schiff, W; Oldak, R; Shah, V
1992-12-01
Estimated arrival times of moving autos were examined in relation to viewer age, gender, motion trajectory, and velocity. Direct push-button judgments were compared with verbal estimates derived from velocity and distance, which were based on assumptions that perceivers compute arrival time from perceived distance and velocity. Experiment 1 showed that direct estimates of younger Ss were most accurate. Older women made the shortest (highly cautious) estimates of when cars would arrive. Verbal estimates were much lower than direct estimates, with little correlation between them. Experiment 2 extended target distances and velocities of targets, with the results replicating the main findings of Experiment 1. Judgment accuracy increased with target velocity, and verbal estimates were again poorer estimates of arrival time than direct ones, with different patterns of findings. Using verbal estimates to approximate judgments in traffic situations appears questionable.
A parallel implementation of a multisensor feature-based range-estimation method
NASA Technical Reports Server (NTRS)
Suorsa, Raymond E.; Sridhar, Banavar
1993-01-01
There are many proposed vision based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and shared-memory parallel computer.
Hart-Smith, Gene; Yagoub, Daniel; Tay, Aidan P.; Pickford, Russell; Wilkins, Marc R.
2016-01-01
All large scale LC-MS/MS post-translational methylation site discovery experiments require methylpeptide spectrum matches (methyl-PSMs) to be identified at acceptably low false discovery rates (FDRs). To meet estimated methyl-PSM FDRs, methyl-PSM filtering criteria are often determined using the target-decoy approach. The efficacy of this methyl-PSM filtering approach has, however, yet to be thoroughly evaluated. Here, we conduct a systematic analysis of methyl-PSM FDRs across a range of sample preparation workflows (each differing in their exposure to the alcohols methanol and isopropyl alcohol) and mass spectrometric instrument platforms (each employing a different mode of MS/MS dissociation). Through 13CD3-methionine labeling (heavy-methyl SILAC) of Saccharomyces cerevisiae cells and in-depth manual data inspection, accurate lists of true positive methyl-PSMs were determined, allowing methyl-PSM FDRs to be compared with target-decoy approach-derived methyl-PSM FDR estimates. These results show that global FDR estimates produce extremely unreliable methyl-PSM filtering criteria; we demonstrate that this is an unavoidable consequence of the high number of amino acid combinations capable of producing peptide sequences that are isobaric to methylated peptides of a different sequence. Separate methyl-PSM FDR estimates were also found to be unreliable due to prevalent sources of false positive methyl-PSMs that produce high peptide identity score distributions. Incorrect methylation site localizations, peptides containing cysteinyl-S-β-propionamide, and methylated glutamic or aspartic acid residues can partially, but not wholly, account for these false positive methyl-PSMs. Together, these results indicate that the target-decoy approach is an unreliable means of estimating methyl-PSM FDRs and methyl-PSM filtering criteria. We suggest that orthogonal methylpeptide validation (e.g. heavy-methyl SILAC or its offshoots) should be considered a prerequisite for obtaining high confidence methyl-PSMs in large scale LC-MS/MS methylation site discovery experiments and make recommendations on how to reduce methyl-PSM FDRs in samples not amenable to heavy isotope labeling. Data are available via ProteomeXchange with the data identifier PXD002857. PMID:26699799
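For readers unfamiliar with the target-decoy approach being evaluated here, the following sketch shows the standard global FDR estimate (decoy matches divided by target matches above a score threshold) on synthetic PSM scores. The score distributions and the 1% FDR level are illustrative, and the sketch does not reproduce the heavy-methyl SILAC validation discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical PSM scores: matches against the target database are a mixture
# of correct and incorrect identifications; matches against the reversed or
# shuffled decoy database are incorrect by construction.
target_scores = rng.normal(30, 8, size=2000)
decoy_scores = rng.normal(20, 6, size=2000)

def estimated_fdr(threshold):
    """Classic target-decoy estimate: decoys passing / targets passing."""
    n_targets = np.sum(target_scores >= threshold)
    n_decoys = np.sum(decoy_scores >= threshold)
    return n_decoys / max(n_targets, 1)

def threshold_for_fdr(desired_fdr=0.01):
    """Smallest score cut-off at which the estimated FDR drops below the level."""
    for t in np.sort(np.concatenate([target_scores, decoy_scores])):
        if estimated_fdr(t) <= desired_fdr:
            return t
    return np.inf

t = threshold_for_fdr(0.01)
print(f"score cut-off ~ {t:.1f}, estimated FDR = {estimated_fdr(t):.3f}")
```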
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions, the prior and the posterior; the posterior distribution depends on the choice of prior. Jeffreys' prior is a non-informative prior distribution, used when no information about the parameters is available. Combining the non-informative Jeffreys' prior with the sample information yields the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with a non-informative Jeffreys' prior. The estimates of β and Σ are obtained as the expected values of the corresponding marginal posterior distributions, which are multivariate normal for β and inverse Wishart for Σ. Because these expected values involve integrals that are difficult to evaluate in closed form, random samples are instead generated according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
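A minimal sketch of such a Gibbs sampler, assuming the commonly used conditional posteriors under a Jeffreys prior (inverse-Wishart for Σ given β, matrix normal for β given Σ); the simulated data, dimensions and iteration counts are illustrative only and are not taken from the paper.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(3)

# --- simulate a small multivariate regression problem (illustrative data) ---
n, p, q = 200, 3, 2                       # observations, predictors, responses
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
B_true = rng.normal(size=(p, q))
Sigma_true = np.array([[1.0, 0.3], [0.3, 0.5]])
Y = X @ B_true + rng.multivariate_normal(np.zeros(q), Sigma_true, size=n)

# --- Gibbs sampler under a non-informative Jeffreys prior ---
XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y                 # OLS estimate (conditional posterior mean)
L_u = np.linalg.cholesky(XtX_inv)

def gibbs(n_iter=2000, burn=500):
    B, draws_B, draws_S = B_hat.copy(), [], []
    for it in range(n_iter):
        resid = Y - X @ B
        # Sigma | B, Y ~ inverse-Wishart(df = n, scale = residual cross-product)
        Sigma = invwishart.rvs(df=n, scale=resid.T @ resid, random_state=rng)
        # B | Sigma, Y: matrix-normal draw, vec(B) ~ N(vec(B_hat), Sigma (x) (X'X)^-1)
        Z = rng.normal(size=(p, q))
        B = B_hat + L_u @ Z @ np.linalg.cholesky(Sigma).T
        if it >= burn:
            draws_B.append(B)
            draws_S.append(Sigma)
    return np.mean(draws_B, axis=0), np.mean(draws_S, axis=0)

B_post, Sigma_post = gibbs()
print("posterior mean of B:\n", np.round(B_post, 2))
```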
Modeling integrated water user decisions in intermittent supply systems
NASA Astrophysics Data System (ADS)
Rosenberg, David E.; Tarawneh, Tarek; Abdel-Khaleq, Rania; Lund, Jay R.
2007-07-01
We apply systems analysis to estimate household water use in an intermittent supply system considering numerous interdependent water user behaviors. Some 39 household actions include conservation; improving local storage or water quality; and accessing sources having variable costs, availabilities, reliabilities, and qualities. A stochastic optimization program with recourse decisions identifies the infrastructure investments and short-term coping actions a customer can adopt to cost-effectively respond to a probability distribution of piped water availability. Monte Carlo simulations show effects for a population of customers. Model calibration reproduces the distribution of billed residential water use in Amman, Jordan. Parametric analyses suggest economic and demand responses to increased availability and alternative pricing. It also suggests potential market penetration for conservation actions, associated water savings, and subsidies to entice further adoption. We discuss new insights to size, target, and finance conservation.
Kozma, Robert; Wang, Lan; Iftekharuddin, Khan; McCracken, Ernest; Khan, Muhammad; Islam, Khandakar; Bhurtel, Sushil R.; Demirer, R. Murat
2012-01-01
The feasibility of using Commercial Off-The-Shelf (COTS) sensor nodes is studied in a distributed network, aiming at dynamic surveillance and tracking of ground targets. Data acquisition by low-cost (<$50 US) miniature low-power radar through a wireless mote is described. We demonstrate the detection, ranging and velocity estimation, classification and tracking capabilities of the mini-radar, and compare results to simulations and manual measurements. Furthermore, we supplement the radar output with other sensor modalities, such as acoustic and vibration sensors. This method provides innovative solutions for detecting, identifying, and tracking vehicles and dismounts over a wide area in noisy conditions. This study presents a step towards distributed intelligent decision support and demonstrates effectiveness of small cheap sensors, which can complement advanced technologies in certain real-life scenarios. PMID:22438713
Human thyroid specimen imaging by fluorescent x-ray computed tomography with synchrotron radiation
NASA Astrophysics Data System (ADS)
Takeda, Tohoru; Yu, Quanwen; Yashiro, Toru; Yuasa, Tetsuya; Hasegawa, Yasuo; Itai, Yuji; Akatsuka, Takao
1999-09-01
Fluorescent x-ray computed tomography (FXCT) is being developed to detect non-radioactive contrast materials in living specimens. The FXCT system consists of a silicon (111) channel cut monochromator, an x-ray slit and a collimator for fluorescent x ray detection, a scanning table for the target organ and an x-ray detector for fluorescent x-ray and transmission x-ray. To reduce Compton scattering overlapped on the fluorescent K(alpha) line, incident monochromatic x-ray was set at 37 keV. The FXCT clearly imaged a human thyroid gland and iodine content was estimated quantitatively. In a case of hyperthyroidism, the two-dimensional distribution of iodine content was not uniform, and thyroid cancer had a small amount of iodine. FXCT can be used to detect iodine within thyroid gland quantitatively and to delineate its distribution.
Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.
2013-01-01
A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777
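The idea of estimating the respiratory rate only where the PPG-derived features are linearly coupled can be sketched with ordinary magnitude-squared coherence as a simplified stand-in for the cross time-frequency analysis used in the paper. The synthetic amplitude and width series, sampling rate and coherence threshold below are assumptions.

```python
import numpy as np
from scipy.signal import coherence, welch

rng = np.random.default_rng(4)
fs = 4.0                                  # Hz, typical rate of beat-to-beat PPG features
t = np.arange(0, 300, 1 / fs)
f_resp = 0.25                             # true respiratory rate, Hz (15 breaths/min)

# Two PPG-derived respiratory surrogates (e.g. pulse amplitude and pulse width),
# both modulated at the respiratory rate plus independent noise.
amp = np.sin(2 * np.pi * f_resp * t) + 0.8 * rng.normal(size=t.size)
width = np.sin(2 * np.pi * f_resp * t + 0.4) + 0.8 * rng.normal(size=t.size)

# Keep only the band in which the two features are strongly (linearly) coupled ...
f, Cxy = coherence(amp, width, fs=fs, nperseg=256)
band = (f > 0.1) & (f < 0.5) & (Cxy > 0.5)

# ... and estimate the respiratory rate from the averaged spectrum in that band.
f_a, P_a = welch(amp, fs=fs, nperseg=256)
f_w, P_w = welch(width, fs=fs, nperseg=256)
P_comb = (P_a + P_w) / 2
f_est = f[band][np.argmax(P_comb[band])] if band.any() else np.nan
print(f"estimated respiratory rate: {f_est:.3f} Hz (true {f_resp} Hz)")
```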
Arthropod Distribution in a Tropical Rainforest: Tackling a Four Dimensional Puzzle.
Basset, Yves; Cizek, Lukas; Cuénoud, Philippe; Didham, Raphael K; Novotny, Vojtech; Ødegaard, Frode; Roslin, Tomas; Tishechkin, Alexey K; Schmidl, Jürgen; Winchester, Neville N; Roubik, David W; Aberlenc, Henri-Pierre; Bail, Johannes; Barrios, Héctor; Bridle, Jonathan R; Castaño-Meneses, Gabriela; Corbara, Bruno; Curletti, Gianfranco; Duarte da Rocha, Wesley; De Bakker, Domir; Delabie, Jacques H C; Dejean, Alain; Fagan, Laura L; Floren, Andreas; Kitching, Roger L; Medianero, Enrique; Gama de Oliveira, Evandro; Orivel, Jérôme; Pollet, Marc; Rapp, Mathieu; Ribeiro, Sérvio P; Roisin, Yves; Schmidt, Jesper B; Sørensen, Line; Lewinsohn, Thomas M; Leponce, Maurice
2015-01-01
Quantifying the spatio-temporal distribution of arthropods in tropical rainforests represents a first step towards scrutinizing the global distribution of biodiversity on Earth. To date most studies have focused on narrow taxonomic groups or lack a design that allows partitioning of the components of diversity. Here, we consider an exceptionally large dataset (113,952 individuals representing 5,858 species), obtained from the San Lorenzo forest in Panama, where the phylogenetic breadth of arthropod taxa was surveyed using 14 protocols targeting the soil, litter, understory, lower and upper canopy habitats, replicated across seasons in 2003 and 2004. This dataset is used to explore the relative influence of horizontal, vertical and seasonal drivers of arthropod distribution in this forest. We considered arthropod abundance, observed and estimated species richness, additive decomposition of species richness, multiplicative partitioning of species diversity, variation in species composition, species turnover and guild structure as components of diversity. At the scale of our study (2 km of distance, 40 m in height and 400 days), the effects related to the vertical and seasonal dimensions were most important. Most adult arthropods were collected from the soil/litter or the upper canopy and species richness was highest in the canopy. We compared the distribution of arthropods and trees within our study system. Effects related to the seasonal dimension were stronger for arthropods than for trees. We conclude that: (1) models of beta diversity developed for tropical trees are unlikely to be applicable to tropical arthropods; (2) it is imperative that estimates of global biodiversity derived from mass collecting of arthropods in tropical rainforests embrace the strong vertical and seasonal partitioning observed here; and (3) given the high species turnover observed between seasons, global climate change may have severe consequences for rainforest arthropods.
The redshift distribution of cosmological samples: a forward modeling approach
NASA Astrophysics Data System (ADS)
Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina
2017-08-01
Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \\textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
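The MCCL/ABC machinery is far more elaborate than can be shown here, but the following toy sketch conveys the forward-modeling logic: draw model parameters from a prior, simulate a survey, and keep parameter sets whose summary statistics match the observed ones. The parametric n(z) shape, summary statistics and tolerance are invented for illustration and do not correspond to UFig outputs.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_survey(alpha, z_max=3.0, n_gal=5000):
    """Toy forward model: draws galaxy redshifts from a parametric n(z) shape
    and returns simple summary statistics (stand-ins for image-level statistics)."""
    z = rng.gamma(shape=2.0, scale=alpha, size=n_gal)
    z = z[z < z_max]
    return z, np.array([z.mean(), z.std()])

# "Observed" summaries produced by a fiducial parameter (unknown to the fit)
z_obs, s_obs = simulate_survey(alpha=0.35)

# ABC rejection: keep prior draws whose simulated summaries match the data
accepted = []
for _ in range(3000):
    alpha = rng.uniform(0.1, 0.8)          # prior over the model parameter
    z_sim, s_sim = simulate_survey(alpha)
    if np.linalg.norm(s_sim - s_obs) < 0.02:
        accepted.append((alpha, z_sim))

alphas = np.array([a for a, _ in accepted])
print(f"{len(accepted)} accepted models, alpha = {alphas.mean():.3f} +/- {alphas.std():.3f}")
# The redshift histograms of the accepted simulations give a set of n(z)
# estimates with uncertainties, analogous to the posterior n(z) in the paper.
```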
Arthropod Distribution in a Tropical Rainforest: Tackling a Four Dimensional Puzzle
Basset, Yves; Cizek, Lukas; Cuénoud, Philippe; Didham, Raphael K.; Novotny, Vojtech; Ødegaard, Frode; Roslin, Tomas; Tishechkin, Alexey K.; Schmidl, Jürgen; Winchester, Neville N.; Roubik, David W.; Aberlenc, Henri-Pierre; Bail, Johannes; Barrios, Héctor; Bridle, Jonathan R.; Castaño-Meneses, Gabriela; Corbara, Bruno; Curletti, Gianfranco; Duarte da Rocha, Wesley; De Bakker, Domir; Delabie, Jacques H. C.; Dejean, Alain; Fagan, Laura L.; Floren, Andreas; Kitching, Roger L.; Medianero, Enrique; Gama de Oliveira, Evandro; Orivel, Jérôme; Pollet, Marc; Rapp, Mathieu; Ribeiro, Sérvio P.; Roisin, Yves; Schmidt, Jesper B.; Sørensen, Line; Lewinsohn, Thomas M.; Leponce, Maurice
2015-01-01
Quantifying the spatio-temporal distribution of arthropods in tropical rainforests represents a first step towards scrutinizing the global distribution of biodiversity on Earth. To date most studies have focused on narrow taxonomic groups or lack a design that allows partitioning of the components of diversity. Here, we consider an exceptionally large dataset (113,952 individuals representing 5,858 species), obtained from the San Lorenzo forest in Panama, where the phylogenetic breadth of arthropod taxa was surveyed using 14 protocols targeting the soil, litter, understory, lower and upper canopy habitats, replicated across seasons in 2003 and 2004. This dataset is used to explore the relative influence of horizontal, vertical and seasonal drivers of arthropod distribution in this forest. We considered arthropod abundance, observed and estimated species richness, additive decomposition of species richness, multiplicative partitioning of species diversity, variation in species composition, species turnover and guild structure as components of diversity. At the scale of our study (2km of distance, 40m in height and 400 days), the effects related to the vertical and seasonal dimensions were most important. Most adult arthropods were collected from the soil/litter or the upper canopy and species richness was highest in the canopy. We compared the distribution of arthropods and trees within our study system. Effects related to the seasonal dimension were stronger for arthropods than for trees. We conclude that: (1) models of beta diversity developed for tropical trees are unlikely to be applicable to tropical arthropods; (2) it is imperative that estimates of global biodiversity derived from mass collecting of arthropods in tropical rainforests embrace the strong vertical and seasonal partitioning observed here; and (3) given the high species turnover observed between seasons, global climate change may have severe consequences for rainforest arthropods. PMID:26633187
The redshift distribution of cosmological samples: a forward modeling approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam
Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
On the interplay effects with proton scanning beams in stage III lung cancer
Li, Yupeng; Kardar, Laleh; Li, Xiaoqiang; Li, Heng; Cao, Wenhua; Chang, Joe Y.; Liao, Li; Zhu, Ronald X.; Sahoo, Narayan; Gillin, Michael; Liao, Zhongxing; Komaki, Ritsuko; Cox, James D.; Lim, Gino; Zhang, Xiaodong
2014-01-01
Purpose: To assess the dosimetric impact of interplay between spot-scanning proton beam and respiratory motion in intensity-modulated proton therapy (IMPT) for stage III lung cancer. Methods: Eleven patients were sampled from 112 patients with stage III nonsmall cell lung cancer to well represent the distribution of 112 patients in terms of target size and motion. Clinical target volumes (CTVs) and planning target volumes (PTVs) were defined according to the authors' clinical protocol. Uniform and realistic breathing patterns were considered along with regular- and hypofractionation scenarios. The dose contributed by a spot was fully calculated on the computed tomography (CT) images corresponding to the respiratory phase that the spot is delivered, and then accumulated to the reference phase of the 4DCT to generate the dynamic dose that provides an estimation of what might be delivered under the influence of interplay effect. The dynamic dose distributions at different numbers of fractions were compared with the corresponding 4D composite dose which is the equally weighted average of the doses, respectively, computed on respiratory phases of a 4DCT image set. Results: Under regular fractionation, the average and maximum differences in CTV coverage between the 4D composite and dynamic doses after delivery of all 35 fractions were no more than 0.2% and 0.9%, respectively. The maximum differences between the two dose distributions for the maximum dose to the spinal cord, heart V40, esophagus V55, and lung V20 were 1.2 Gy, 0.1%, 0.8%, and 0.4%, respectively. Although relatively large differences in single fraction, correlated with small CTVs relative to motions, were observed, the authors' biological response calculations suggested that this interfractional dose variation may have limited biological impact. Assuming a hypofractionation scenario, the differences between the 4D composite and dynamic doses were well confined even for single fraction. Conclusions: Despite the presence of interplay effect, the delivered dose may be reliably estimated using the 4D composite dose. In general the interplay effect may not be a primary concern with IMPT for lung cancers for the authors' institution. The described interplay analysis tool may be used to provide additional confidence in treatment delivery. PMID:24506612
Monte Carlo simulations for angular and spatial distributions in therapeutic-energy proton beams
NASA Astrophysics Data System (ADS)
Lin, Yi-Chun; Pan, C. Y.; Chiang, K. J.; Yuan, M. C.; Chu, C. H.; Tsai, Y. W.; Teng, P. K.; Lin, C. H.; Chao, T. C.; Lee, C. C.; Tung, C. J.; Chen, A. E.
2017-11-01
The purpose of this study is to compare the angular and spatial distributions of therapeutic-energy proton beams obtained from the FLUKA, GEANT4 and MCNP6 Monte Carlo codes. The Monte Carlo simulations of proton beams passing through two thin targets and a water phantom were investigated to compare the primary and secondary proton fluence distributions and dosimetric differences among these codes. The angular fluence distributions, central axis depth-dose profiles, and lateral distributions of the Bragg peak cross-field were calculated to compare the proton angular and spatial distributions and energy deposition. Benchmark verifications from three different Monte Carlo simulations could be used to evaluate the residual proton fluence for the mean range and to estimate the depth and lateral dose distributions and the characteristic depths and lengths along the central axis as the physical indices corresponding to the evaluation of treatment effectiveness. The results showed a general agreement among codes, except that some deviations were found in the penumbra region. These calculated results are also particularly helpful for understanding primary and secondary proton components for stray radiation calculation and reference proton standard determination, as well as for determining lateral dose distribution performance in proton small-field dosimetry. By demonstrating these calculations, this work could serve as a guide to the recent field of Monte Carlo methods for therapeutic-energy protons.
Climate Change and the Potential Distribution of an Invasive Shrub, Lantana camara L
Taylor, Subhashni; Kumar, Lalit; Reid, Nick; Kriticos, Darren J.
2012-01-01
The threat posed by invasive species, in particular weeds, to biodiversity may be exacerbated by climate change. Lantana camara L. (lantana) is a woody shrub that is highly invasive in many countries of the world. It has a profound economic and environmental impact worldwide, including Australia. Knowledge of the likely potential distribution of this invasive species under current and future climate will be useful in planning better strategies to manage the invasion. A process-oriented niche model of L. camara was developed using CLIMEX to estimate its potential distribution under current and future climate scenarios. The model was calibrated using data from several knowledge domains, including phenological observations and geographic distribution records. The potential distribution of lantana under historical climate exceeded the current distribution in some areas of the world, notably Africa and Asia. Under future scenarios, the climatically suitable areas for L. camara globally were projected to contract. However, some areas were identified in North Africa, Europe and Australia that may become climatically suitable under future climates. In South Africa and China, its potential distribution could expand further inland. These results can inform strategic planning by biosecurity agencies, identifying areas to target for eradication or containment. Distribution maps of risk of potential invasion can be useful tools in public awareness campaigns, especially in countries that have been identified as becoming climatically suitable for L. camara under the future climate scenarios. PMID:22536408
Degree of target utilization influences the location of movement endpoint distributions.
Slifkin, Andrew B; Eder, Jeffrey R
2017-03-01
According to dominant theories of motor control, speed and accuracy are optimized when, on the average, movement endpoints are located at the target center and when the variability of the movement endpoint distributions is matched to the width of the target (viz., Meyer, Abrams, Kornblum, Wright, & Smith, 1988). The current study tested those predictions. According to the speed-accuracy trade-off, expanding the range of variability to the amount permitted by the limits of the target boundaries allows for maximization of movement speed while centering the distribution on the target center prevents movement errors that would have occurred had the distribution been off center. Here, participants (N=20) were required to generate 100 consecutive targeted hand movements under each of 15 unique conditions: There were three movement amplitude requirements (80, 160, 320mm) and within each there were five target widths (5, 10, 20, 40, 80mm). According to the results, it was only at the smaller target widths (5, 10mm) that movement endpoint distributions were centered on the target center and the range of movement endpoint variability matched the range specified by the target boundaries. As target width increased (20, 40, 80mm), participants increasingly undershot the target center and the range of movement endpoint variability increasingly underestimated the variability permitted by the target region. The degree of target center undershooting was strongly predicted by the difference between the size of the target and the amount of movement endpoint variability, i.e., the amount of unused space in the target. The results suggest that participants have precise knowledge of their variability relative to that permitted by the target, and they use that knowledge to systematically reduce the travel distance to targets. The reduction in travel distance across the larger target widths might have resulted in greater cost savings than those associated with increases in speed. Copyright © 2017. Published by Elsevier B.V.
Iramina, Hiraku; Nakamura, Mitsuhiro; Iizuka, Yusuke; Mitsuyoshi, Takamasa; Matsuo, Yukinori; Mizowaki, Takashi; Kanno, Ikuo
2018-04-19
During therapeutic beam irradiation, an unvisualized three-dimensional (3D) target position should be estimated using an external surrogate with an estimation model. Training periods for the developed model, which requires no additional imaging during beam irradiation, were optimized using clinical data. Dual-source 4D-CBCT projection data for 20 lung cancer patients were used for validation. Each patient underwent one to three scans. The actual target positions of each scan were divided into two equal parts: one for the modeling and the other for the validating session. A quadratic target position estimation equation was constructed during the modeling session. Various training periods for the session, i.e., modeling periods (T_M), were employed: T_M ∈ {5, 10, 15, 25, 35} s. First, the equation was used to estimate target positions in the validating session of the same scan (intra-scan estimations). Second, the equation was then used to estimate target positions in the validating session of another temporally different scan (inter-scan estimations). The baseline drift of the surrogate and target between scans was corrected. Various training periods for the baseline drift correction, i.e., correction periods (T_C), were employed: T_C ∈ {5, 10, 15; T_C ≤ T_M} s. Evaluations were conducted with and without the correction. The difference between the actual and estimated target positions was evaluated by the root-mean-square error (RMSE). The range of mean respiratory period and 3D motion amplitude of the target was 2.4-13.0 s and 2.8-34.2 mm, respectively. On intra-scan estimation, the median 3D RMSE was within 1.5-2.1 mm, supported by previous studies. On inter-scan estimation, the median elapsed time between scans was 10.1 min. All T_M values exhibited 75th percentile 3D RMSEs of 5.0-6.4 mm due to baseline drift of the surrogate and the target. After the correction, those for each T_M fell by 1.4-2.3 mm. The median 3D RMSE for both a 10-s T_M and a 10-s T_C was 2.4 mm, and performance plateaued when the two training periods exceeded 10 s. A widely applicable estimation model for 3D target positions during beam irradiation was developed. The optimal T_M and T_C for the model were both 10 s, to allow for more than one respiratory cycle. UMIN000014825. Registered: 11 August 2014.
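A sketch of the kind of surrogate-based quadratic estimation model described above, fitted over a short modeling period T_M and scored by 3D RMSE on the remaining data; the synthetic surrogate, target trajectories and training length are assumptions, not the 4D-CBCT data used in the study.

```python
import numpy as np

rng = np.random.default_rng(6)
fs, T = 10.0, 60.0                          # Hz, seconds of data
t = np.arange(0, T, 1 / fs)
surrogate = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.normal(size=t.size)

# Hypothetical 3D target motion correlated (nonlinearly) with the surrogate
target = np.column_stack([
    8.0 * surrogate + 1.5 * surrogate ** 2,   # SI (mm)
    3.0 * surrogate,                          # AP (mm)
    1.0 * surrogate ** 2,                     # LR (mm)
]) + 0.3 * rng.normal(size=(t.size, 3))

def fit_quadratic(s, y):
    """Least-squares fit of y ~ a + b*s + c*s^2 for each target coordinate."""
    A = np.column_stack([np.ones_like(s), s, s ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

T_M = 10.0                                   # modeling (training) period, seconds
train = t < T_M
coef = fit_quadratic(surrogate[train], target[train])

A_all = np.column_stack([np.ones(t.size), surrogate, surrogate ** 2])
pred = A_all @ coef
rmse_3d = np.sqrt(np.mean(np.sum((pred[~train] - target[~train]) ** 2, axis=1)))
print(f"validation 3D RMSE with a {T_M:.0f}-s training period: {rmse_3d:.2f} mm")
```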
NASA Technical Reports Server (NTRS)
Tilton, J. C.; Swain, P. H. (Principal Investigator); Vardeman, S. B.
1981-01-01
A key input to a statistical classification algorithm, which exploits the tendency of certain ground cover classes to occur more frequently in some spatial context than in others, is a statistical characterization of the context: the context distribution. An unbiased estimator of the context distribution is discussed which, besides having the advantage of statistical unbiasedness, has the additional advantage over other estimation techniques of being amenable to an adaptive implementation in which the context distribution estimate varies according to local contextual information. Results from applying the unbiased estimator to the contextual classification of three real LANDSAT data sets are presented and contrasted with results from non-contextual classifications and from contextual classifications utilizing other context distribution estimation techniques.
Gama, Elvis; Were, Vincent; Ouma, Peter; Desai, Meghna; Niessen, Louis; Buff, Ann M; Kariuki, Simon
2016-11-21
Historically, Kenya has used various distribution models for long-lasting insecticide-treated bed nets (LLINs) with variable results in population coverage. The models presently vary widely in scale, target population and strategy. There is limited information to determine the best combination of distribution models, which will lead to sustained high coverage and are operationally efficient and cost-effective. Standardised cost information is needed in combination with programme effectiveness estimates to judge the efficiency of LLIN distribution models and options for improvement in implementing malaria control programmes. The study aims to address the information gap, estimating distribution cost and the effectiveness of different LLIN distribution models, and comparing them in an economic evaluation. Evaluation of cost and coverage will be determined for 5 different distribution models in Busia County, an area of perennial malaria transmission in western Kenya. Cost data will be collected retrospectively from health facilities, the Ministry of Health, donors and distributors. Programme-effectiveness data, defined as the number of people with access to an LLIN per 1000 population, will be collected through triangulation of data from a nationally representative, cross-sectional malaria survey, a cross-sectional survey administered to a subsample of beneficiaries in Busia County and LLIN distributors' records. Descriptive statistics and regression analysis will be used for the evaluation. A cost-effectiveness analysis will be performed from a health-systems perspective, and cost-effectiveness ratios will be calculated using bootstrapping techniques. The study has been evaluated and approved by Kenya Medical Research Institute, Scientific and Ethical Review Unit (SERU number 2997). All participants will provide written informed consent. The findings of this economic evaluation will be disseminated through peer-reviewed publications. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
NASA Astrophysics Data System (ADS)
Singh, M. K.; Soma, A. K.; Pathak, Ramji; Singh, V.
2014-03-01
This article focuses on multiplicity distributions of shower particles and target fragments for the interaction of ⁸⁴Kr with a NIKFI BR-2 nuclear emulsion target at a kinetic energy of 1 GeV per nucleon. The experimental multiplicity distributions of shower particles, grey particles, black particles and heavily ionizing particles are well described by the multi-component Erlang distribution of the multi-source thermal model. We observed a linear correlation among the multiplicities of the above-mentioned particles and fragments. Further experimental study shows a saturation of the shower particle multiplicity as the target fragment multiplicity increases.
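To make the term concrete, the sketch below evaluates and samples a multi-component Erlang (integer-shape gamma mixture) multiplicity distribution; the component weights, shapes and scales are illustrative and are not fitted to the emulsion data.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(7)

# A multi-component Erlang distribution is a weighted mixture of Erlang
# (integer-shape gamma) distributions; in the multi-source thermal model each
# component corresponds to one emission source.  Parameters are illustrative.
weights = [0.5, 0.3, 0.2]
shapes = [2, 4, 8]            # number of sub-sources contributing to each component
scales = [1.5, 1.5, 1.5]      # mean contribution per sub-source

def pdf(n):
    """Mixture density evaluated at multiplicity n (treated as continuous)."""
    return sum(w * gamma.pdf(n, a=k, scale=s)
               for w, k, s in zip(weights, shapes, scales))

def sample(size):
    comp = rng.choice(len(weights), p=weights, size=size)
    return rng.gamma(np.asarray(shapes)[comp], np.asarray(scales)[comp])

draws = sample(100_000)
print("mean multiplicity:", round(draws.mean(), 2))
print("approx. P(5 <= n < 10):", round(pdf(np.arange(5, 10)).sum(), 3))
```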
Multiple Target Laser Designator (MTLD)
2007-03-01
Report fragment (only section headings are recoverable): Optimized Liquid Crystal Scanning Element; Optimize the Nonimaging Predictive Algorithm for Target Ranging, Tracking, and Position Estimation; 3.0 Progress This Quarter; 3.1 Optimization of Nonimaging Holographic Antenna for Target Tracking and Position Estimation (Task 6).
Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Sanner, Robert M.
2006-01-01
Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order to safely dock with the chaser vehicle. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Algan, O; Ahmad, S
Purpose: To model patient motion and produce four-dimensional (4D) optimized dose distributions that consider motion artifacts in the dose calculation during the treatment planning process. Methods: An algorithm for dose calculation is developed in which patient motion is considered at the treatment planning stage. First, optimal dose distributions are calculated for the stationary target volume, where the dose distributions are optimized considering intensity-modulated radiation therapy (IMRT). Second, a convolution kernel is produced from the best-fitting curve that matches the motion trajectory of the patient. Third, the motion kernel is deconvolved with the initial dose distribution optimized for the stationary target to produce a dose distribution that is optimized in four dimensions. This algorithm is tested with measured doses using a mobile phantom that moves with controlled motion patterns. Results: A motion-optimized dose distribution is obtained from the initial dose distribution of the stationary target by deconvolution with the motion kernel of the mobile target. This motion-optimized dose distribution is equivalent to that optimized for the stationary target using IMRT. The motion-optimized and measured dose distributions are tested with the gamma index, with a passing rate of >95% considering 3% dose-difference and 3 mm distance-to-agreement. If the dose delivery per beam takes place over several respiratory cycles, then the spread-out of the dose distributions depends only on the motion amplitude and is not affected by motion frequency and phase. This algorithm is limited to motion amplitudes that are smaller than the length of the target along the direction of motion. Conclusion: An algorithm is developed to optimize dose in 4D. Beyond IMRT, which provides optimal dose coverage for a stationary target, it extends dose optimization to 4D by considering target motion. This algorithm provides an alternative to motion management techniques such as beam-gating or breath-holding and has potential applications in adaptive radiation therapy.
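A one-dimensional sketch of the deconvolution step, assuming a rectangular static dose profile and a sinusoidal motion trajectory whose displacement density serves as the motion kernel; the frequency-domain inversion uses a small Wiener-style regularizer, and all geometry is illustrative rather than taken from the study.

```python
import numpy as np

# 1-D sketch: a dose profile optimized for a stationary target is deconvolved
# with the motion kernel (the probability density of target displacement) so
# that, once blurred by the motion, the delivered dose matches the static plan.
dx = 1.0                                         # mm per voxel
x = np.arange(-60, 60, dx)

static_dose = ((x > -25) & (x < 25)).astype(float)    # idealized target dose

# Motion kernel from a sinusoidal trajectory: density ~ 1/sqrt(A^2 - u^2)
A = 8.0                                          # motion amplitude (mm)
inside = np.abs(x) < A - 1e-6
kernel = np.zeros_like(x)
kernel[inside] = 1.0 / np.sqrt(A ** 2 - x[inside] ** 2)
kernel /= kernel.sum()

# Frequency-domain deconvolution with a small Wiener-style regularizer to
# avoid dividing by near-zero kernel frequencies.
K = np.fft.fft(np.fft.ifftshift(kernel))
D = np.fft.fft(static_dose)
eps = 1e-2
motion_optimized = np.real(np.fft.ifft(D * np.conj(K) / (np.abs(K) ** 2 + eps)))

# Check: blurring the motion-optimized profile with the kernel should
# approximately recover the dose planned for the stationary target.
delivered = np.real(np.fft.ifft(np.fft.fft(motion_optimized) * K))
print("max |delivered - static|:", np.round(np.max(np.abs(delivered - static_dose)), 3))
```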
Van Regenmortel, Tina; Nys, Charlotte; Janssen, Colin R; Lofts, Stephen; De Schamphelaere, Karel A C
2017-08-01
Although chemical risk assessment is still mainly conducted on a substance-by-substance basis, organisms in the environment are typically exposed to mixtures of substances. Risk assessment procedures should therefore be adapted to fit these situations. Four mixture risk assessment methodologies were compared for risk estimations of mixtures of copper (Cu), zinc (Zn), and nickel (Ni). The results showed that use of the log-normal species sensitivity distribution (SSD) instead of the best-fit distribution and sampling species sensitivities independently for each metal instead of using interspecies correlations in metal sensitivity had little impact on risk estimates. Across 4 different monitoring datasets, between 0% and 52% of the target water samples were estimated to be at risk, but only between 0% and 15% of the target water samples were at risk because of the mixture of metals and not any single metal individually. When a natural baseline database was examined, it was estimated that 10% of the target water samples were at risk because of single metals or their mixtures when the most conservative method was used (concentration addition [CA] applied directly to the SSD, i.e., CA_SSD). However, the issue of metal mixture risk at geochemical baseline concentrations became relatively small (2% of target water samples) when a theoretically more correct method was used (CA applied to individual dose-response curves, i.e., CA_DRC). Finally, across the 4 monitoring datasets, the following order of conservatism for the 4 methods was shown (from most to least conservative, with ranges of median margin of safety [MoS] relative to CA_SSD): CA_SSD > CA_DRC (MoS = 1.17-1.25) > IA_DRC (independent action [IA] applied to individual dose-response curves; MoS = 1.38-1.60) > IA_SSD (MoS = 1.48-1.72). Therefore, it is suggested that these 4 methods can be used in a general tiered scheme for the risk assessment of metal mixtures in a regulatory context. In this scheme, the CA_SSD method could serve as a first (conservative) tier to identify situations with likely no potential risk at all, regardless of the method used (the sum toxic unit expressed relative to the 5% hazardous concentration [SumTU_HC5] < 1), and the IA_SSD method to identify situations of potential risk, also regardless of the method used (the multisubstance potentially affected fraction of species using the IA_SSD method [msPAF_IA,SSD] > 0.05). The CA_DRC and IA_DRC methods could be used for site-specific assessment of situations that fall in between (SumTU_HC5 > 1 and msPAF_IA,SSD < 0.05). Environ Toxicol Chem 2017;36:2123-2138. © 2017 SETAC.
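The two SSD-level tiers mentioned above can be sketched numerically: CA_SSD as summed toxic units relative to the HC5, and IA_SSD as the multisubstance potentially affected fraction. The log-normal SSD parameters and exposure concentrations below are hypothetical, not the Cu/Zn/Ni datasets of the study.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical log-normal species sensitivity distributions (SSD) for three
# metals: mean and sd of log10(chronic effect concentration, ug/L).
ssd = {"Cu": (1.2, 0.6), "Zn": (1.8, 0.7), "Ni": (1.9, 0.8)}
exposure = {"Cu": 3.0, "Zn": 15.0, "Ni": 8.0}     # measured concentrations, ug/L

def hc5(mu, sigma):
    """5% hazardous concentration from a log10-normal SSD."""
    return 10 ** norm.ppf(0.05, loc=mu, scale=sigma)

# Tier 1 (most conservative): concentration addition applied to the SSD.
sum_tu_hc5 = sum(exposure[m] / hc5(*ssd[m]) for m in ssd)

# Tier 2: independent action applied to the SSD -> multisubstance potentially
# affected fraction (msPAF); PAF_i is the fraction of species affected by metal i.
paf = {m: norm.cdf(np.log10(exposure[m]), loc=ssd[m][0], scale=ssd[m][1]) for m in ssd}
mspaf_ia = 1.0 - np.prod([1.0 - p for p in paf.values()])

print(f"SumTU_HC5 = {sum_tu_hc5:.2f}  (potential risk cannot be excluded if > 1)")
print(f"msPAF_IA,SSD = {mspaf_ia:.3f}  (potential risk if > 0.05)")
```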
Characterizing resonant component in speech: A different view of tracking fundamental frequency
NASA Astrophysics Data System (ADS)
Dong, Bin
2017-05-01
Motivated by the nonlinearity, nonstationarity and modulations present in speech, the Hilbert-Huang Transform and cyclostationarity analysis are employed in sequence to investigate speech resonance in vowels. Cyclostationarity analysis is not applied directly to the target vowel but to its intrinsic mode functions, one by one. Because the fundamental frequency in speech is equivalent to the cyclic frequency in cyclostationarity analysis, the modulation intensity distributions of the intrinsic mode functions carry considerable information for estimating the fundamental frequency. To highlight the relationship between frequency and time, a pseudo-Hilbert spectrum is proposed here in place of the Hilbert spectrum. Contrasting the pseudo-Hilbert spectra and the modulation intensity distributions of the intrinsic mode functions shows that there is usually one intrinsic mode function that acts as the fundamental component of the vowel. The fundamental frequency of the vowel can then be determined by tracing the pseudo-Hilbert spectrum of this fundamental component along the time axis. This latter method is more robust for estimating the fundamental frequency in the presence of nonlinear components. Two vowels, [a] and [i], taken from the FAU Aibo Emotion Corpus speech database, are used to validate these findings.
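Assuming the fundamental component (one intrinsic mode function) has already been isolated, for example with an empirical mode decomposition implementation, the sketch below traces its instantaneous frequency with the analytic signal; the synthetic frequency glide and frame length are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

fs = 16_000                                  # Hz, speech sampling rate
t = np.arange(0, 0.5, 1 / fs)

# Stand-in for the intrinsic mode function that carries the fundamental:
# a vowel-like fundamental gliding from 110 Hz to 130 Hz.
f0_true = 110 + 40 * t
fundamental = np.sin(2 * np.pi * np.cumsum(f0_true) / fs)

# Analytic signal -> instantaneous phase -> instantaneous frequency track.
analytic = hilbert(fundamental)
phase = np.unwrap(np.angle(analytic))
f0_track = np.diff(phase) * fs / (2 * np.pi)

# Median over short frames gives a robust F0 trajectory along the time axis.
frame = int(0.02 * fs)
f0_frames = [np.median(f0_track[i:i + frame]) for i in range(0, f0_track.size, frame)]
print([round(f, 1) for f in f0_frames[:5]])
```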
Comparison of treatment plans: a retrospective study by the method of radiobiological evaluation
NASA Astrophysics Data System (ADS)
Puzhakkal, Niyas; Kallikuzhiyil Kochunny, Abdullah; Manthala Padannayil, Noufal; Singh, Navin; Elavan Chalil, Jumanath; Kulangarakath Umer, Jamshad
2016-09-01
There are many situations in radiotherapy where multiple treatment plans need to be compared to select an optimal plan. In this study we applied the radiobiological method of plan evaluation to verify the treatment plan comparison procedure used in our clinical practice. We estimated various radiobiological dose indices and correlated them with physical dose metrics for a total of 30 patients representing typical cases of head and neck, prostate and brain tumors. Three sets of plans, along with a clinically approved plan (final plan), treated by either Intensity Modulated Radiation Therapy (IMRT) or Rapid Arc (RA) techniques, were considered. The study showed improved target coverage for the final plans; however, no appreciable differences were noticed in the doses and complication probabilities of organs at risk. Although all four plans showed adequate dose distributions, from a dosimetric point of view the final plan had the most acceptable dose distribution. The estimated biological outcome and dose volume histogram data showed the smallest differences between plans for IMRT compared with RA. Our retrospective study, based on 120 plans, validated the radiobiological method of plan evaluation. The tumor cure and normal tissue complication probabilities were found to be correlated with the corresponding physical dose indices.
Automated mapping of explosives particles in composition C-4 fingerprints.
Verkouteren, Jennifer R; Coleman, Jessica L; Cho, Inho
2010-03-01
A method is described to perform automated mapping of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) particles in C-4 fingerprints. The method employs polarized light microscopy and image analysis to map the entire fingerprint and the distribution of RDX particles. This method can be used to evaluate a large number of fingerprints to aid in the development of threat libraries that can be used to determine performance requirements of explosive trace detectors. A series of 50 C-4 fingerprints were characterized, and results show that the number of particles varies significantly from print to print, and within a print. The particle size distributions can be used to estimate the mass of RDX in the fingerprint. These estimates were found to be within ±26% (relative) of the results obtained from dissolution gas chromatography/micro-electron capture detection for four of six prints, which is quite encouraging for a particle counting approach. By evaluating the average mass and frequency of particles with respect to size for this series of fingerprints, we conclude that particles 10-20 µm in diameter could be targeted to improve detection of traces of C-4 explosives.
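A sketch of the mass estimate from a particle size distribution, treating each mapped particle as an RDX sphere. The simulated diameters are illustrative, and the real method works from image-analysis measurements rather than an assumed log-normal size distribution.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical equivalent-circular diameters (micrometres) of RDX particles
# mapped in one fingerprint by polarized light microscopy and image analysis.
diameters_um = rng.lognormal(mean=np.log(12.0), sigma=0.6, size=800)

RHO_RDX = 1.82e-12                           # g per um^3 (1.82 g/cm^3)

# Mass of each particle assuming spherical geometry: m = rho * (pi/6) * d^3
masses_ng = RHO_RDX * (np.pi / 6.0) * diameters_um ** 3 * 1e9
total_ng = masses_ng.sum()

# Which sizes dominate the deposited mass?  Mid-sized particles often carry a
# large share of the total, which is the motivation for targeting 10-20 um
# particles when improving trace detection.
in_band = (diameters_um >= 10) & (diameters_um < 20)
print(f"total RDX mass: {total_ng:.0f} ng")
print(f"fraction of mass in 10-20 um particles: {masses_ng[in_band].sum() / total_ng:.2f}")
```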
Game theoretic sensor management for target tracking
NASA Astrophysics Data System (ADS)
Shen, Dan; Chen, Genshe; Blasch, Erik; Pham, Khanh; Douville, Philip; Yang, Chun; Kadar, Ivan
2010-04-01
This paper develops and evaluates a game-theoretic approach to distributed sensor-network management for target tracking via sensor-based negotiation. We present a distributed sensor-based negotiation game model for sensor management in multi-sensor multi-target tracking situations. In our negotiation framework, each negotiation agent represents a sensor, and each sensor maximizes its utility using a game approach. The greediness of each sensor is limited by the fact that sensor-to-target assignment efficiency decreases if too many sensor resources are assigned to the same target. This is similar to market mechanisms in the real world, such as agreements between buyers and sellers in an auction market. Sensors are willing to switch targets so that they can obtain their highest utility and apply their resources most efficiently. Our subgame-perfect-equilibrium-based negotiation strategies assign sensors to targets dynamically and in a distributed manner. Numerical simulations are performed to demonstrate our sensor-based negotiation approach for distributed sensor management.
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Anastasio, Mark A.
2017-12-01
The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.
Heath, Matthew; Manzone, Joseph; Khan, Michaela; Davarpanah Jazi, Shirin
2017-10-01
A number of studies have reported that grasps and manual estimations of differently sized target objects (e.g., 20 through 70 mm) violate and adhere to Weber's law, respectively (e.g., Ganel et al. 2008a, Curr Biol 18:R599-R601), a result interpreted as evidence that separate visual codes support actions (i.e., absolute) and perceptions (i.e., relative). More recent work employing a broader range of target objects (i.e., 5 through 120 mm) has called this claim into question and proposed that grasps for 'larger' target objects (i.e., >20 mm) elicit an inverse relationship to Weber's law and that manual estimations for target objects greater than 40 mm violate the law (Bruno et al. 2016, Neuropsychologia 91:327-334). In accounting for this finding, it was proposed that biomechanical limits in aperture shaping preclude the application of Weber's law for larger target objects. It is, however, important to note that the work supporting a biomechanical account may have employed target objects that approached, or were beyond, some participants' maximal aperture separation. The present investigation examined whether grasps and manual estimations differentially adhere to Weber's law across a continuous range of functionally 'graspable' target objects (i.e., 10,…,80% of participant-specific maximal aperture separation). In addition, we employed a method of adjustment task to examine whether manual estimation provides a valid proxy for a traditional measure of perceptual judgment. Manual estimation and method of adjustment tasks demonstrated adherence to Weber's law across the continuous range of target objects used here, whereas grasps violated the law. Thus, the results indicate that grasps and manual estimations of graspable target objects are, respectively, mediated via absolute and relative visual information.
Petterson, S; Roser, D; Deere, D
2015-09-01
It is proposed that the next revision of the Australian Drinking Water Guidelines will include 'health-based targets', where the required level of potable water treatment quantitatively relates to the magnitude of source water pathogen concentrations. To quantify likely Cryptosporidium concentrations in southern Australian surface source waters, the databases for 25 metropolitan water supplies with good historical records, representing a range of catchment sizes, land use and climatic regions were mined. The distributions and uncertainty intervals for Cryptosporidium concentrations were characterized for each site. Then, treatment targets were quantified applying the framework recommended in the World Health Organization Guidelines for Drinking-Water Quality 2011. Based on total oocyst concentrations, and not factoring in genotype or physiological state information as it relates to infectivity for humans, the best estimates of the required level of treatment, expressed as log10 reduction values, ranged among the study sites from 1.4 to 6.1 log10. Challenges associated with relying on historical monitoring data for defining drinking water treatment requirements were identified. In addition, the importance of quantitative microbial risk assessment input assumptions on the quantified treatment targets was investigated, highlighting the need for selection of locally appropriate values.
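A simplified sketch of the WHO-style back-calculation from a source-water oocyst concentration to a required log10 reduction value, using an exponential dose-response model and a tolerable annual infection risk. All parameter values are generic placeholders rather than the study's site-specific inputs.

```python
import numpy as np

# Illustrative back-calculation of the required treatment performance
# (log10 reduction value, LRV) from a source-water Cryptosporidium concentration.
c_source = 2.0          # oocysts per litre in raw water
volume = 1.0            # litres of unboiled drinking water consumed per day
r = 0.2                 # exponential dose-response parameter (assumed infectivity)
risk_target = 1e-4      # tolerable probability of infection per person per year

def annual_risk(lrv):
    """Annual infection risk for a given log10 reduction through treatment."""
    dose = c_source * 10 ** (-lrv) * volume
    p_day = 1.0 - np.exp(-r * dose)
    return 1.0 - (1.0 - p_day) ** 365

# Smallest LRV that meets the health-based target (coarse grid search).
lrvs = np.arange(0.0, 8.0, 0.01)
required = lrvs[np.argmax([annual_risk(l) <= risk_target for l in lrvs])]
print(f"required treatment: {required:.1f} log10 reduction "
      f"(annual risk {annual_risk(required):.1e})")
```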
Enhanced electron emission from coated metal targets: Effect of surface thickness on performance
NASA Astrophysics Data System (ADS)
Madas, Saibabu; Mishra, S. K.; Upadhyay Kahaly, Mousumi
2018-03-01
In this work, we establish an analytical formalism to address the temperature dependent electron emission from a metallic target with thin coating, operating at a finite temperature. Taking into account three dimensional parabolic energy dispersion for the target (base) material and suitable thickness dependent energy dispersion for the coating layer, Fermi Dirac statistics of electron energy distribution and Fowler's mechanism of the electron emission, we discuss the dependence of the emission flux on the physical properties such as the Fermi level, work function, thickness of the coating material, and operating temperature. Our systematic estimation of how the thickness of coating affects the emission current demonstrates superior emission characteristics for thin coating layer at high temperature (above 1000 K), whereas in low temperature regime, a better response is expected from thicker coating layer. This underlying fundamental behavior appears to be essentially identical for all configurations when work function of the coating layer is lower than that of the bulk target work function. The analysis and predictions could be useful in designing new coated materials with suitable thickness for applications in the field of thin film devices and field emitters.
Applying the log-normal distribution to target detection
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
1992-09-01
Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychophysical data are plotted on a logarithmic scale. It has the additional advantage of being restricted to positive values, an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
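In practice the fitted log-normal acts as a cumulative detection curve. The sketch below fits a log-normal to hypothetical observer thresholds and reads probability of detection off its CDF; the threshold data and contrast values are invented for illustration.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(9)

# Hypothetical observer thresholds (e.g. contrast needed to detect a target),
# assumed to scatter log-normally across observers and trials.
thresholds = rng.lognormal(mean=np.log(0.8), sigma=0.4, size=200)

# Fit a log-normal with the location fixed at zero (thresholds are positive).
shape, loc, scale = lognorm.fit(thresholds, floc=0.0)

# Probability of detection at a given displayed contrast is the fitted CDF.
for contrast in (0.4, 0.8, 1.6):
    p_det = lognorm.cdf(contrast, shape, loc=loc, scale=scale)
    print(f"contrast {contrast:>4}: P(detection) ~ {p_det:.2f}")
```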
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, S; Komori, M; Toshito, T
Purpose: Since proton therapy has the ability to selectively deliver a dose to a target tumor, the dose distribution should be accurately measured. A precise and efficient method to evaluate the dose distribution is desired. We found that luminescence was emitted from water during proton irradiation and thought this phenomenon could be used for estimating the dose distribution. Methods: For this purpose, we placed water phantoms on the table of a spot-scanning proton-therapy system, and luminescence images of these phantoms were measured with a high-sensitivity cooled charge coupled device (CCD) camera during proton-beam irradiation. We also imaged pure-water, fluorescein-solution and acrylic-block phantoms. Three-dimensional images were reconstructed from the projection data. Results: The luminescence images of water phantoms during the proton-beam irradiations showed clear Bragg peaks, and the proton ranges measured from the images were almost the same as those obtained with an ionization chamber. The image of the pure-water phantom also showed almost the same distribution as the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of the fluorescein solution had ∼3 times higher intensity than water, with the same proton range as that of water. The luminescence image of the acrylic phantom had a 14.5% shorter proton range than that of water, and this range agreed reasonably well with the calculated value. The luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 s. Three-dimensional images, which carry more quantitative information, were also successfully obtained. Conclusion: Luminescence imaging during proton-beam irradiation has the potential to be a new method for range estimation in proton therapy.
Adaptive early detection ML/PDA estimator for LO targets with EO sensors
NASA Astrophysics Data System (ADS)
Chummun, Muhammad R.; Kirubarajan, Thiagalingam; Bar-Shalom, Yaakov
2000-07-01
The batch Maximum Likelihood Estimator combined with Probabilistic Data Association (ML-PDA) has been shown to be effective in acquiring low observable (LO) - low SNR - non-maneuvering targets in the presence of heavy clutter. This paper applies the ML-PDA estimator with signal strength or amplitude information (AI) in a sliding-window fashion to detect high-speed targets in heavy clutter using electro-optical (EO) sensors. The initial time and the length of the sliding window are adjusted adaptively according to the information content of the received measurements. A track validation scheme via hypothesis testing is developed to confirm the estimated track, that is, the presence of a target, in each window. The sliding-window ML-PDA approach, together with track validation, enables early detection by rejecting noninformative scans, target reacquisition in case of temporary target disappearance, and the handling of targets with speeds evolving over time. The proposed algorithm is shown to detect the target, which is hidden in as many as 600 false alarms per scan, 10 frames earlier than the Multiple Hypothesis Tracking (MHT) algorithm.
Wang, Zhirui; Xu, Jia; Huang, Zuzhen; Zhang, Xudong; Xia, Xiang-Gen; Long, Teng; Bao, Qian
2016-03-16
To detect and estimate ground slowly moving targets in airborne single-channel synthetic aperture radar (SAR), a road-aided ground moving target indication (GMTI) algorithm is proposed in this paper. First, the road area is extracted from a focused SAR image based on radar vision. Second, after stationary clutter suppression in the range-Doppler domain, a moving target is detected and located in the image domain via the watershed method. The target's position on the road as well as its radial velocity can be determined according to the target's offset distance and traffic rules. Furthermore, the target's azimuth velocity is estimated based on the road slope obtained via polynomial fitting. Compared with the traditional algorithms, the proposed method can effectively cope with slowly moving targets partly submerged in a stationary clutter spectrum. In addition, the proposed method can be easily extended to a multi-channel system to further improve the performance of clutter suppression and motion estimation. Finally, the results of numerical experiments are provided to demonstrate the effectiveness of the proposed algorithm.
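A minimal sketch of the standard SAR/GMTI relation underlying the radial-velocity step described above: a mover's radial velocity displaces it in azimuth by roughly Δx ≈ -(v_r / V_p)·R, so the measured offset from the road gives an estimate of v_r. The function name and example numbers are hypothetical, not from the paper.

```python
# Minimal sketch: estimate radial velocity from the azimuth offset of a detected mover
# relative to the road centerline, using the classical azimuth-displacement relation.
def radial_velocity_from_offset(azimuth_offset_m: float,
                                platform_velocity_mps: float,
                                slant_range_m: float) -> float:
    """Estimate radial velocity (m/s) from the azimuth offset to the road (hypothetical helper)."""
    return -azimuth_offset_m * platform_velocity_mps / slant_range_m

# Example: a 120 m azimuth offset at 10 km slant range with a 150 m/s platform.
print(radial_velocity_from_offset(120.0, 150.0, 10_000.0))  # ≈ -1.8 m/s
```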
NASA Astrophysics Data System (ADS)
Carrer, Dominique; Roujean, Jean-Louis; Hautecoeur, Olivier; Elias, Thierry
2010-05-01
This paper presents an innovative method for obtaining a daily, quality-controlled estimate of the aerosol optical thickness (AOT) of a vertical atmospheric column over the continents. Because land surface properties are more stationary than the atmosphere, the temporal dimension is exploited for simultaneous retrieval of the surface and aerosol bidirectional reflectance distribution function (BRDF) from a kernel-driven reflectance model. Off-zenith illumination geometry enhances the forward scattering peak of the aerosol, which improves the retrieval of AOT from the aerosol BRDF. The solution is obtained through an unconstrained linear inversion procedure and propagated in time using a Kalman filter. The validity of the BRDF model is demonstrated through numerical experiments using the 6S atmospheric code. The application is carried out with data from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) instrument on board the geostationary Meteosat Second Generation (MSG) satellite, from June 2005 to August 2007 for midlatitude regions and from March 2006 to June 2006 over desert sites. The satellite-derived SEVIRI AOT compares favorably with Aerosol Robotic Network (AERONET) measurements for a number of contrasted stations, and also with similar Moderate Resolution Imaging Spectroradiometer (MODIS) products, within 20% relative accuracy. The method appears competitive for tracking anthropogenic aerosol emissions in the troposphere and shows potential for the challenging estimation of dust events over bright targets. Moreover, a high-frequency distribution of AOT provides hints as to the variability of pollutants according to town density and, potentially, motor vehicle traffic. The outcomes of the present study are expected to promote monitoring of the global distributions of natural and anthropogenic aerosol sources and sinks, which are receiving increased attention because of their climatic implications.
NASA Astrophysics Data System (ADS)
Morales, Roberto; Barriga-Carrasco, Manuel D.; Casas, David
2017-04-01
The instantaneous charge state of uranium ions traveling through a fully ionized hydrogen plasma has been theoretically studied and compared with one of the first energy loss experiments in plasmas, carried out at GSI-Darmstadt by Hoffmann et al. in the 1990s. For this purpose, two different methods to estimate the instantaneous charge state of the projectile have been employed: (1) rate equations using ionization and recombination cross sections and (2) equilibrium charge state formulas for plasmas. The equilibrium charge state has also been obtained from these ionization and recombination cross sections and compared with the former equilibrium formulas. The equilibrium charge state of projectiles in plasmas is not always reached; it depends mainly on the projectile velocity and the plasma density. Therefore, a non-equilibrium, or instantaneous, description of the projectile charge is necessary. The charge state of projectile ions cannot be measured except after exiting the target, and experimental data remain very scarce. Thus, the validity of our charge state model is checked by comparing the theoretical predictions with an energy loss experiment, as the energy loss has a generally quadratic dependence on the projectile charge state. The dielectric formalism has been used to calculate the plasma stopping power, including the Brandt-Kitagawa (BK) model to describe the charge distribution of the projectile. In this charge distribution, the instantaneous number of bound electrons, instead of the equilibrium number, has been taken into account. Comparison of our theoretical predictions with experiments shows the necessity of including the instantaneous charge state and the BK charge distribution for a correct energy loss estimation. The results also show that the initial charge state has a strong influence on the estimated energy loss of the uranium ions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mabhouti, H; Sanli, E; Cebe, M
Purpose: Brain stereotactic radiosurgery involves the use of precisely directed, single-session radiation to create a desired radiobiologic response within the brain target, with acceptably minimal effects on surrounding structures or tissues. In this study, a dosimetric comparison of Truebeam 2.0 and Cyberknife M6 treatment plans was made. Methods: For the Truebeam 2.0 machine, treatment planning was done using a 2 full-arc VMAT technique with a 6 FFF beam on the CT scan of a Rando phantom, simulating stereotactic treatment of one brain metastasis. The dose distributions were calculated using the Eclipse treatment planning system with the Acuros XB algorithm. Treatment planning for the same target was also done for the Cyberknife M6 machine with the Multiplan treatment planning system using a Monte Carlo algorithm. Using the same film batch, the net OD-to-dose calibration curve was obtained for both machines by delivering 0-800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. Dose distributions were measured using EBT3 film dosimeters, and the measured and calculated doses were compared. Results: The dose distributions in the target and 2 cm beyond the target edge were calculated on the TPSs and measured using EBT3 film. For the Cyberknife plans, the gamma analysis passing rates between measured and calculated dose distributions were 99.2% and 96.7% for the target and the peripheral region of the target, respectively. For the Truebeam plans, the gamma analysis passing rates were 99.1% and 95.5%, respectively. Conclusion: Although the target dose distribution is calculated accurately by both the Acuros XB and Monte Carlo algorithms, the Monte Carlo algorithm predicts the dose distribution around the peripheral region of the target more accurately than the Acuros algorithm.
Modeling population exposures to silver nanoparticles present in consumer products
NASA Astrophysics Data System (ADS)
Royce, Steven G.; Mukherjee, Dwaipayan; Cai, Ting; Xu, Shu S.; Alexander, Jocelyn A.; Mi, Zhongyuan; Calderon, Leonardo; Mainelis, Gediminas; Lee, KiBum; Lioy, Paul J.; Tetley, Teresa D.; Chung, Kian Fan; Zhang, Junfeng; Georgopoulos, Panos G.
2014-11-01
Exposures of the general population to manufactured nanoparticles (MNPs) are expected to keep rising due to increasing use of MNPs in common consumer products (PEN 2014). The present study focuses on characterizing ambient and indoor population exposures to silver MNPs (nAg). For situations where detailed, case-specific exposure-related data are not available, as in the present study, a novel tiered modeling system, Prioritization/Ranking of Toxic Exposures with GIS (geographic information system) Extension (PRoTEGE), has been developed: it employs a product life cycle analysis (LCA) approach coupled with basic human life stage analysis (LSA) to characterize potential exposures to chemicals of current and emerging concern. The PRoTEGE system has been implemented for ambient and indoor environments, utilizing available MNP production, usage, and properties databases, along with laboratory measurements of potential personal exposures from consumer spray products containing nAg. Modeling of environmental and microenvironmental levels of MNPs employs probabilistic material flow analysis combined with product LCA to account for releases during manufacturing, transport, usage, disposal, etc. Human exposure and dose characterization further employ screening microenvironmental modeling and intake fraction methods combined with LSA for potentially exposed populations, to assess differences associated with gender, age, and demographics. Population distributions of intakes, estimated using the PRoTEGE framework, are consistent with published individual-based intake estimates, demonstrating that PRoTEGE is capable of capturing realistic exposure scenarios for the US population. Distributions of intakes are also used to calculate biologically relevant population distributions of uptakes and target tissue doses through human airway dosimetry modeling that takes into account product MNP size distributions and age-relevant physiological parameters.
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.
A regressive methodology for estimating missing data in rainfall daily time series
NASA Astrophysics Data System (ADS)
Barca, E.; Passarella, G.
2009-04-01
The "presence" of gaps in environmental data time series represents a very common, but extremely critical problem, since it can produce biased results (Rubin, 1976). Missing data plagues almost all surveys. The problem is how to deal with missing data once it has been deemed impossible to recover the actual missing values. Apart from the amount of missing data, another issue which plays an important role in the choice of any recovery approach is the evaluation of "missingness" mechanisms. When data missing is conditioned by some other variable observed in the data set (Schafer, 1997) the mechanism is called MAR (Missing at Random). Otherwise, when the missingness mechanism depends on the actual value of the missing data, it is called NCAR (Not Missing at Random). This last is the most difficult condition to model. In the last decade interest arose in the estimation of missing data by using regression (single imputation). More recently multiple imputation has become also available, which returns a distribution of estimated values (Scheffer, 2002). In this paper an automatic methodology for estimating missing data is presented. In practice, given a gauging station affected by missing data (target station), the methodology checks the randomness of the missing data and classifies the "similarity" between the target station and the other gauging stations spread over the study area. Among different methods useful for defining the similarity degree, whose effectiveness strongly depends on the data distribution, the Spearman correlation coefficient was chosen. Once defined the similarity matrix, a suitable, nonparametric, univariate, and regressive method was applied in order to estimate missing data in the target station: the Theil method (Theil, 1950). Even though the methodology revealed to be rather reliable an improvement of the missing data estimation can be achieved by a generalization. A first possible improvement consists in extending the univariate technique to the multivariate approach. Another approach follows the paradigm of the "multiple imputation" (Rubin, 1987; Rubin, 1988), which consists in using a set of "similar stations" instead than the most similar. This way, a sort of estimation range can be determined allowing the introduction of uncertainty. Finally, time series can be grouped on the basis of monthly rainfall rates defining classes of wetness (i.e.: dry, moderately rainy and rainy), in order to achieve the estimation using homogeneous data subsets. We expect that integrating the methodology with these enhancements will certainly improve its reliability. The methodology was applied to the daily rainfall time series data registered in the Candelaro River Basin (Apulia - South Italy) from 1970 to 2001. REFERENCES D.B., Rubin, 1976. Inference and Missing Data. Biometrika 63 581-592 D.B. Rubin, 1987. Multiple Imputation for Nonresponce in Surveys, New York: John Wiley & Sons, Inc. D.B. Rubin, 1988. An overview of multiple imputation. In Survey Research Section, pp. 79-84, American Statistical Association, 1988. J.L., Schafer, 1997. Analysis of Incomplete Multivariate Data, Chapman & Hall. J., Scheffer, 2002. Dealing with Missing Data. Res. Lett. Inf. Math. Sci. 3, 153-160. Available online at http://www.massey.ac.nz/~wwiims/research/letters/ H. Theil, 1950. A rank-invariant method of linear and polynomial regression analysis. Indicationes Mathematicae, 12, pp.85-91.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.
Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan
2017-12-06
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
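A minimal sketch of Dempster's rule of combination, the evidence-fusion step named in the abstract; the toy three-state frame and mass values are assumptions, whereas the actual method fuses full link and path travel time distributions.

```python
# Minimal sketch: Dempster's rule of combination over a toy frame of discernment.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic probability assignments whose keys are frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                      # mass assigned to contradictory evidence
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence about a link's travel time state from two detector types.
m_point = {frozenset({"fast"}): 0.6, frozenset({"fast", "medium"}): 0.4}
m_interval = {frozenset({"fast"}): 0.5, frozenset({"medium", "slow"}): 0.5}
print(dempster_combine(m_point, m_interval))  # {'fast'}: ~0.71, {'medium'}: ~0.29
```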
On moments of the multiplicity events of slow target fragments in relativistic Sulfur-ion collisions
NASA Astrophysics Data System (ADS)
Abdelsalam, A.; Kamel, S.; Rashed, N.; Sabry, N.
2014-07-01
A detailed study of the multiplicity characteristics of the slow target fragments emitted in relativistic heavy-ion collisions has been carried out at ELab = 3.7A and 200A GeV using a 32S projectile. The dependence on beam energy and target size of the black particles produced in the full phase space of 32S-emulsion (32S-Em) interactions is investigated in terms of their moments (mean, variance, skewness and kurtosis). The various order moments of target fragments emitted in the interactions of 32S beams with the heavy (AgBr) target nuclei are estimated in the forward (FHS) and backward (BHS) hemispheres. The variance-to-mean ratios at both energies show that the multiplicity distributions (MDs) are not Poissonian and that the emission of target fragments is strongly correlated in the forward direction. The degree of anisotropy of fragment emission and the nature of correlation among the emitted fragments are investigated. The energy dependence of entropy is examined in both hemispheres; the entropy values normalized to the average multiplicity are found to be energy independent. Scaling of the MD of black particles produced in these interactions has been studied to verify the validity of the scaling hypothesis via two scaling functions (Koba-Nielsen-Olesen (KNO) scaling and Hegyi scaling). A simplified universal function has been used in each scaling to display the experimental data.
Vassall, Anna; Pickles, Michael; Chandrashekar, Sudhashree; Boily, Marie-Claude; Shetty, Govindraj; Guinness, Lorna; Lowndes, Catherine M; Bradley, Janet; Moses, Stephen; Alary, Michel; Vickerman, Peter
2014-09-01
Avahan is a large-scale HIV preventive intervention targeting high-risk populations in south India. We assessed the cost-effectiveness of Avahan to inform global and national funding institutions that are considering investing in worldwide HIV prevention in concentrated epidemics. We estimated cost-effectiveness from a programme perspective in 22 districts in four high-prevalence states. We used the UNAIDS Costing Guidelines for HIV Prevention Strategies as the basis for our costing method, and calculated effect estimates using a dynamic transmission model of HIV and sexually transmitted disease transmission that was parameterised and fitted to locally observed behavioural and prevalence trends. We calculated incremental cost-effectiveness ratios (ICERs), comparing the incremental cost of Avahan per disability-adjusted life-year (DALY) averted versus a no-Avahan counterfactual scenario. We also estimated incremental cost per HIV infection averted and incremental cost per person reached. Avahan reached roughly 150 000 high-risk individuals between 2004 and 2008 in the 22 districts studied, at a mean cost per person reached of US$327 during the 4 years. This reach resulted in an estimated 61 000 HIV infections averted, with roughly 11 000 HIV infections averted in the general population, at a mean incremental cost per HIV infection averted of $785 (SD 166). We estimate that roughly 1 million DALYs were averted across the 22 districts, at a mean incremental cost per DALY averted of $46 (SD 10). Future antiretroviral treatment (ART) cost savings during the lifetime of the cohort exposed to HIV prevention were estimated to be more than $77 million (compared with the slightly more than $50 million spent on Avahan in the 22 districts during the 4 years of the study). This study provides evidence that the investment in targeted HIV prevention programmes in south India has been cost-effective, and is likely to be cost saving if a commitment is made to provide ART to all who can benefit from it. Policy makers should consider funding and sustaining large-scale targeted HIV prevention programmes in India and beyond. Funding: Bill & Melinda Gates Foundation. Copyright © 2014 Vassall et al. Open Access article distributed under the terms of CC BY-NC-ND. All rights reserved.
Observations of hot stars and eclipsing binaries with FRESIP
NASA Technical Reports Server (NTRS)
Gies, Douglas R.
1994-01-01
The FRESIP project offers an unprecedented opportunity to study pulsations in hot stars (which vary on time scales of a day) over a several-year period. The photometric data will determine what frequencies are present, how or whether the amplitudes change with time, and whether there is a connection between pulsation and mass loss episodes. It would initiate a new field of asteroseismology studies of hot star interiors. A search should be made for selected hot stars for inclusion in the list of project targets. Many of the primary solar-mass targets will be eclipsing binaries, and I present estimates of their frequency and typical light curves. The photometric data, combined with follow-up spectroscopy and interferometric observations, will provide fundamental data on these stars. The data will provide definitive information on the mass ratio distribution of solar-mass binaries (including the incidence of brown dwarf companions) and on the incidence of planets in binary systems.
Audio Tracking in Noisy Environments by Acoustic Map and Spectral Signature.
Crocco, Marco; Martelli, Samuele; Trucco, Andrea; Zunino, Andrea; Murino, Vittorio
2018-05-01
A novel method is proposed for generic target tracking by audio measurements from a microphone array. To cope with noisy environments characterized by persistent and high energy interfering sources, a classification map (CM) based on spectral signatures is calculated by means of a machine learning algorithm. Next, the CM is combined with the acoustic map, describing the spatial distribution of sound energy, in order to obtain a cleaned joint map in which contributions from the disturbing sources are removed. A likelihood function is derived from this map and fed to a particle filter yielding the target location estimation on the acoustic image. The method is tested on two real environments, addressing both speaker and vehicle tracking. The comparison with a couple of trackers, relying on the acoustic map only, shows a sharp improvement in performance, paving the way to the application of audio tracking in real challenging environments.
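A generic bootstrap particle-filter step, sketched here only to illustrate the final estimation stage described above; the random-walk motion model, toy likelihood, and parameter values are assumptions, not the authors' implementation.

```python
# Minimal sketch: one bootstrap particle-filter step for 2-D target localization. Particles
# carry candidate locations, weights come from a likelihood map, and the estimate is the
# weighted mean of the particles.
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, likelihood, motion_std=0.1):
    # 1) propagate particles with a simple random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 2) re-weight by the likelihood (here a stand-in for the cleaned acoustic map)
    weights = weights * likelihood(particles)
    weights /= weights.sum()
    # 3) multinomial resampling when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate

particles = rng.uniform(-1, 1, size=(500, 2))            # candidate (x, y) locations
weights = np.full(500, 1.0 / 500)
toy_likelihood = lambda p: np.exp(-np.sum((p - np.array([0.3, -0.2]))**2, axis=1) / 0.05)
particles, weights, estimate = pf_step(particles, weights, toy_likelihood)
print(estimate)
```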
Fluorescent carbon and graphene oxide nanoparticles synthesized by the laser ablation in liquid
NASA Astrophysics Data System (ADS)
Małolepszy, A.; Błonski, S.; Chrzanowska-Giżyńska, J.; Wojasiński, M.; Płocinski, T.; Stobinski, L.; Szymanski, Z.
2018-04-01
The results of the synthesis of fluorescent carbon dots (CDots) from a graphite target and of reduced graphene oxide (rGO) nanoparticles by nanosecond laser ablation in polyethylene glycol 200 (PEG200) are shown. Two-step laser irradiation (first of the graphite target, then of the resulting suspension) proved very effective for producing CDots; however, ablation in PEG was effective with 1064 nm laser pulses, in contrast to ablation with 355 nm pulses. In the case of the rGO nanoparticles, a similar laser irradiation procedure was less efficient. In both cases, the obtained nanoparticles exhibited strong, broadband photoluminescence with a maximum dependent on the excitation wavelength. The size distribution of the obtained CDots was evaluated using the DLS technique and HRTEM images. The results from both methods show quite good agreement in nanoparticle size estimation, although the DLS method slightly overestimates the nanoparticle diameter.
Esparza, José; Chang, Marie-Louise; Widdus, Roy; Madrid, Yvette; Walker, Neff; Ghys, Peter D
2003-05-16
Once an effective HIV vaccine is discovered, a major challenge will be to ensure its worldwide access. A preventive vaccine with low or moderate efficacy (30-50%) could be a valuable prevention tool, especially if targeted to populations at higher risk of HIV infection. High-efficacy vaccines (80-90%) could be used in larger segments of the population. Estimated "needs" for future HIV vaccines were based on anticipated policies regarding target populations, and were adjusted for "accessibility" and "acceptability" in those populations to arrive at an estimate of "probable uptake", i.e. courses of vaccine likely to be delivered. With a high-efficacy vaccine, global needs are on the order of 690 million full immunization courses, targeting 22% of 15-49 year olds worldwide and 69% in sub-Saharan Africa. With a low/moderate-efficacy vaccine targeted to populations at higher risk of HIV infection, the global needs were estimated to be 260 million full immunization courses, targeting 8% of the population aged 15-49 years worldwide and 41% in sub-Saharan Africa. The current estimate of probable uptake for hypothetical HIV vaccines, using existing health services and delivery systems, was 38% of the estimated need for a high-efficacy vaccine and 19% for a low/moderate-efficacy vaccine. Bridging the gap between the estimated needs and the probable uptake of HIV vaccines will represent a major public health challenge for the future. The potential advantages and disadvantages of targeted versus universal vaccination will have to be considered.
Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle
Shoufan Fang; George Z. Gertner
2000-01-01
When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...
Waveform Optimization for Target Estimation by Cognitive Radar with Multiple Antennas.
Yao, Yu; Zhao, Junhui; Wu, Lenan
2018-05-29
A new scheme based on Kalman filtering to optimize the waveforms of an adaptive multi-antenna radar system for target impulse response (TIR) estimation is presented. This work aims to improve the performance of TIR estimation by making use of the temporal correlation between successive received signals, and to minimize the mean square error (MSE) of the TIR estimate. The waveform design approach is based upon continual learning of the target features at the receiver. In the multiple-antenna scenario, a dynamic feedback control loop is established to monitor in real time the changes in the target features extracted from the received signals, and the transmitter adapts its transmitted waveform accordingly. Finally, the simulation results show that, compared with the waveform design method based on the MAP criterion, the proposed waveform design algorithm improves the performance of TIR estimation for extended targets over multiple iterations, and has a relatively lower level of complexity.
Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan
2010-09-01
Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
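A simplified illustrative Monte Carlo sketch of the idea described above (not the paper's model): sample assumed distributions for sensitivity, a false-positive contribution, and replicate precision error, then back out a distribution of plausible true marker concentrations from an observed value. All parameter values are assumptions.

```python
# Minimal sketch: Monte Carlo correction of an observed qPCR marker concentration for
# assumed false-positive/false-negative behavior and replicate precision error.
import numpy as np

rng = np.random.default_rng(1)
n_draws = 10_000

obs_conc = 1.0e4                                   # observed copies per reaction (hypothetical)
sensitivity = rng.beta(45, 5, n_draws)             # fraction of true targets actually detected
false_pos = rng.lognormal(mean=np.log(50), sigma=1.0, size=n_draws)  # spurious copies
precision = rng.normal(0.0, 0.1, n_draws)          # replicate precision error (log10 scale)

# Subtract the sampled false-positive contribution, then rescale for imperfect sensitivity.
true_conc = np.clip(obs_conc * 10**precision - false_pos, 0, None) / sensitivity
lo, hi = np.percentile(true_conc, [2.5, 97.5])
print(f"median = {np.median(true_conc):.0f} copies, 95% interval = ({lo:.0f}, {hi:.0f})")
```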
Quantitative risk stratification in Markov chains with limiting conditional distributions.
Chan, David C; Pollett, Philip K; Weinstein, Milton C
2009-01-01
Many clinical decisions require patient risk stratification. The authors introduce the concept of limiting conditional distributions, which describe the equilibrium proportion of surviving patients occupying each disease state in a Markov chain with death. Such distributions can quantitatively describe risk stratification. The authors first establish conditions for the existence of a positive limiting conditional distribution in a general Markov chain and describe a framework for risk stratification using the limiting conditional distribution. They then apply their framework to a clinical example of a treatment indicated for high-risk patients, first to infer the risk of patients selected for treatment in clinical trials and then to predict the outcomes of expanding treatment to other populations of risk. For the general chain, a positive limiting conditional distribution exists only if patients in the earliest state have the lowest combined risk of progression or death. The authors show that in their general framework, outcomes and population risk are interchangeable. For the clinical example, they estimate that previous clinical trials have selected the upper quintile of patient risk for this treatment, but they also show that expanded treatment would weakly dominate this degree of targeted treatment, and universal treatment may be cost-effective. Limiting conditional distributions exist in most Markov models of progressive diseases and are well suited to represent risk stratification quantitatively. This framework can characterize patient risk in clinical trials and predict outcomes for other populations of risk.
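A minimal sketch of computing a limiting conditional (quasi-stationary) distribution: restrict the transition matrix to the surviving disease states and normalize its dominant left eigenvector. The three-state transition probabilities below are hypothetical, not taken from the article.

```python
# Minimal sketch: quasi-stationary (limiting conditional) distribution of a Markov chain
# with an absorbing death state, via the dominant left eigenvector of the survival block.
import numpy as np

# Rows/cols: mild, moderate, severe. The death column is omitted, so rows need not sum to 1.
Q = np.array([[0.90, 0.07, 0.01],
              [0.02, 0.85, 0.08],
              [0.00, 0.03, 0.82]])

eigvals, left_vecs = np.linalg.eig(Q.T)       # right eigenvectors of Q.T = left eigenvectors of Q
dominant = np.argmax(eigvals.real)            # Perron eigenvalue of the nonnegative block
qsd = np.abs(left_vecs[:, dominant].real)
qsd /= qsd.sum()                              # normalize to a probability distribution
print(dict(zip(["mild", "moderate", "severe"], np.round(qsd, 3))))
```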
Estimation of d-2H Breakup Neutron Energy Distributions From d-3He
Hoop, B.; Grimes, S. M.; Drosg, M.
2017-06-19
A method is described to estimate deuteron-on-deuteron breakup neutron distributions at 0° using deuterium bombardment of 3He. Breakup neutron distributions are modeled with the product of a Fermi-Dirac distribution and a cumulative logistic distribution function. Four measured breakup neutron distributions from 6.15- to 12.0-MeV deuterons on 3He are compared with thirteen measured distributions from 6.83- to 11.03-MeV deuterons on deuterium. Model parameters that describe d-3He neutron distributions are used to estimate neutron distributions from 6- to 12-MeV deuterons on deuterium.
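A minimal sketch of the product model named above, a Fermi-Dirac-shaped factor multiplied by a cumulative logistic function; the parameter values are placeholders, not fitted values from the paper.

```python
# Minimal sketch: relative breakup-neutron yield vs energy as the product of a
# Fermi-Dirac factor and a cumulative logistic function (placeholder parameters).
import numpy as np

def breakup_spectrum(E, A, E_fd, T_fd, E_cl, w_cl):
    """Relative neutron yield at energy E (MeV): Fermi-Dirac factor x logistic CDF."""
    fermi_dirac = 1.0 / (1.0 + np.exp((E - E_fd) / T_fd))     # high-energy cutoff
    logistic_cdf = 1.0 / (1.0 + np.exp(-(E - E_cl) / w_cl))   # low-energy rise
    return A * fermi_dirac * logistic_cdf

E = np.linspace(0.5, 12.0, 200)
yield_rel = breakup_spectrum(E, A=1.0, E_fd=8.0, T_fd=0.8, E_cl=2.0, w_cl=0.5)
print(yield_rel.max())
```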
Effect of retransmission and retrodiction on estimation and fusion in long-haul sensor networks
Liu, Qiang; Wang, Xin; Rao, Nageswara S. V.; ...
2016-01-01
In a long-haul sensor network, sensors are remotely deployed over a large geographical area to perform certain tasks, such as target tracking. In this work, we study the scenario where sensors take measurements of one or more dynamic targets and send state estimates of the targets to a fusion center via satellite links. The severe loss and delay inherent in the satellite channels reduce the number of estimates successfully arriving at the fusion center, thereby limiting the potential fusion gain and resulting in suboptimal accuracy of the fused estimates. In addition, errors in target-sensor data association can also degrade the estimation performance. To mitigate the effect of imperfect communications on state estimation and fusion, we consider retransmission and retrodiction. The system adopts certain retransmission-based transport protocols so that lost messages can be recovered over time. Besides, retrodiction/smoothing techniques are applied so that the chances of incurring excess delay due to retransmission are greatly reduced. We analyze the extent to which retransmission and retrodiction can improve the performance of delay-sensitive target tracking tasks under variable communication loss and delay conditions. Finally, simulation results of a ballistic target tracking application are shown to demonstrate the validity of our analysis.
Estimation of rates-across-sites distributions in phylogenetic substitution models.
Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J
2003-10-01
Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
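For reference, a minimal sketch of the standard equal-probability discrete gamma that the abstract contrasts with richer discrete estimates: split the mean-one gamma(alpha, alpha) rate distribution into k equally probable bins and use each bin's mean as its category rate. This is the textbook construction, not the authors' code.

```python
# Minimal sketch: equal-probability discrete gamma rate categories (mean rate 1).
import numpy as np
from scipy.stats import gamma

def discrete_gamma_rates(alpha: float, k: int) -> np.ndarray:
    """Category rates for an equal-probability discrete gamma with shape alpha and k classes."""
    edges = gamma.ppf(np.linspace(0.0, 1.0, k + 1), a=alpha, scale=1.0 / alpha)
    # Mean within each bin via the incomplete-gamma identity: for a mean-1 gamma,
    # E[X · 1{a <= X < b}] = CDF_{alpha+1}(b) - CDF_{alpha+1}(a).
    upper = gamma.cdf(edges[1:], a=alpha + 1.0, scale=1.0 / alpha)
    lower = gamma.cdf(edges[:-1], a=alpha + 1.0, scale=1.0 / alpha)
    return k * (upper - lower)

print(discrete_gamma_rates(alpha=0.5, k=4))   # the k category rates average to 1
```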
NASA Astrophysics Data System (ADS)
Lacey, F.; Marais, E. A.; Wiedinmyer, C.; Coffey, E.; Pfotenhauer, D.; Henze, D. K.; Evans, M. J.; Hannigan, M.; Morris, E.; Davila, Y.; Mesenbring, E. C.
2017-12-01
The population of Africa is currently projected to double by 2050, which will have significant impacts on anthropogenic emissions and in turn on ambient air quality, especially near population centers. Recent research has also shown that the emission factors used for global inventories are misrepresentative when compared to field measurements in Africa, which leads to inaccuracies in the magnitude and spatial distribution of emissions throughout the continent. As the population in Africa increases, the combination of anthropogenic and biogenic emissions in many regions will lead to changes in atmospheric pollutant concentrations, including particulate matter (PM2.5) and ozone. Combining updated emission estimates, created using emission factors measured in field studies in Africa, with the Community Earth System Model (CESM2) improves predictions of present-day ambient air quality, validated against available observations from field measurements and satellite data. We use these tools to quantify the impacts of anthropogenic emissions on both climate and human health, shown here as estimated premature deaths from chronic exposure to pollutants. Sensitivities derived from model source attribution calculations using the GEOS-Chem adjoint model are then used to examine the impacts of changes in population distribution and shifts in technology moving toward the mid-21st century. With these results, we are able to identify efficient mitigation pathways that target specific regions and anthropogenic activities. These targeted control measures include shifts from traditional to modern cooking technologies, as well as other sector-specific interventions that represent feasible adoptions in Africa over the next several decades. This work provides a potential roadmap towards improved air quality for both governmental and non-governmental organizations as Africa transitions through this period of rapid growth.
Solo dwarfs I: survey introduction and first results for the Sagittarius dwarf irregular galaxy
NASA Astrophysics Data System (ADS)
Higgs, C. R.; McConnachie, A. W.; Irwin, M.; Bate, N. F.; Lewis, G. F.; Walker, M. G.; Côté, P.; Venn, K.; Battaglia, G.
2016-05-01
We introduce the Solitary Local dwarfs survey (Solo), a wide-field photometric study targeting every isolated dwarf galaxy within 3 Mpc of the Milky Way. Solo is based on (u)gi multiband imaging from Canada-France-Hawaii Telescope/MegaCam for northern targets, and Magellan/Megacam for southern targets. All galaxies fainter than M_V ≃ -18 situated beyond the nominal virial radius of the Milky Way and M31 (≳300 kpc) are included in this volume-limited sample, for a total of 42 targets. In addition to reviewing the survey goals and strategy, we present results for the Sagittarius dwarf irregular galaxy (Sag DIG), one of the most isolated, low-mass galaxies, located at the edge of the Local Group. We analyse its resolved stellar populations and their spatial distributions. We provide updated estimates of its central surface brightness and integrated luminosity, and trace its surface brightness profile to a level fainter than 30 mag arcsec^-2. Sag DIG is well described by a highly elliptical (disc-like) system following a single component Sérsic model. However, a low-level distortion is present at the outer edges of the galaxy that, were Sag DIG not so isolated, would likely be attributed to some kind of previous tidal interaction. Further, we find evidence of an extremely low level, extended distribution of stars beyond ∼5 arcmin (>1.5 kpc) that suggests Sag DIG may be embedded in a very low-density stellar halo. We compare the stellar and H I structures of Sag DIG, and discuss results for this galaxy in relation to other isolated, dwarf irregular galaxies in the Local Group.
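For reference, the single-component Sérsic surface-brightness profile mentioned above has the standard form shown below (the general expression, not the fitted parameters of Sag DIG): R_e is the effective radius, I_e the intensity at R_e, n the Sérsic index, and b_n is chosen so that R_e encloses half the total light.

```latex
\[
  I(R) \;=\; I_e \,\exp\!\left\{ -b_n \left[ \left( \frac{R}{R_e} \right)^{1/n} - 1 \right] \right\},
  \qquad b_n \approx 2n - \tfrac{1}{3} \quad (n \gtrsim 1).
\]
```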
Comparison of Monte Carlo and analytical dose computations for intensity modulated proton therapy
NASA Astrophysics Data System (ADS)
Yepes, Pablo; Adair, Antony; Grosshans, David; Mirkovic, Dragan; Poenisch, Falk; Titt, Uwe; Wang, Qianxia; Mohan, Radhe
2018-02-01
The aim of this study was to evaluate the effect of approximations in the clinical analytical calculations performed by a treatment planning system (TPS) on dosimetric indices in intensity modulated proton therapy. TPS-calculated dose distributions were compared with dose distributions estimated by Monte Carlo (MC) simulations, calculated with the fast dose calculator (FDC), a system previously benchmarked against full MC. This study analyzed a total of 525 patients for four treatment sites (brain, head-and-neck, thorax and prostate). Dosimetric indices (D02, D05, D20, D50, D95, D98, EUD and Mean Dose) and a gamma-index analysis were utilized to evaluate the differences. The gamma-index passing rates for a 3%/3 mm criterion, for voxels with a dose larger than 10% of the maximum dose, had a median larger than 98% for all sites. The median difference in all dosimetric indices for target volumes was less than 2% in all cases; however, differences for target volumes as large as 10% were found for 2% of the thoracic patients. For organs at risk (OARs), the median absolute dose difference was smaller than 2 Gy for all indices and cohorts, but absolute dose differences as large as 10 Gy were found for some small-volume organs in brain and head-and-neck patients. This analysis concludes that, for a fraction of the patients studied, the TPS may overestimate the dose in the target by as much as 10%, while for some OARs the dose could be underestimated by as much as 10 Gy. Monte Carlo dose calculations may therefore be needed to ensure more accurate dose computations, to improve target coverage and sparing of OARs in proton therapy.
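A simplified one-dimensional gamma-index sketch (3%/3 mm global criterion, 10% dose threshold) to illustrate the metric used above; clinical gamma analysis is performed in 3D with interpolation, so this is only a toy with hypothetical dose profiles.

```python
# Minimal sketch: 1-D global gamma-index passing rate between a reference and an evaluated
# dose profile, using a 3%/3 mm criterion and a 10%-of-maximum dose threshold.
import numpy as np

def gamma_passing_rate(dose_ref, dose_eval, spacing_mm, dose_crit=0.03, dist_crit_mm=3.0):
    """Fraction of evaluated reference points with gamma <= 1."""
    d_max = dose_ref.max()
    pos = np.arange(len(dose_ref)) * spacing_mm
    gammas = []
    for i, d_r in enumerate(dose_ref):
        if d_r < 0.1 * d_max:                              # skip voxels below the 10% threshold
            continue
        dd = (dose_eval - d_r) / (dose_crit * d_max)       # global dose-difference term
        dta = (pos - pos[i]) / dist_crit_mm                # distance-to-agreement term
        gammas.append(np.sqrt(dd**2 + dta**2).min())
    return np.mean(np.array(gammas) <= 1.0)

x = np.linspace(0, 60, 121)                  # 0.5 mm spacing, hypothetical dose profiles
ref = np.exp(-((x - 30) / 10)**2)
ev = np.exp(-((x - 30.8) / 10)**2) * 1.01    # small spatial shift and scaling error
print(f"passing rate: {gamma_passing_rate(ref, ev, 0.5):.3f}")
```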
Estimating sales and sales market share from sales rank data for consumer appliances
NASA Astrophysics Data System (ADS)
Touzani, Samir; Van Buskirk, Robert
2016-06-01
Our motivation in this work is to find an adequate probability distribution to fit sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution and specifically the truncated version are well suited for this purpose. We demonstrate that using sales proxies derived from a calibrated truncated log-normal distribution function can be used to produce realistic estimates of market average product prices, and product attributes. We show that the market averages calculated with the sales proxies derived from the calibrated, truncated log-normal distribution provide better market average estimates than sales proxies estimated with simpler distribution functions.
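A minimal sketch of the rank-to-volume idea: if sales volumes across products follow a truncated log-normal, a sales rank maps to a quantile, which maps back to a sales-volume proxy. The category size, distribution parameters, and truncation point are assumed calibration constants, not values from the paper.

```python
# Minimal sketch: map a product's sales rank to an estimated sales volume under an
# upper-truncated log-normal model of category-wide sales volumes.
import numpy as np
from scipy.stats import lognorm

n_products = 5_000                 # assumed size of the product category
sigma, median_sales = 1.2, 150.0   # assumed (calibrated) log-normal parameters
upper_trunc = 50_000.0             # assumed truncation on the largest sellers

def sales_proxy(rank: int) -> float:
    """Map a sales rank (1 = best seller) to an estimated sales volume."""
    q = 1.0 - (rank - 0.5) / n_products          # rank -> survival quantile
    dist = lognorm(s=sigma, scale=median_sales)
    q_cap = dist.cdf(upper_trunc)                # renormalize for the upper truncation
    return dist.ppf(q * q_cap)

for r in (1, 10, 100, 1000):
    print(r, round(sales_proxy(r), 1))
```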
Shanechi, Maryam M.; Williams, Ziv M.; Wornell, Gregory W.; Hu, Rollin C.; Powers, Marissa; Brown, Emery N.
2013-01-01
Real-time brain-machine interfaces (BMI) have focused on either estimating the continuous movement trajectory or target intent. However, natural movement often incorporates both. Additionally, BMIs can be modeled as a feedback control system in which the subject modulates the neural activity to move the prosthetic device towards a desired target while receiving real-time sensory feedback of the state of the movement. We develop a novel real-time BMI using an optimal feedback control design that jointly estimates the movement target and trajectory of monkeys in two stages. First, the target is decoded from neural spiking activity before movement initiation. Second, the trajectory is decoded by combining the decoded target with the peri-movement spiking activity using an optimal feedback control design. This design exploits a recursive Bayesian decoder that uses an optimal feedback control model of the sensorimotor system to take into account the intended target location and the sensory feedback in its trajectory estimation from spiking activity. The real-time BMI processes the spiking activity directly using point process modeling. We implement the BMI in experiments consisting of an instructed-delay center-out task in which monkeys are presented with a target location on the screen during a delay period and then have to move a cursor to it without touching the incorrect targets. We show that the two-stage BMI performs more accurately than either stage alone. Correct target prediction can compensate for inaccurate trajectory estimation and vice versa. The optimal feedback control design also results in trajectories that are smoother and have lower estimation error. The two-stage decoder also performs better than linear regression approaches in offline cross-validation analyses. Our results demonstrate the advantage of a BMI design that jointly estimates the target and trajectory of movement and more closely mimics the sensorimotor control system.
Estimating the Turn-around Radii of Six Isolated Galaxy Groups in the Local Universe
NASA Astrophysics Data System (ADS)
Lee, Jounghun
2018-03-01
Estimates of the turn-around radii of six isolated galaxy groups in the nearby universe are presented. From the Tenth Data Release of the Sloan Digital Sky Survey, we first select those isolated galaxy groups at redshifts z ≤ 0.05 in the mass range [0.3–1] × 10^14 h^-1 M_⊙ whose nearest-neighbor groups are located at distances larger than 15 times their virial radii. Then, we search for a gravitationally interacting web-like structure around each isolated group, which appears as an inclined streak pattern in the anisotropic spatial distribution of the neighboring field galaxies. Out of 59 isolated groups, only seven are found to possess such web-like structures in their neighbor zones, but one of them turns out to be NGC 5353/4, whose turn-around radius was already measured in a previous work and was thus excluded from our analysis. Applying the Turn-around Radius Estimator algorithm devised by Lee et al. to the identified web-like structures of the remaining six target groups, we determine their turn-around radii and show that three out of the six targets have larger turn-around radii than the spherical bound limit predicted by Planck cosmology. We discuss possible sources of the apparent violations of the three groups, including the underestimated spherical bound limit due to the approximation of the turn-around mass by the virial mass.
Muon polarization in the MEG experiment: predictions and measurements
Baldini, A. M.; Bao, Y.; Baracchini, E.; ...
2016-04-22
The MEG experiment makes use of one of the world's most intense low energy muon beams, in order to search for the lepton flavour violating process μ+ → e+γ. We determined the residual beam polarization at the thin stopping target, by measuring the asymmetry of the angular distribution of Michel decay positrons as a function of energy. The initial muon beam polarization at the production is predicted to be P_μ = -1 by the Standard Model (SM) with massless neutrinos. We estimated our residual muon polarization to be P_μ = -0.86 ± 0.02 (stat) +0.05/-0.06 (syst) at the stopping target, which is consistent with the SM predictions when the depolarizing effects occurring during the muon production, propagation and moderation in the target are taken into account. The knowledge of beam polarization is of fundamental importance in order to model the background of our μ+ → e+γ search induced by the muon radiative decay: μ+ → e+ ν̄_μ ν_e γ.